

Generative Artificial Intelligence

As with all sources of information, the outputs of Generative AI should be evaluated for accuracy. While hallucinations and bias are considerable problems in Generative Artificial Intelligence, fact-checking and using the SIFT method can reduce the risk of believing and passing on misinformation.

Generative AI can hallucinate, meaning it can fabricate events, stories, sources, or general nonsense and present them as factual. If you were to ask ChatGPT to write an essay and include sources for the information presented, those sources may not even exist! Here are a few notable examples:

  • In May 2024, Google's AI Overviews generated outputs that included telling users to use non-toxic glue to keep the cheese from sliding off their pizza and to eat rocks daily. These errors resulted from Generative AI being unable to distinguish satire, humor, and forum posts from well-researched articles. So just as there are websites that promote misinformation, disinformation, or malinformation, Generative AI outputs can present false information as true.
  • In February 2023, Kevin Roose, a writer for the New York Times, wrote "Bing's AI Chat: I Want to Be Alive," in which he describes a conversation with an early version of CoPilot during which the chatbot tried to convince him to leave his wife. A copy of the transcript is included in the article.
  • In June 2023, two New York attorneys were sanctioned for filing a brief which included six hallucinated cases. The lawyers admitted to using ChatGPT to write the brief.

There is inherent bias in the data that Generative AI LLMs are trained on. The data that goes in determines the data that comes out. When Microsoft CoPilot was asked to produce an image of a West Point cadet, this is the image that was returned:

Next, I asked CoPilot to produce images of West Point cadets in a few different artistic styles.

So far, DALL-E, the image generator used by CoPilot, has produced only images of white, male cadets. This output demonstrates that in the data the image generator was trained on, the majority of the images it identified as cadets were of white males, even though the Corps is a diverse group of individuals.

Most of us were taught to look for certain markers of credibility, but they are no longer a sure-fire way to determine it. Superficial markers of credibility, such as professional website design, authoritative language, the presence of citations, or a reputable publisher's name, can be misleading when determining the accuracy of information. While these elements may create an initial impression of trustworthiness, they don't guarantee the content's reliability or factual correctness. Sleek designs can mask unreliable information, authoritative language might come from biased or unqualified sources, and citations could be irrelevant or misinterpreted.

By encouraging the consultation of multiple sources to verify claims, lateral reading can help identify inaccuracies or biases in the original content. This approach provides broader context and allows for a more comprehensive understanding of the topic, while also enabling readers to assess the credibility of the original source by examining what other reputable sources say about it. By comparing various perspectives, lateral reading helps detect potential biases and promotes the use of fact-checking websites and trusted digital sources to ensure the reliability of information. Ultimately, this method leads to a more thorough and accurate assessment of content, particularly in the digital age. See the chart below on how to lateral read.
