ChatGPT: Clever, but not infallible

As I explored the capabilities of ChatGPT, I put it to the test by tasking it with generating an annotated bibliography entry for a well-known work, Paulo Freire’s “Pedagogy of the Oppressed”. To my surprise, it delivered a satisfactory entry in just 20 seconds. However, I decided to challenge ChatGPT further by omitting the author’s name when requesting an entry for a lesser-known work. The results were intriguing, and I couldn’t help but wonder about its limitations.

Playing in ChatGPT

This called for a test of ChatGPT’s capabilities. I used an article that I co-wrote with a colleague in 2017, “The emerging technology collection at a university library: Supporting experiential learning in the curriculum.” I intentionally left out the authors’ names and asked ChatGPT to create an annotated bibliography entry for the article. I expected some inaccuracies, but the results were surprising, and the output was unsettling.

I expected that ChatGPT would make an error in naming the authors of the article, but I was not prepared for the outcome. To my surprise, the model attributed the article to our newly hired Associate University Librarian, even though their name was not mentioned in the prompt. Additionally, the journal, volume/issue, pagination, and URL were all fabricated. I shared the story with the AUL, and we were both fascinated by this outcome.

I was not expecting the AI to generate the name of one of my colleagues.

After seeing the experiment’s results, we began to question the model’s ability to recognize and correctly attribute authors to a given work. We asked ourselves, “Can the AI be trained to learn the authors’ names?”

Back to the lab again (yes, this is an Eminem reference)

Starting a fresh session, I used the same prompt as before. This time, however, the output credited John Smith as the author, and the generic name only added to the confusion. To clarify, I informed ChatGPT that the author was incorrect and supplied the correct names. As a result, the output from the first prompt was corrected with the proper author information.

Telling ChatGPT the correct authors prompts it to correct the output.

I asked ChatGPT to retain the authors of the article, and it confidently assured me that it would remember the information in future sessions. The level of assurance in ChatGPT’s responses is impressive.

The confidence in ChatGPT’s responses really makes it feel like it’s learning.

I launched a new chat session and repeated the first prompt, half expecting the AI to have retained the information I provided previously. Unfortunately, it had not, causing me to question its ability to learn in this context. I also realized that the AI likely cannot actually “read” or fully understand the article’s content, but rather matches titles and authors based on patterns and earlier inputs within a session.
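This behavior makes more sense once you consider how the underlying chat API works: each request is stateless, and the model only “remembers” what is included in the messages sent with that request. Here is a minimal sketch, assuming the OpenAI Python client; the model name and the message contents are illustrative placeholders, not the exact prompts I used:

```python
# Minimal sketch of chat statelessness, assuming the OpenAI Python client.
# Model name and message contents are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# "Session" 1: the correction travels inside the messages we send.
session_one = [
    {"role": "user",
     "content": "Create an annotated bibliography entry for 'The emerging "
                "technology collection at a university library'."},
    {"role": "assistant",
     "content": "(the model's reply, with a hallucinated author)"},
    {"role": "user",
     "content": "The author is incorrect; the correct authors are ..."},
]
reply = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=session_one,
)
print(reply.choices[0].message.content)  # reflects the correction

# "Session" 2: a fresh request carries none of that history, so the
# correction is "forgotten" unless we resend it ourselves.
fresh = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user",
               "content": "Create an annotated bibliography entry for 'The "
                          "emerging technology collection at a university "
                          "library'."}],
)
print(fresh.choices[0].message.content)  # may hallucinate authors again
```

In other words, any “memory” lives in the conversation history the client resends with each request, not in the model itself, so a new session starts from scratch.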

Conclusion

The revelation that my initial prompt had mistakenly attributed the article I wrote to a recently hired colleague at our institution was intriguing. Subsequent attempts generated more commonplace names like John Smith, Jane Smith, and John Doe, which aligned better with my initial expectations. Despite its limitations, ChatGPT proved to be a powerful AI writing tool, and experimenting with it was a pleasant way to cap off my week.

Header Image by Gerd Altmann from Pixabay 

2 thoughts on “ChatGPT: Clever, but not infallible”

  1. I asked ChatGPT 3.5 to conduct a literature search on [topic] and produce an annotated bibliography in APA style. I was aware that the articles would be few, if any, and that the topic may have been limited to the future directions or next steps section of some articles. ChatGPT treated it like a creative writing assignment, generating what a bibliography WOULD look like on this subject, complete with fake articles: real authors who research the topic, plausible on-topic titles, journals that exist, but articles that do not. I would rather have been told that there are no articles on this exact topic.

    • Hi Kate,

      The confidence with which ChatGPT generates responses is impressive and can sometimes be misread by users as authoritative or absolutely true. It’s important to realize that ChatGPT does not possess knowledge or understanding; it simply generates responses based on patterns in its training data. Transparency about the nature of the generated information matters here: users might appreciate being told when there are no articles on a specific topic, rather than receiving a made-up bibliography.

      That said, we’ve noticed that ChatGPT now generates disclaimers such as this one (tested the morning of April 18): “These titles are hypothetical and meant to guide you in searching for relevant articles in databases like JSTOR, ERIC, or Google Scholar. For access to actual studies, you would typically look up these topics in academic journals related to library science, education, and social sciences.”

      Fascinating stuff!

      -Ryan

