ChatGPT: Clever, but not infallible

As I explored the capabilities of ChatGPT, I put it to the test by tasking it with generating an annotated bibliography entry for a well-known work, Paulo Freire’s “Pedagogy of the Oppressed”. To my surprise, it delivered a satisfactory entry in just 20 seconds. However, I decided to challenge ChatGPT further by omitting the author’s name when requesting an entry for a lesser-known work. The results were intriguing, and I couldn’t help but wonder about its limitations. 

Playing with ChatGPT

This called for a test of ChatGPT’s capabilities. I used an article that I co-wrote with a colleague in 2017, “The emerging technology collection at a university library: Supporting experiential learning in the curriculum.” I intentionally left out the authors’ names and asked ChatGPT to create an annotated bibliography entry for the article. While I expected some inaccuracies, the results were surprising, and the output was unsettling. 

I expected that ChatGPT would make an error in naming the authors of the article, but I was not prepared for the outcome. To my surprise, the model attributed the article to our newly hired Associate University Librarian, even though their name was not mentioned in the prompt. Additionally, the journal, volume/issue, pagination, and URL were all fabricated. I shared the story with the AUL, and we were both fascinated by this outcome. 

I was not expecting the AI to generate the name of one of my colleagues.

After seeing the experiment’s results, we started to question the model’s potential to recognize and correctly attribute authors to a given piece of work. We asked ourselves, “Can the AI be trained to learn the authors’ names?” 

Back to the lab again (yes, this is an Eminem reference)

Starting a fresh session, I used the same prompt as before. This time, however, the output credited John Smith as the author, and the generic name only compounded the confusion. To clarify, I told ChatGPT that the author was incorrect and supplied the correct names. As a result, the entry was regenerated with the proper author information. 

Telling ChatGPT the correct authors prompts it to correct the output.

I asked ChatGPT to retain the authors of the article and it confidently assured me that it would remember the information in the future. The level of assurance and accuracy in ChatGPT’s responses is impressive.

The confidence in ChatGPT’s responses really makes it feel like it’s learning.

I launched a new chat session and repeated the first prompt, half expecting the AI to have retained the information I provided previously. Unfortunately, it did not, causing me to question its ability to learn in this context. I also realized that the AI likely cannot actually “read” or fully understand the article’s content, but rather matches the title and authors based on earlier inputs. 
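The “forgetting” I observed can be sketched in code: chat models are stateless per request, and the only memory a model has is the message history the client sends back each time, so a brand-new session starts from scratch. The function, names, and replies below are hypothetical stand-ins for illustration, not ChatGPT’s real API or behavior.

```python
# A minimal sketch of why a fresh session "forgets": the model can only
# use the conversation history it is handed with each request. All names
# and replies here are hypothetical stand-ins.

def fake_chat(history: list[dict]) -> str:
    """Stand-in chat model: answers only from the supplied history."""
    supplied = {m["content"] for m in history if m["role"] == "user"}
    if "The authors are Jane Doe and John Roe." in supplied:
        return "Authors: Jane Doe and John Roe."
    # With no correction in the history, the model falls back to a
    # plausible-sounding fabrication.
    return "Authors: John Smith."

# Session 1: the user's correction is part of this session's history.
session_1 = [{"role": "user", "content": "The authors are Jane Doe and John Roe."}]
print(fake_chat(session_1))  # -> Authors: Jane Doe and John Roe.

# Session 2: a new session starts with an empty history, so the earlier
# correction never reaches the model.
session_2: list[dict] = []
print(fake_chat(session_2))  # -> Authors: John Smith.
```

Within one session the correction “sticks” only because it keeps being resent as part of the history; once the history is gone, so is the correction.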


The revelation that my initial prompt had produced an entry attributing the article I co-wrote to a recently hired colleague at our institution was intriguing. Subsequent attempts generated more commonplace names like John Smith, Jane Smith, and John Doe, which aligned better with my initial expectations. Despite its limitations, using ChatGPT as a powerful AI writing program proved to be an enjoyable way to cap off my week.

Header Image by Gerd Altmann from Pixabay 
