I have been spending some time exploring ChatGPT, the new AI-powered conversational chatbot that is attracting a lot of attention for the range and quality of its output. ChatGPT, by OpenAI, was launched at the end of November. It can do things as diverse as writing letters and e-mails, short answers, long articles (even whole scripts for TV shows), code, and code explanations. What’s more, it produces those answers in a matter of seconds, in a very human-like way, and with excellent context awareness.
I have been trying different tasks on ChatGPT, from asking it for gift ideas for 15-year-old boys, to drafting an e-mail reaching out to a student who failed to submit an assignment, to writing a lecture plan and various assignment responses. It did fairly well on the first two types of task, OKish on the third one, and… it crashed on the fourth one.
Let me show you.
The first topic that I tried was “recent research regarding customer perceptions of chatbots”. It produced a very readable narrative, but it wouldn’t work for an assignment because I would expect students to include references supporting the claims made.
So, I tried again with the instruction “recent research regarding customer perceptions of chatbots with academic references”.
This answer wouldn’t do, either, because it is more of an annotated bibliography than an overview of current research on the topic.
Still, I thought that this could be a great starting point for a student: they could check the references listed in the second version of the response, and then produce something along the lines of the first version. And… that is where I came across something bizarre.
While the journals mentioned in the second response existed, and the volume number and year of publication matched, the actual papers cited did not exist. For example, the first reference listed was: “Javalgi, R. G., White, D. S., & Ali, M. A. (2018). Customer adoption of chatbots in the service sector. Journal of Service Marketing, 32(1), 47-57.”
The Journal of Service Marketing exists. And volume 32, issue 1 was published in 2018. However, the paper “Customer adoption of chatbots in the service sector” does not exist! The exact same thing happened with every other reference listed in ChatGPT’s response.
I tried a few more topics, and the same thing happened every time: the generic reply was well written but lacked substance, while the referenced reply was convincing but a complete fabrication.
I am still trying to grapple with the consequences of ChatGPT for teaching, learning and assessment, as well as for research. But some pointers are beginning to emerge:
- ChatGPT is brilliant at coming up with filler text.
- It also displays a certain level of common sense and contextual awareness, which helps “fool” a reader.
- However, we should not trust it to provide correct information about a topic.
- And, when in doubt about whether a text was written by a person or by ChatGPT, it is worth checking the sources listed.
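Checking the sources listed can even be semi-automated. As a minimal sketch, the snippet below queries CrossRef’s public REST API for a paper title and reports whether any indexed work matches it. The helper names (`crossref_query_url`, `reference_exists`) and the simple substring-matching logic are my own assumptions for illustration; a serious check would need fuzzier title matching and should also compare authors, journal, and year.

```python
import json
from urllib.parse import quote
from urllib.request import urlopen


def crossref_query_url(title: str, rows: int = 3) -> str:
    """Build a CrossRef 'works' query URL for a given paper title."""
    return ("https://api.crossref.org/works?query.bibliographic="
            + quote(title) + f"&rows={rows}")


def reference_exists(title: str) -> bool:
    """Return True if CrossRef lists a work whose title contains the query.

    Hypothetical helper: naive substring matching will miss minor
    title variations, so treat a False result as 'worth a closer look',
    not proof of fabrication.
    """
    with urlopen(crossref_query_url(title)) as resp:
        data = json.load(resp)
    wanted = title.casefold()
    return any(
        wanted in candidate.casefold()
        for item in data["message"]["items"]
        for candidate in item.get("title", [])
    )
```

For the fabricated reference above, a call like `reference_exists("Customer adoption of chatbots in the service sector")` would come back empty-handed, which is exactly the kind of red flag worth noticing before trusting a ChatGPT-supplied citation.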
What are your experiences of using ChatGPT?