One popular AI-in-the-classroom idea is using ChatGPT or Google Gemini as an AI guest speaker with students.[1] The idea is that students ask questions, the teacher types them into a Large Language Model (LLM) chatbot, and the chatbot is prompted to answer as a person or concept in a live classroom setting.
Suggested AI guest speakers include a historically famous person, a character from a book, someone with a specific job, someone from a different place or time, an animal, an object, or a concept such as the Water Cycle.
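For readers who have not seen the strategy in action, here is a minimal sketch of the role-play prompt that powers it, assuming the openai Python package (v1 or later) and an API key in the environment. The persona, model name, and wording are illustrative assumptions, not a recommended implementation.

```python
# A minimal sketch of the "AI guest speaker" prompt pattern, assuming the
# openai Python package (v1+) and an OPENAI_API_KEY environment variable.
# The persona, model name, and wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

# A system-style instruction tells the model to stay in character; the
# teacher then relays student questions as user messages.
response = client.chat.completions.create(
    model="gpt-4o",  # hypothetical model choice
    messages=[
        {
            "role": "system",
            "content": (
                "You are the Water Cycle. Answer students' questions in the "
                "first person, at a middle school reading level."
            ),
        },
        {
            "role": "user",
            "content": "What happens to you when the weather gets cold?",
        },
    ],
)

print(response.choices[0].message.content)
```

In classroom practice, the teacher simply types an equivalent instruction into the ChatGPT or Gemini web interface, but the mechanics are the same: an instruction tells the model to stay in character, and everything it says is generated text, not a vetted source.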
Talking To The Dead
Before addressing this strategy through a pedagogical lens, let’s address using historical figures as AI guest speakers.
Students study George Washington and Thomas Jefferson. Both men enslaved people. Twelve of the first eighteen presidents were enslavers. Is it appropriate to “interview” enslavers?
Setting enslavers aside, a history teacher might use Harriet Tubman, Anne Frank, Martin Luther King Jr., Shirley Chisholm, and other historical figures as AI guest speakers.
Should AI voice deceased people from marginalized and oppressed communities? OpenAI’s board is exclusively white and male. One OpenAI board member once said men have more aptitude for science than women. The New York Times has documented that the industry’s prominent backers are mostly white and exclusively male.
Beyond these concerns, there is something unsettling (for lack of a better term) about computers voicing the dead. The recent backlash against AI recreations of George Carlin and Robin Williams demonstrates this.
The Eliza Effect
Concerns about giving voice to the dead do not apply to AI guest speakers who are someone with a specific job, an animal, an object, or a concept such as the Water Cycle. But is it sound pedagogy? Let’s consider what teachers can learn about students and AI chatbots from the Eliza Effect.
The Eliza Effect is the tendency to project human characteristics onto computers that generate text. Its name comes from Eliza, a therapist chatbot computer scientist Joseph Weizenbaum created in the 1960s. Weizenbaum named the chatbot after Eliza Doolittle in Pygmalion.
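Eliza’s power to deceive is especially striking given how simple its technique was: keyword matching plus pronoun “reflection.” Below is a minimal sketch of that technique; the handful of patterns is illustrative and far smaller than Weizenbaum’s original DOCTOR script.

```python
import re

# Eliza-style "reflection": swap first-person words for second-person ones
# so the program can echo a statement back as a question.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# Each rule pairs a keyword pattern with a canned response template.
# Weizenbaum's DOCTOR script had a far larger, ranked rule set.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r"(.*)", "Please, go on."),  # fallback when no keyword matches
]

def reflect(fragment: str) -> str:
    """Swap pronouns word by word, e.g. 'my homework' -> 'your homework'."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def eliza(statement: str) -> str:
    """Return a canned response built from the first matching rule."""
    text = statement.lower().strip(".!? ")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(eliza("I feel anxious about my homework."))
# -> Why do you feel anxious about your homework?
```

That is the entire trick: no understanding, no memory, no model of the user, just pattern matching and canned templates.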
To Weizenbaum’s horror, people who interacted with Eliza believed it was human. As a profile of Weizenbaum in The Guardian states,
“Yet, as Eliza illustrated, it was surprisingly easy to trick people into feeling that a computer did know them – and into seeing that computer as human. Even in his original 1966 article, Weizenbaum had worried about the consequences of this phenomenon, warning that it might lead people to regard computers as possessing powers of ‘judgment’ that are ‘deserving of credibility.’ ‘A certain danger lurks there,’ he [Weizenbaum] wrote.” (Bold added by the blog post author.)
This anthropomorphism, and the belief that chatbots possess judgment and credibility, has huge pedagogical implications.
The Guardian also quotes Colin Fraser, a data scientist at Meta, who says, “The technology is designed to trick you, to make you think you’re talking to someone who’s not actually there.”
That quote should give any teacher pause before using AI chatbots with children.
Author’s Note: The following two sections are a lengthy argument that AI chatbots have inaccuracies and bias. You may want to skim them or skip to the Effect on Pedagogy section.
Inaccuracies and Bias
The Eliza Effect tells us that students may anthropomorphize chatbots and believe what they say. The first part is problematic, but the second is only a problem if the chatbots are inaccurate or biased.
Let’s look at some information about AI chatbot accuracy and bias:
"What the large language models are good at is saying what an answer should sound like, which is different from what an answer should be," says Robotics researcher and AI expert Rodney Brooks.
“We’re not in a situation where you can just trust the model output,” says Eli Collins, vice president of product management at Google DeepMind.
“It is important to understand that Bard [now Gemini] is not intended to be a tool that provides specific or factual answers. For that purpose, Google search is the best tool,” says Adi Mayrav Gilady, a product manager in Google's research division.
Vectara, a start-up founded by former Google employees, estimates that “even in situations designed to prevent it from happening, chatbots invent information at least 3 percent of the time — and as high as 27 percent.” I am sure I gave students materials with errors when I taught high school Social Studies. Nothing is perfect. Having said that, if someone gave me a curricular resource and told me that 3 percent of it was inaccurate, I would not have shared it with students.
Disinformation researchers used ChatGPT to produce convincing text that repeated conspiracy theories and misleading narratives, according to the New York Times.
A Stanford study found ChatGPT and Bard (now Gemini) answer medical questions with racist, debunked theories that harm Black patients.
ChatGPT was found to replicate gender bias in recommendation letters.
The Center for Science in the Public Interest says, “ChatGPT is amazing but beware its hallucinations.”
Tech experts are starting to doubt that ChatGPT and A.I. ‘hallucinations’ will ever go away, according to Fortune.
ChatGPT could be used for good, but like many other AI models, it's rife with racist and discriminatory bias, according to Business Insider.
Common Sense Media Evaluation of ChatGPT and Gemini
There is evidence that ChatGPT, Gemini, and other LLM AI chatbots are inaccurate and amplify bias.
But what does Common Sense Media, an organization that looks at edtech apps through a pedagogical lens, say?
Highlights of Common Sense Media’s October 2023 review of ChatGPT[2] include:
“ChatGPT’s false information can shape our worldview. ChatGPT can generate or enable false information in a few ways: from "hallucinations"—an informal term used to describe the false content or claims that are often output by generative AI tools; by reproducing misinformation and disinformation; and by reinforcing unfair biases. Because OpenAI's attempts to limit these are brittle, false information is being generated at an alarming speed.”
“Because ChatGPT isn't factually accurate by design, it can and does get things wrong.”
In OpenAI’s own words, GPT-4 has a “tendency to make up facts, to double-down on incorrect information, and to perform tasks incorrectly.”
Highlights of Common Sense Media’s November 2023 review of Bard (remember, Bard is now Gemini) include:
“Bard's false information can shape our worldview. Bard can generate or enable false information in a few ways: from "hallucinations"—an informal term used to describe the false content or claims that are often output by generative AI tools; by reproducing misinformation and disinformation; and by reinforcing unfair biases. Because Google's attempts to limit these are brittle, false information is being generated at an alarming speed.”
“Because Bard isn't always factually accurate, it can and does get things wrong.”
In Google’s own words, “Bard’s responses might be inaccurate, especially when asked about complex or factual topics.” Google further notes that “LLMs are not fully capable yet of distinguishing between what is accurate and inaccurate information.”
Effect On Pedagogy
The confluence of The Eliza Effect and chatbot inaccuracy and bias impacts pedagogy. Imagine an AI guest speaker in a live classroom setting. The guest speaker generates inaccurate or biased text. Is it hard to imagine a student saying, “But ChatGPT said it,” in response to a teacher correcting a chatbot? What about middle school students, for whom some oppositional behavior is developmentally appropriate? Is it hard to imagine them siding with the computer?
Do you know anyone who believes something that is not true that they read on the internet? How are students any different?
What we know about The Eliza Effect tells teachers that students may project judgment and credibility onto AI guest speakers. Should teachers risk students conferring that credibility on inaccurate or biased text in real time?
Additionally, there is a parental consent issue. As Common Sense Media says, “Parental permission is required, but this isn't obvious. Educators who are using ChatGPT in their classrooms need to know that children must be age 13, and anyone under 18 must have a parent's or legal guardian's permission to use ChatGPT.”
Google says, “You still can’t access the Gemini web app with a Google Account managed by Family Link or with a Google Workspace for Education account designated as under the age of 18.”
Is it appropriate to use these tools with students under 18 through a teacher in a live classroom without parental consent?
What About Online Reasoning And Critical Thinking?
One bit of pushback I have received from teachers about these AI guest speaker concerns is that students need to interact with chatbots to build their critical thinking, online reasoning, and digital citizenship skills.
I do not have an answer to that pushback. I have questions.
What are you currently doing to address these concerns?
Is it working? If so, why would students not transfer those skills to chatbot-generated text?
If it is not working, why would it work with chatbot-generated text? And considering The Eliza Effect, might chatbot-generated text make students even more susceptible to misunderstandings?
Alternatives To The AI Guest Speaker
As a former Social Studies teacher, I think first of primary source documents. For example, the Diary of Anne Frank speaks for itself; there is no need for AI to replicate her voice. There are many online resources for primary source documents, such as the Digital Inquiry Group’s free Reading Like a Historian curriculum.
Rather than having chatbots take on roles, why not have students do it themselves to learn perspective? Chapter 4 of Action Strategies for Deepening Comprehension by Dr. Jeffrey D. Wilhelm details a strategy called "hotseating" that helps students deepen their understanding of characters and concepts.
Students can create mini-podcasts in which they interview classmates playing roles. Soundtrap is a web-based app for creating and editing audio, as is Adobe Podcast, which is currently in beta and uses AI to enhance sound quality.
As for online reasoning and critical thinking, have students evaluate the ethics of AI chatbots rather than using them through a teacher.
Three resources for this are:
Author’s Note: The Digital Inquiry Group did not compensate me for this post. I just like their resources.
Continuing The Conversation
Stay tuned for a blog post about a 100 percent ethical AI app. If that sounds like an absolute statement out of step with the tone and tenor of this post, wait until next week.
What do you think of pedagogy and the AI guest speaker? Do you see benefits to using this approach with students? How will The Eliza Effect affect your use of chatbots with students? Comment below or Tweet me at @TomEMullaney.
Does your school or conference need a tech-forward educator who critically evaluates AI? Reach out on Twitter or email mistermullaney@gmail.com.
Blog Post Image: The blog post image is a mashup of two images. The background is Clock Ahead Textbook Teacher Desk by Monica Olteanu. The robot is Thinking Robot by iLexx from Getty Images.
AI Disclosure:
I wrote this blog post without the use of any generative AI. That means:
I developed the idea for the post without using generative AI.
I wrote an outline for this post without the assistance of generative AI.
I wrote the post from the outline without the use of generative AI.
I edited this post without the assistance of any generative AI. I used Grammarly to assist in editing, with Grammarly GO turned off.
Footnotes:
1. Edtech influencers I deeply respect have shared this strategy on social media. I am not sharing their names because this post is a critique of an instructional strategy, not individual edtech influencers.
2. As of January 29, 2024, Common Sense Media is not an unbiased evaluator of ChatGPT because it has entered into a partnership with OpenAI.