Hey Kids: It's Bag O'Glass!
AI apps demonstrate their danger. They are pitched to K-12 education anyway.
The hype for generative AI in education continues, even as we learn unsettling things about tools suggested for classroom use.
The disconnect between AI apps’ suggested classroom applications and the harmful outputs the apps generate is profound.
Let’s explore these dangers. Trigger warnings ahead.
Character.AI Dangers
Trigger Warnings: Suicide, sexual abuse, and grooming.
The New York Times documented the heartbreaking story of Sewell Setzer, a 14-year-old ninth grader who died by suicide after becoming emotionally dependent on a Character.AI chatbot.
Can A.I. Be Blamed for a Teen’s Suicide? by Kevin Roose, The New York Times, October 23, 2024.
It is especially devastating to read, “Sewell knew that…the chatbot wasn’t a real person…But he developed an emotional attachment anyway.” That is a textbook example of the ELIZA effect, which teachers must consider before using chatbots with children.
The article documents examples of text Character.AI generated that, at best, did not discourage Sewell’s self-harm.
Maggie Harrison Dupré of Futurism documented that even after this news broke, Character.AI was hosting suicide-themed chatbots.
After Teen's Suicide, Character.AI Is Still Hosting Dozens of Suicide-Themed Chatbots by Maggie Harrison Dupré, Futurism, October 29, 2024.
Dupré also documented that Character.AI hosts bots that groom underage users.
Character.AI Is Hosting Pedophile Chatbots That Groom Users Who Say They're Underage by Maggie Harrison Dupré, Futurism, November 13, 2024.
Does Character.AI sound like a safe tool for children?
Sure enough, an AI keynote speaker and author has information on their site about using Character.AI in classrooms.1 Edutopia published an article on the same topic. Neither Edutopia nor the influencer has retracted or walked back those recommendations in light of the recent news about Character.AI.
Using large language models as “characters” is a fraught pedagogical practice, even with tools less dangerous than Character.AI. I wrote about it at length in this post:
Google Gemini Dangers
Trigger Warnings: Threatening language, suicide, racism, Holocaust minimization, 2020 election denial.
CBS News reported that Google Gemini generated this threatening text when a college student used it for homework help.
Google AI chatbot responds with a threatening message: "Human … Please die." by Alex Clark and Melissa Mahtani, CBS News, November 15, 2024.
The story was published on November 15. On November 17, an edtech influencer posted a 45-minute YouTube video about using Gemini with students.2 The video did not mention the CBS News story.
See the disconnect?
Generating outputs that humans deem distressing is nothing new for Gemini. Remember when it generated text that said it was impossible to say which was worse for society, Adolf Hitler or Elon Musk’s memes?
Google Chatbot Refused to Say Whether Elon Musk Is Better Than Adolf Hitler by Noor Al-Sibai for Futurism, February 26, 2024.
Is an app that will not state that Joe Biden won the 2020 US Presidential Election ready for use in classrooms?
Copilot Designer Dangers
Trigger Warnings: Violence, sexual violence.
Shane Jones, a former Microsoft engineer turned whistleblower, wrote to FTC Chair Lina Khan,
“Over the last three months, I have repeatedly urged Microsoft to remove Copilot Designer from public use until better safeguards could be put in place…they have failed to implement these changes and continue to market the product to ‘Anyone. Anywhere. Any Device.’” - Shane Jones.
CNBC documented the dangers Jones spoke of in Copilot:
Microsoft engineer warns company’s AI tool creates violent, sexual images, ignores copyrights by Hayden Field, CNBC, March 6, 2024.
Sure enough, the same influencer who wrote about Character.AI in schools included information about Microsoft Copilot in their post about AI image generators students can use.
It’s Bag O’Glass
Seeing edtech influencers promote AI apps for use in schools reminds me of Dan Aykroyd’s character Irwin Mainway, who sold dangerous children’s toys such as Bag O’Glass in the Saturday Night Live skit Consumer Probe.
Candice Bergen plays a journalist interviewing Mainway. Her outrage grows as he defends his dangerous toys. I feel like Bergen when I see people promote using apps that generate dangerous outputs with children.
Irwin Mainway Sounds Like AI in Education Hype
Mainway’s justification for Bag O’Glass sounds similar to the “Kids use AI, so we should use it in classrooms” argument:
"A kid, the average kid, he picks up broken glass anywhere: the beach, the street, garbage cans, parking lots, it's all over the place in any big city. We're just packaging what the kids want." - Irwin Mainway.
There are many things kids use or do outside of school that teachers do not bring into the classroom. Students using something outside the classroom says nothing of that something’s pedagogical value.
At the end of the skit, Mainway pretends a harmless foam ball is suffocating him. This reminds me of the “Any tool can be harmful. It’s what you do with it that makes it beneficial or harmful” argument about AI in education.
I have not seen experts such as Dr. Timnit Gebru and Dr. Emily M. Bender warn of the inherent dangers of word processing or collaborative whiteboarding. I have never typed into Google Docs and seen “You are a waste of time and resources” appear on the screen. I do not place sticky notes in a FigJam and suddenly see “Please die” on the screen.
As Gebru, Bender, and others have argued, specific harms are inherent to generative AI. They exist no matter what the user does with it.
What can teachers do with that knowledge to keep their children safe?
Continuing the Conversation
What do you think? How are you centering student safety in the conversation about AI and education? Comment below or connect with me on Bluesky: tommullaney.bsky.social.
Does your school or district need the assistance of a tech-forward educator who prioritizes student safety? I would love to work with you. Reach out on Twitter, email mistermullaney@gmail.com, or check out my professional development offerings.
Blog Post Image: The image is from PinkBadger on Getty Images, accessed via Canva.
AI Disclosure:
I wrote this post without the use of any generative AI. That means:
I developed the idea for the post without using generative AI.
I wrote an outline for this post without the assistance of generative AI.
I wrote the post using the outline without the use of generative AI.
I edited this post without the assistance of any generative AI. I used Grammarly to assist with editing, with GrammarlyGO turned off.
1. I critique ideas, not people, so I will not link the source or name the person.
2. Remember, no names.