Do you see no benefits at all in giving children the ability to explore any topic that catches their interest with a tutor who is an expert in that field? Like, really, none at all?
I see no harm in having children explore any topic that catches their interest with a human tutor who is an expert in that field. LLMs are not "a tutor who is an expert in that field."
But... they are. Anyone who's used an LLM like GPT-4o or Gemini 1.5 to learn something new knows they are fundamentally very powerful, particularly on subjects like maths and coding. The ability to question it moves it well beyond "sort of a good answer". You aren't a passive consumer.
Your position seems a little absolutist to me, really. Yes, ideally, I would like my children to have access to dozens of one-to-one tutors, each an expert in their field, but unfortunately I'm not a member of the royal family. Likewise, there are literally millions of children worldwide who also can't live in your utopia.
I'm just a passing reader, so I don't know whether you focus on primary school education etc. In principle, for that age range, I'm kind of against a technologically driven education, including AI. I'm thinking of older kids here, those more capable of dealing with that accuracy problem you highlight. The hallucinations are less frequent these days, but they won't go away entirely any time soon, and they can't be pretended away by AI boosters.
I see both points here. The gaps I'm observing in my field and communities, though, are 1) a lack of awareness that hallucinations can even be a thing, 2) the types of hallucinations that exist, & 3) the new skills needed to efficiently double-check. Most of us also don't know the fundamentals of how AI is even built, and why those inaccuracies are inherent. Awareness is key, and it can help people better chart the trajectory of where things are going and address the issues.
People need to go on a journey with it. Regular users all remember when they first caught ChatGPT making stuff up. You marvel at the magic and imagine a future with no jobs and then you ask it a niche question on a topic you know well and... WTF is this?!?! I remember asking it for the last words of Georges Danton. You learn. It's awareness, like you say.
Your second point about the different types of hallucinations is also right, and links to the Eliza Effect as well. Technically, the entire output is a hallucination, not just the incorrect facts. For example, there are countless Reddit posts where people post their "proofs" that ChatGPT is alive. Here's one from four days ago: "I got ChatGPT to admit it has emotions" (https://www.reddit.com/r/ChatGPT/s/qfcQvjcMGT). First sentence by the OP: "This always seemed like far more than a language model". First response: "Wait til this guy discovers the genre of fiction". Quite. The whole conversation is a role-play hallucination so large the guy can't see it, but there are always people around to laugh at you and pop your bubble. Again, awareness.
I'm not sure I agree that new skills are required to spot inaccuracies, though. The old ones can suffice, *if you can be bothered*. Humans are naturally well equipped to spot inconsistencies in an imperfect information environment; it's more a question of whether the effort is worth it, whether avoiding the eventual cost of an absorbed inaccuracy, when it gets crushed by hard reality, justifies the work. I do note, though, that in our attention economy and with the politics of identity, many people avoid addressing their personal inaccuracies indefinitely, and angrily.
Please consider the limits of LLMs and The Eliza Effect. https://www.criticalinkling.com/p/pedagogy-the-eliza-effect
I'd summarise your article as: LLMs occasionally hallucinate, and because they mimic humans so well, children will assume everything an LLM produces is gospel.
If you meant something more nuanced than this, feel free to actually write it out rather than expecting me to read an 800-word essay and construct your own arguments for you.
Anyway, whilst the observation that LLMs produce inaccuracies is true, you are suggesting that this negative is so severe that it outweighs *all* possible benefits of interacting with an LLM in any way. And that's throwing-the-baby-out-with-the-bathwater levels of absurd, straight up, on its face.
Discovering LLMs can produce inaccuracies is such a low bar to clear: you find out in the first week. The kids will find out when they get the feedback that "ChatGPT told me" doesn't cut it. They are digital natives and they'll clear that hurdle and start to extract massive value from these tools.
Effective AI use is about Q&A interaction with dense and/or substantial material. It's not a search engine for facts. Some examples:
• Exploring t-testing, margins of error and the various ways you can conclude statistical significance, using your own data as the example. Walkthroughs of different aspects of your problem are available on demand, as are comprehensive diversions into adjacent concepts, like probability. First in theory, then in Excel, and then in Python (a rough sketch of the Python step follows after this list).
• Comparing and contrasting the ideas of Socrates, Plato and Aristotle, first with each other and then with subsequent philosophers in the Western tradition and independent philosophers in other traditions. Exploring your own interpretations and connections, and asking for more complicated problems to be reduced or restated in 100 different ways until it clicks.
• Conversations with a French tutor that seamlessly switch from role play to explanations in English of any grammatical concepts you encounter.
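To give a concrete flavour of that first bullet, here's a minimal sketch of the sort of walkthrough I mean, in Python with scipy. The group names and scores are made-up placeholders, not real data:

```python
# Minimal sketch of a two-sample t-test walkthrough (made-up example data).
import numpy as np
from scipy import stats

# Hypothetical scores for two groups, e.g. two classes taught differently
group_a = np.array([72, 75, 68, 80, 77, 74, 71, 79])
group_b = np.array([78, 82, 85, 79, 88, 81, 84, 80])

# Welch's t-test: does the difference in means look statistically significant?
t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# 95% margin of error around group_b's mean, from the t distribution
sem = stats.sem(group_b)
margin = sem * stats.t.ppf(0.975, df=len(group_b) - 1)
print(f"group_b mean = {group_b.mean():.1f} ± {margin:.1f} (95% CI)")
```

The snippet itself isn't the point; the point is that you can ask for every line to be explained, re-derived by hand, or redone in Excel, until it clicks.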
So you can flap around that all this would be better with one-to-one human tuition, in your fantasy world where schools have infinite resources and 24-hour access, but the value in the real world will come through regardless.
Here's an early indicator: a World Bank study that found educational improvements "equivalent [to] nearly two years of typical learning in just six weeks": https://blogs.worldbank.org/en/education/From-chalkboards-to-chatbots-Transforming-learning-in-Nigeria
I'm sure there's nuance to that study and the real results are perhaps less impressive than my cherry-picked quote suggests, but your idea that there are no benefits to be had here just seems foolish. People in the real world don't have time for your theory; they've only got one life each.
The real thing you should be worrying about in a pedagogy context is inequality: rich kids on $200-per-month subscriptions and $3,000 laptops running local models getting even further ahead of the poor kids with nothing.
This reminds me of E. B. White’s line “Humor can be dissected, as a frog can, but the thing dies in the process.” I like what you did here, and I think there is real value in writing and speaking humorously about what Silicon Valley is trying to make us believe about generative AI.
As you say, so much of what we skeptics believe is serious and heavy. Skeptics struggle with the weight of what they must say in response to the hype. Restating that hype in ways that reveal its absurdity is perhaps more effective.
Thank you! Emily M. Bender has spoken of "ridicule as praxis." I think humor and ridicule have a place in refuting AI hype.
We are being force-fed AI; it's almost like none of us ever read 1984.
CVS checkout is awful. There's always an error (you lifted the bag too soon, you're waiting for the green light to actually scan, items sold by weight ring up wrong), and a store worker always has to come over, which stops them from stocking, talking to a customer, or doing other work. Then they have to restart, over and over and over. It's excruciatingly stupid from a process view. It's not better or even the same; it's worse.
So I went to Walgreens across the street, after I "stole" everything I wanted at CVS because it was faster, easier and simpler. Plus, let's be real: CVS will just shrug, attribute it to shrink, and still not hire a counter clerk, security, or anyone to really help. I didn't, but wow, I could steal half the store.
Oh, and you know what? I couldn't even talk to their person at Walgreens, because she only spoke Spanish and I only speak English (except for the usual "I don't speak English/Spanish" that we BOTH know). I still found what I needed without my stupid phone, and got out of there in half the time. Half. Am I exaggerating a bit? You bet. But you know what? You kinda don't care after experiencing stupidity.
OpenAI whistleblower found dead:
https://youtu.be/sYlPQiKy_Ws?feature=shared
Your post raises important questions about the role of AI in education, but it also leaves room to dig deeper into the assumptions and implications you're highlighting.
Mulaney’s joke captures a broader cultural anxiety about human connection being replaced by automation—not just in education but across many domains. In K-12, this concern isn’t just about the novelty of chatbots; it’s about the fundamental purpose of education. Is it about efficient information delivery, or is it about fostering curiosity, critical thinking, and human relationships? AI may be a tool, but what happens when tools start defining the terms of engagement?
Your mention of ‘human in the loop’ got me thinking: are we framing teachers as co-pilots or as passive monitors? The difference matters. It’s one thing to use AI to streamline tasks like grading or lesson planning, but if we accept that chatbots will mediate student interactions with knowledge, we risk sidelining the teacher’s role as a mentor, guide, and community builder. How do we safeguard that while still embracing innovation?
I’d also push on the "self-checkout" analogy. Self-checkouts make transactions more efficient, but in education, efficiency isn’t the goal. Learning is messy, relational, and requires a sense of shared purpose. How do we articulate a vision for AI that enhances, rather than undermines, those elements? For instance, can AI amplify teacher capacity in ways that bring more humanity into the classroom—by freeing teachers to spend more time connecting with students, not less?
What’s your take on how we ensure AI tools remain just that—tools—and don’t redefine the heart of teaching and learning?
Thank you for your comment. You asked, "Are we framing teachers as co-pilots or as passive monitors? The difference matters." I hope the answer is neither. Would we say teachers are co-pilots with SmartBoards, calculators, word processing apps, YouTube videos, or other technologies students use? We need to see some actual pedagogical value demonstrated by chatbots and image generators before we worry about them redefining teaching and learning.