I am loving using AI as a thought partner for idea generation. My creativity is off the charts. I'm using @MagicSchool almost exclusively, with a little ChatGPT sprinkled in for non-educational topics. My district has also embraced AI and is using MagicSchool's MagicStudent with students. It meets the strict safety protocols we have in place, and teachers control which tools students can use, when, and how. Students are using AI as a thought partner as well, to generate ideas. It's a great way to get past writer's block. We are also careful to cover ethics with them, so they know the possible pitfalls of AI. It's much like how I taught my kids to do research, both from books and at the library. I teach them to take only bulleted notes instead of copying word for word unless they plan to use a quote. With bulleted lists, it's practically impossible to plagiarize.
AI is not going away, and we have a responsibility to prepare our students to use it ethically and competently in the workforce as well as for their own needs.
I recommend books by John Spencer and Matt Miller on the subject. Education needs to change and adapt, designing assignments and assessments that are not as easy to complete with AI.
Hey Eileen, you have given me much to comment on here. Three things:
1) How is generative AI a thought partner? Don't thought partners need context, understanding, actual intelligence, etc.? If we tell students not to anthropomorphize chatbots and then tell them to use chatbots as a "thought partner," isn't that a mixed message? I like to say that I want a thought partner who acknowledges the winner of the 2020 presidential election - something Google Gemini won't do. Enter the prompt "who won the 2020 us presidential election?" into https://gemini.google.com/.
2) I dislike centering the AI ethics conversation around children. As I wrote in my post about AI policy, "Looking at AI policy through the lens of academic integrity puts the responsibility for the inherent harms of generative AI on individual users. It places the most agency and responsibility on the backs of children. Why is holding *children* to account the priority?" https://www.criticalinkling.com/i/148308778/academic-integrity
3) I push back when I hear "AI is not going away" about a technology that is environmentally unsustainable, bleeds money with no path to profitability, and still has no killer app. I wrote questions about the future of AI here: https://www.criticalinkling.com/p/teachers-not-time-travelers-ai#:~:text=Critical%20thinking%20about%20the%20future%20of%20%22AI%22%20includes%20questions%20such%20as%3A