I was intrigued by the lecture we had today, and really enjoyed being there, because we discussed the implications of something I feel very uncomfortable with: AI. It took me two years after ChatGPT came out to make my first search, and even now I use the technology very sparingly. When I think about AI, my instinct is to run away – but from what? AI is all around us now, and while I can choose to opt out of certain software, I can never fully escape the tentacles it holds society by.

Shortly after starting the PDPP program, I participated in a debate in one of my classes about the role of AI in elementary schools. I was forced to stop running away from AI and face the fact that as a future teacher, I need to take responsibility to learn about AI and its associated benefits and weaknesses. The more I expose myself to AI, the better equipped I will be to teach my students how to be critical consumers of the software. I hate to admit it, but I was very impressed by the first ChatGPT search I made. It generated so many ideas – instantaneously – for games I could play with Grade 6 students. I can totally see why so many people resort to AI. We are all so busy, and turning to ChatGPT or other AI technologies can be so tempting as a method of getting through all of life’s demands.

Some Reasons I (Very Much) Dislike and Boycott AI:

  1. Environmental Implications. I get the sense that most people I meet have never realized just how much AI is harming the environment. In class we talked about how in just 24 hours, ChatGPT uses about as much water as the entire Belgian population (almost 12 million people) would use if everyone flushed the toilet at the same time. And that statistic only includes one form of AI! I understand that just about everything we do has an environmental implication, but what frustrates me is that humans have created AI right when the global climate crisis is coming to a head. The only way we can combat this crisis is by making difficult life choices to reduce our environmental impacts, rather than becoming reliant on new technologies that wreak havoc on the environment. I really hope that in the next few years, people become more aware of AI’s environmental impact and put certain safeguards in place (since AI is never going to go away at this point, the best we can hope for is that it is more closely monitored). However, I am not hopeful that these essential changes will happen, given the new American administration’s priorities.

  2. AI promotes laziness among students, and makes it difficult for teachers to assess whether a person’s work is actually their own. I got through my undergraduate degree by staying up late into the night writing and proofreading essays, assisted by my friends who would order pizzas to Stauffer Library at 12 in the morning. My five years of undergrad helped me form a positive work ethic that I hope will benefit me for years to come. Long hours studying taught me patience, disappointing grades on assignments I really invested in taught me grit and resilience, and positive academic feedback gave me confidence and self-efficacy. I do not think I would be able to thrive in the real world if my degree had been earned by ChatGPT, yet more and more people are turning to AI to get out of doing their schoolwork. As a teacher, I find this trend scary because it means we can never be sure who is putting in the work. I want to trust my students, but it is hard when using AI to complete an assignment is slowly becoming normalized. It is especially important that elementary school teachers educate their students about the benefits and burdens of using AI so that young children grow up understanding how to use the software appropriately.

  3. AI has scary negative implications. AI has the potential to drastically escalate the amount of fake news already flying around the Internet. It is easier said than done to look at the media we consume with a critical lens – in our go-go-go world, many people neglect to do so. As an example, somebody once told me in passing that the singular of ‘rice’ is ‘rouse’. Looking back, the statement is quite obviously fake, but I was not really paying attention when I heard it, and my brain took it as fact rather than stopping to analyze what I had heard. For years, I carried this incorrect assumption in my mind just because I was preoccupied when I first heard the information. The same phenomenon can happen when people look at AI-generated sources. While it might be quite apparent that images or text have been artificially fabricated, not everybody will notice – and it is scary to think about what the potential implications could be. I often think about the story one of our guest speakers told us, in which fake images of a teacher were fabricated to falsely frame them for inappropriate conduct. AI puts everybody in an extremely vulnerable position, and it is sometimes impossible to prove that something is or is not real.

I really liked the way that Michael ran the lecture about AI, talking about both the benefits and costs of using the software. He answered all our questions very well, and I agree with many of the personal opinions he has about different types of AI. My favourite part of the lecture was when we discussed recommendations for teachers living in this AI-driven society. Here are some things that I was thinking about for my future practice:

  1. As I mentioned in class, students always want to understand WHY there are rules in place. I will not tell my students ‘don’t use AI’ without explaining a clear rationale behind why an assignment is important for them to do without technology. If all the work I assign students has a clear benefit for their learning, my hope is that most students will be motivated away from getting somebody (something?) else to do their work.
  2. I will also try to make my assignments as engaging and fun as possible. If my students are really excited about what they are learning and working on, the chances of them using AI are much lower.
  3. I also want to make sure that my students feel the self-efficacy to do the assignment. Often, when my friends have turned to AI for academic purposes, it is because they are confused or do not know where to begin. By preparing my students in advance, making myself available for support, and giving clear instructions, I hope that students will feel equipped to do the assignment themselves rather than lean on AI.
  4. To encourage critical consumption of AI, I want to give my students the chance to interact with the technology and see where it can sometimes be incorrect. For instance, if they ask certain questions to AI and the answers don’t seem well thought out or fully true, students will realize that getting AI to do homework for them does not always lead to academic success.
  5. Many of my assignments will be done during class time so that I can monitor my students, make sure they are understanding the material, and answer questions they may have. By doing all their work while they are at school, my students will have fewer opportunities to avoid doing their work by using ChatGPT.

Out of pure curiosity, I pasted my notes from our lecture today into the AI software NotebookLM. The website generated a 20-minute podcast about the implications of the rise of AI, all from the 2-3 pages of notes I took this morning. I don’t expect anybody to listen to this fully (even I didn’t, to be honest), but I was pretty freaked out to hear AI talking about its own negative implications.

(Update: I can’t seem to attach the podcast, but I’m sure you get the idea!)