Week 3 Blog

We need to be aware of the dangers of AI hindering education in schools. I recently read Dr. Plate's response to Ted Chiang, "Thinking at a Higher Level," which pushes back against the arguments Ted Chiang makes against the use of AI in schools. While the piece does make some valid points, it also misses cautions about using AI in school that we should not ignore. I want to preface this by saying that I am not against using AI as a tool to help write, code, or do most things we want it to do. When most people think of AI, they immediately think of the negatives, such as generative AI videos online or generative art that steals from other artists, but that's only a small percentage of what AI is useful for, and generative AI is one of the only forms of AI that causes more harm than good. However, while the idea of learning to ask questions is helpful, it ignores the problems being created by students using AI, and I also do not agree with the notion that this level of thinking is "higher," or that it should take priority over critical thinking and problem solving. Asking is not knowing.

The overall idea of the blog is that students should be able to ask AI questions and get a response that works, while also being able to interrogate that response and make sure it holds up. "A student who uses AI to generate prose and then accepts it uncritically has learned nothing. But a student who uses AI to generate prose and then interrogates it—asks whether the argument holds, whether the evidence supports the claims, whether the structure serves the purpose—is doing the kind of thinking that actually matters. They're doing the work that Yegge describes: learning to predict, evaluate, redirect. Learning to ask the right questions." While asking questions is important for success, learning primarily through asking questions erodes essentials like critical thinking and problem solving, two skills that are a requirement to survive in any field of work.
If people are taught how to ask AI for things instead of being taught how to make things, we risk lowering critical thinking, problem solving, and independence overall. Students won't always be able to use AI in their jobs, so once they face a situation where they can't use it, they won't be on the same level as people who never relied on it. That leads into the next segment: self-control, and students' lack of it.

During 2020, I was in my grade school years and was forced to attend virtual class due to COVID-19. At first this seemed like a huge win: I didn't have to be driven to school, I could stay at home all day, and I didn't have to pay attention in class. But that last point is the exact problem that my virtual classmates and I faced: resisting the urge to tune out or skip class. Now, with AI, a similar problem is occurring. Normalizing the use of AI and hoping that students only use it as a help, rather than having it do the full assignment for them, is dangerous and inconsistent at best. Take a study reported by insidehighered.com, which found that about 25% of students admitted to using AI to cheat on their assignments. Not help, not assist: cheat. And this figure is more likely a floor than a ceiling, since it's easy to assume that some students simply didn't tell the truth. The number may seem negligible at first, but that is still 1 in 4 students cheating with AI, and the real figure is likely higher. While cheating existed long before AI was even thought of, AI makes it far easier both to cheat and to get away with it. However, I will give the pro-AI side of this debate some credit: instead of asking "how can we get rid of AI and prevent people from cheating," I think the better question is "how can we prevent people from wanting to cheat in the first place?" What should be done about this?
To end on a more positive note, AI can easily be more a source of good than a source of cheating. The main challenge is balancing how we regulate AI usage in schools, as opposed to getting rid of it. AI is not going away anytime soon, and the world is constantly integrating it in many different ways, so there needs to be more effort put into adapting to it. Many schools think a full AI ban is the solution, but I don't see it that way, because AI is going to be a factor in the workforce from now on. Instead of a full ban or full reliance, there simply needs to be a balance. I'd argue that the balance should lean toward the independence side, because independence should be the ultimate goal of education; at the same time, we should teach kids how to use AI to help them think instead of using it to tell them what to think.