Using AI in education

Generative AI is being used everywhere, including in education. Today’s university students need to learn how to use AI responsibly, because while AI has benefits, it also has limitations.

I interviewed Jim Hall about AI in education. Aside from his work in open source software, Jim teaches technical writing at the University of Minnesota, where he has also researched generative AI in technical communication. I asked Jim about AI in the classroom.

What is your general opinion on the presence of AI in educational settings and on AI technology itself? Do you view it as a positive force in classrooms, or are there concerns that educators and institutions should consider?

I’m split on the benefits of AI in education.

In general, I advise my students to be really careful about how they use AI. You can fall into an “AI trap” pretty easily, just by googling something. If you google a question, you’ll often find that Google provides a quick summary of the answer, generated by AI. That AI-generated answer might be right, but it could be wrong.

Remember that AI doesn’t really understand the topics it’s writing about. ChatGPT, Google Gemini, and Microsoft Copilot are generative AI, which means they generate new output based on the context and based on what’s come before.

It’s not too different from a statistical model. What’s the next number after 1, 2, 4, 8? It’s 16 … and you know that because once you understand the context of what I’m asking, you can use pattern recognition to generate “16.” But maybe you didn’t fully recognize the context, and you guessed “10.” Generative AI is more complex than that, but basically it’s filling in patterns based on context clues.
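
To make that analogy concrete, here is a toy sketch in Python. This is purely illustrative and is nothing like how ChatGPT or any real language model actually works; it only shows, in the simplest possible form, what “filling in a pattern based on context clues” means.

```python
# Toy illustration only (not how a real language model works):
# guess the next number in a sequence by matching a simple pattern.

def predict_next(seq):
    """Guess the next number by checking for a constant difference or a constant ratio."""
    diffs = [b - a for a, b in zip(seq, seq[1:])]
    if len(set(diffs)) == 1:                # e.g. 2, 4, 6, 8 -> add the common difference
        return seq[-1] + diffs[0]
    ratios = [b / a for a, b in zip(seq, seq[1:]) if a != 0]
    if len(ratios) == len(seq) - 1 and len(set(ratios)) == 1:   # e.g. 1, 2, 4, 8 -> multiply
        return seq[-1] * ratios[0]
    return None                             # no pattern recognized: any answer is a guess

print(predict_next([1, 2, 4, 8]))   # 16.0 -- the "context" matched a doubling pattern
print(predict_next([1, 2, 4, 7]))   # None -- the pattern wasn't recognized, so it's a guess
```

A real model learns its patterns statistically from enormous amounts of text, which is why its guesses usually look plausible even when they are wrong.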

So that means you need to be really careful about how you use AI. If you’re a student, don’t rely on AI to “get it right” if you ask it to do a homework problem for you. Don’t rely on AI to “get it right” if you ask it to explain an assignment’s instructions in a different way. AI might get it right, but it might get it wrong.

The other problem I have with AI in education is that it “skips to the end.” In my teaching style, I like for students to understand what’s happening before they start to use shortcuts. It’s like in math class: you learned how to multiply two 1-digit numbers by hand … then you learned how to multiply a 2-digit number by a 1-digit number … then you learned how to multiply two 2-digit numbers. In other words, you learned how it works. And then once you know the steps to multiply two 2-digit numbers, you don’t have to do it by hand anymore. It’s okay to use a calculator the next time you need to multiply 13 x 20.

But if you skip that learning process and “skip to the end” by saying “just use a calculator, that’s what everyone does,” you never go through the process of understanding how multiplication works. You only know how to use the calculator, and that puts you at a disadvantage later on.

So I prefer for my students to learn how to do it before they let AI do it. It sounds like a backwards approach (why not help students learn to use AI for tech writing?) but you’ll appreciate it later when you have to write something that AI hasn’t been trained on yet and you need to do it on your own.

In your experience as an instructor, what impacts have you observed from AI tools like ChatGPT on students’ critical thinking and independent problem-solving skills?

I’ve seen a lot of people (not just students) skipping the process and just blindly relying on AI to come up with the right answer. They assume that AI will get it right.

Here’s a real-life example. A colleague started working for a startup in early 2022. Because it was a startup, he got moved around a lot: two months leading one project, then “now we need you to do XYZ Project.” And it was like that for a year; he got moved around every two months or so.

Then a big tech company bought the startup. When he met with his new boss (from the big tech company) his boss asked him to write a 12-month plan. My colleague didn’t know what to do; he was always getting moved around.

By now, it was January 2023, and ChatGPT had become very popular since its debut the previous November. So he asked ChatGPT “List the 12-month goals for (my position).”

ChatGPT generated a list, with supporting text for each item. He looked it over, thought “that looks okay” (as far as he knew, since he hadn’t been in that position for more than about two months) and copied and pasted it into a document. He gave it to his boss, who signed off on it.

And the danger is: Can he really do all of the things that ChatGPT listed for him? If he can’t do something (because that’s not something his company actually does, or because that’s not something you can do in that field or in this country, or because of funding, or because that’s not the focus for his company, or some other reason) that’s on him. He still owns that 12-month plan. He can’t tell his boss “you can’t hold me to that, ChatGPT gave that list to me.”

What do you see as the primary benefits and challenges of using AI tools in an educational setting, particularly for student development and learning outcomes?

As much as I criticize AI, generative AI is usually pretty good at summarizing things. So if you need to make a summary of something, using AI is not a bad start.

And I give this advice to my students: It’s okay to use AI, to ask it things like “write an article about (topic)” or “write an outline for (paper topic).” If you don’t know how to write about something, if you don’t know how to structure a paper about it, it can help to see what AI generates. Then you can see how to do it. But then you need to do the next step: Put that aside and do it on your own. Don’t use anything that AI generates for you. If you try to re-use any AI-generated text, that’s at best unethical; at worst, your instructor might consider it plagiarism.

How do you feel AI tools should be integrated into educational programs to ensure they support independent learning rather than creating dependency?

I think instructors need to start each semester with an open conversation about what “using AI” means for that course, and what parameters they put around it. I’m pretty open with my students that it’s okay to ask AI questions, to have it summarize something for you, or to have it show you a solution. But don’t assume that AI will be correct.

I also like to show my students real-world examples of how AI can be both right and wrong at the same time. For example, every six months or so, I like to ask AI to “List the major characters from the movie Star Wars: Episode IV: A New Hope.” It’s an easy question, but AI both gets it right and wrong at the same time.

I asked ChatGPT this question earlier this fall, and it gave me a list of major characters from the first Star Wars movie. And importantly, all of those characters appeared in the film. So it got that right.

But, it also got it wrong! ChatGPT said Darth Vader was “a Sith lord and the main antagonist in the movie.” (I’m paraphrasing.) Oops. Vader definitely appears, but the word “Sith” isn’t uttered on-screen until Star Wars: Episode I: The Phantom Menace (1999). But ChatGPT knew that Vader was a Sith lord because we know that now. And even before 1999, we knew Vader was Sith because of all the Expanded Universe books and comics and games and trading cards … they all said “Sith.” But “Sith” was never mentioned in Episode IV.

And if I asked a student “I want you to watch Episode IV and give me a list of the major characters” (for whatever reason, let’s say that’s the assignment) and that student decided “I don’t have time to watch an old movie, I’ll ask ChatGPT” then I’d know from the answers that they faked it.

(ChatGPT included other details that weren’t known in Episode IV, but that’s the clearest example.)

But with that caveat that you need to be careful about what AI generates for you, I also advise my students they can use AI to create outlines, to create sample articles. But once they see how to do it (from the examples that AI generated for them) they need to put that aside and do it on their own. If there’s one repeated refrain about how to use AI in an educational context, that’s it.

From an instructor’s perspective, what policies or guidelines would you recommend universities implement to promote balanced use of AI tools among students?

I think instructors need to be clear about what’s okay and what’s not okay about using AI for their class. For some courses, AI could be fine. In those classes, go for it. In other classes, AI is basically “cheating.” In those classes, the instructor needs to be clear about it.

AI is always evolving, and it’s moving pretty fast. It’s something instructors need to track year by year to see how AI has changed. Maybe AI will get significantly better and stop making things up, which would make it more useful in education.

But I’ll always ask my students to learn how to do things before I let them use a tool to do it for them.

Have you encountered any differences in how students approach assignments or problem-solving with AI assistance compared to before AI tools became widely accessible?

I thought it was interesting when one student approached me and said that he uses AI to summarize and rephrase the assignment instructions for each of his classes. (He was asking if it was okay for my class.) I don’t know why he wanted to do that, but he said sometimes an assignment wasn’t very clear, so using AI made it easier for him.

I hadn’t considered using AI in this way, so that was a new application for me. I said if it worked for him, that was okay with me. But my general advice is that if AI skips a requirement in the instructions, that’s on him.

I was also interested in an example that a fellow instructor shared with me. Google’s NotebookLM system will take whatever you give it and turn it into a 2-person podcast episode. I’ve heard the results, and it really does sound like two people just talking about something. My colleague shared that one student put their readings into the AI, and listened to the “podcast” episode on their train ride home.

I’d be careful with that, with my standard warning that sometimes AI can get it wrong. But if that modality helps students to learn, then I’m for it. In the “podcast” example, my colleague said it was a chemistry topic. And I guess that’s not too different from two people talking about it on This American Life, Science Friday, or some other podcast.
