Public Media for Central Pennsylvania

Penn State Professor Shyam Sundar on the use of AI tools in education

Professor Shyam Sundar sits inside a WPSU recording studio on Thursday, July 27, 2023 in University Park, Pa.
James Engel / WPSU

Shyam Sundar is a Penn State professor and the director of the Center for Socially Responsible Artificial Intelligence. He studies the social and psychological aspects of artificial intelligence and the uses and effects of digital media. WPSU intern James Engel spoke with him about the use of AI tools in the classroom and its effects on education.

Here is their conversation:

James Engel
Professor Shyam Sundar, thank you for talking with us.

Shyam Sundar
Thank you for inviting me.

James Engel
It seems like a lot of academic institutions, including Penn State, were caught a little flat-footed by tools like ChatGPT and others when they suddenly became available to students sort of more easily last year. I mean, as a scholar in the area, it might not have been so surprising to you, but what sort of reactions did you see from your colleagues and others who might not have been familiar in the field?

Shyam Sundar
I think there's a general increase in interest in AI tools. You've seen a lot of instructors who otherwise did not ever think about AI suddenly waking up to the fact that all their students were potentially using these tools. Some of us have been looking at the so-called AI writers for a while now, even before GPT. And we've been thinking of different ways in which we should be incorporating that into curricula and into course syllabi and things like that. There were lots of different applications that preceded ChatGPT that used somewhat the same technology. But they were mostly, you know, for a premium, for a price. And like you said, ChatGPT made it much more accessible to everyone. And that, I think, freaked a lot of people out.

Most especially the instructors who have courses where they assign term papers and let the students write up something and send it to them. And that's where the big concern is: they cannot tell who or what are the sources of a particular term paper, because our AI has gotten so good at writing very coherent sentences and producing information in a very authoritative-sounding manner.

James Engel
So what effect do you think that's had on student integrity or student engagement in the past couple years here?

Shyam Sundar
You know, all intelligent students, students with the greatest academic integrity have now kind of test driven ChatGPT and figured out their own ways to use it. And they are negotiating the technology as are instructors. It's not like instructors completely ban or, you know, abhor the use of ChatGPT and other tools like that. They are encouraging students to more mindfully incorporate this technology in a way that is similar to what we did maybe 30 years ago with spellcheck. When spellcheck first came about a lot of instructors were very concerned that students were cheating by not knowing the spelling themselves, and that they were relying on, you know, word processing programs like Microsoft Word, to correct their spelling. And then they turn in a very nice term paper without any spelling errors. In those days, we used to grade partly for spelling and grammar.

But these days with these tools, we assume that these are not skills that we should evaluate students on. And so our standards for evaluation and the way we think of human intelligence has changed, and it will continue to change with the development of these AI tools that replicate what hitherto were, you know, exclusively human domains of expertise.

James Engel
I know some professors who didn't want AI used in their courses began using AI-detection software last year. Do you think enforcement tools like that have caught up to the current AI capabilities?

Shyam Sundar
I don't think the detection path, and the penalty thereof, should be the first, you know, line of attack, if you will. The first response should be more education: educating and having a dialogue with the students about the correct use of these tools, and asking them to be transparent about their use of these tools in their submitted work. And also, to be critical consumers of that work. I mean, AI tools, for all their, you know, merit, also are known to be stunningly wrong. They hallucinate, they make up stuff. And they even make up references, you know, of articles that don't exist. And so, they are in some ways, you know, very easy to figure out if something is done by an AI, if you think about it. As an instructor, if you closely read your students' work, you can tell if it's kind of general banalities, like you'd see coming from a machine, or something more specific, something that was discussed in class and students relating to it. So, in some ways, I think instructors need to also come halfway and try to write assignments that are much more specific to their instruction, and not give them general assignments, not assign students to write about things that they could pick up from the internet or from these AI tools.

James Engel
Do you expect Penn State and other institutions to introduce standards for the use of AI? And should they?

Shyam Sundar
Absolutely, I think many already have done this. Universities, including Penn State, have come up with at least guidelines, I wouldn't necessarily call them policy standards yet. But guidelines for how to think about these AI tools, generative AI tools, broadly speaking, and how to use them ethically in the classroom, both for instructors and for students: how to craft your syllabus to be clear about how to use it. You can either outright ban the use in terms of submitting assignments, or you could ask students to cite AI. And even the standard reference manuals these days have come up with language on how to cite generative-AI-produced material. So, I think, overall, we are seeing an improvement in the communication of how AI should be incorporated. And you'll see that in our course syllabi in our university and in policy documents and so forth.

James Engel
And I imagine those policy documents and syllabi and potential guidelines are going to have to constantly be evolving?

Shyam Sundar
That's right. Yeah. So they will constantly evolve and keep up with the new tools, hopefully. And in many ways we look to students to kind of lead the way, right, because students often come across these tools. And they're the ones who might give us a sense of what might be tools that students are, you know, excited about and are relying on. And then we could craft policies, course policies around that, to be able to better inform the educational and the learning aspects of the course.

James Engel
Alright, Shyam Sundar, I appreciate you speaking with us. Thank you.

Shyam Sundar
Thank you for having me.

James Engel is WPSU's Election Misinformation Reporter.