LIKE a supernova bursting in the sky, ChatGPT made its debut on the internet in November 2022. Students everywhere, who are quick on their feet (and their fingers) with technology, lost no time testing it.
Writing for The Miami Student newspaper, Sean Scott quoted Justin Klein, a computer science major at Miami University (MU) in Ohio, saying, “During my data structures class, we were studying for the final exams.”
“My friends were talking about how they were putting some of the practice questions into ChatGPT, and it was generating very accurate responses.”
Since then, AI has generated a storm of controversy about the future of education and how teachers should respond to the technology.
I am a writing fellow at MU, where I teach one class on Composition and Rhetoric. The English Department has no official policy on AI use, so I follow what most of my fellow teachers do. I tell my students that they may use AI, the way they use Google and Wikipedia, to do initial research on their topics. It’s like getting a bird’s-eye view of an issue, the big picture, as it were. But it is not an endpoint to learning: the student should still write the initial draft.
Tim Lockridge, an associate professor of English at Miami, also includes AI in his syllabi. “It’s a tool, and writing is something we do with tools,” he said. “We have to have tools to write; without tools, there is no writing. Those tools range from pencils to pens to word processors to now tools like these large language models … The advice I give to students is that the tool has to align with the job and your goals for the job.”
Lockridge allows students to use generative tools that produce an abstract of a dense and difficult text. But that should not be the final step. Instead, he said, “students should assess the summary and use their own analytical skills to complete the reading themselves.”
Other professors, however, have banned the use of AI outright in their courses. Brenda Quaye, assistant director for academic integrity, said Miami left the decision to individual faculty members rather than creating a university-wide AI policy.
“Coming from the academic integrity perspective, it is similar to some faculty members who allow open-book or open-note tests, and some will not,” Quaye said. “Some will allow calculator use. Some will not … Like any of those parameters that instructors set within their courses to meet the goals and the purpose of the assignments, they can make decisions about how and when they use AI or don’t.”
Last semester, Quaye said, 35 percent of her office’s 225 academic integrity cases involved potential unauthorized use of AI. Students receive a zero on assignments with unauthorized AI use, and there may be an additional 5-10 percent reduction in their overall grade in the class, depending on the weight of the assignment. In more severe cases, students could fail the class.
Quaye said that AI can make writing worse because the output may read as unnatural. For students who use AI to help with coding, she said most cases are caught because the code includes elements that haven’t been taught yet or that don’t follow the teacher’s instructions.
For Lockridge, the biggest deterrent to unauthorized AI use is the damage to students’ learning. “If you want to cheat the system, you can cheat the system and get away with it for a certain amount of time,” Lockridge said. “We have to maintain standards, and we have to encourage people to do the right thing, and there’s teaching involved there, but how do you train people to find value in the work? That’s got to be the center of it.”
To be continued…
Danton Remoto has taught English and Literature for the last 38 years. He also sits on the editorial board of The Manila Times.
Source: The Manila Times