ChatGPT Has Colleges in Emergency Mode to Shield Academic Integrity

Colleges around the country have held emergency meetings of their honor code councils or other committees that govern student cheating.

The reason: a whole new kind of cheating suddenly possible thanks to a new AI tool called ChatGPT. The technology, which appeared just a few months ago, can answer pretty much any question you type into it, and can adapt those answers to a different style or tone on command. The result is that it generates text that sounds like a person wrote it.

As we explored in an episode of the EdSurge Podcast a few weeks ago, students around the country in schools and colleges have found that they can easily ask ChatGPT to do their homework for them. After all, it is tailored to create the kind of essays that instructors ask for.

So professors have been quick to respond.

At Texas State University, for example, professors across campus began emailing the honor code council crying for help.

“So many professors right now are struggling with burnout and disengagement and so many other things already that even those who embrace paradigm shifts are, at a minimum, sighing — uh, that’s another thing for me to be aware of,” says Rachel Davenport, an associate professor of biology at the university who serves as vice chair of the Honor Code Council.

That is among the professors who are open to change, she notes: “The other prevailing mood is terror, thinking, ‘This is throwing everything I do into chaos. How am I going to catch it?’”

On this week’s EdSurge Podcast, we bring you part two of our exploration of what ChatGPT means for education. Our focus is on what college honor code councils are doing to respond.

Listen to the episode on Apple Podcasts, Overcast, Spotify, Stitcher, or wherever you get your podcasts, or use the player on this page. Or read a partial transcript below, condensed and edited for clarity.

To get a national perspective, EdSurge recently reached out to Derek Newton, a journalist who runs a weekly Substack newsletter called The Cheat Sheet, about academic integrity and cheating.

“It has been a very loud alarm bell for people in teaching and learning … at all levels,” he said.

In the past, new approaches to cheating spread slowly, often stealthily, in dark corners of the internet. With ChatGPT, adoption became widespread in a matter of months.

“I counted, I think, six separate columns in The New York Times on ChatGPT,” Newton said. “That level of visibility is basically unprecedented for anything but war.”

So back at Texas State, Rachel Davenport noted that one thing she did recently to get up to speed was to try both ChatGPT and a tool designed to detect bot-written writing, called GPTZero. Davenport is a trained scientist, so her impulse was to run her own experiment.

“I ran nine submissions through GPTZero,” she says. “Six of them were human and three of them I had ChatGPT generate. Of those nine, seven of them were correctly identified [by GPTZero]. Of the two that were not correctly identified, they were not misidentified either. It just said they needed more information. And one of them was by a student and the other was by ChatGPT.”
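Her tally is small enough to check by hand. As an illustrative sketch, the outcomes she describes can be tabulated like this; note that beyond the two “needed more information” cases she names, the exact pairing of submissions to verdicts is an assumption made for the example:

```python
# Reconstructed tally of Davenport's nine-submission GPTZero experiment:
# 6 human-written and 3 ChatGPT-generated pieces; 7 verdicts correct,
# and 2 abstentions ("needed more information"): one human, one ChatGPT.
submissions = [
    # (true_author, detector_verdict)
    ("human", "human"), ("human", "human"), ("human", "human"),
    ("human", "human"), ("human", "human"), ("human", "abstain"),
    ("ai", "ai"), ("ai", "ai"), ("ai", "abstain"),
]

correct = sum(1 for truth, verdict in submissions if truth == verdict)
abstained = sum(1 for _, verdict in submissions if verdict == "abstain")
wrong = len(submissions) - correct - abstained

print(f"correct: {correct}, abstained: {abstained}, wrong: {wrong}")
# correct: 7, abstained: 2, wrong: 0
```

The notable detail is the last number: in her small sample the detector never affirmatively misattributed a submission; when unsure, it abstained rather than guessed.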

On Monday, the Texas State Honor Code Council issued a letter about ChatGPT to all faculty members. The subject line is: “Artificial Intelligence (ChatGPT) and the University Honor Code Policy.”

How it starts:

“As we begin the second week of the Spring 2023 semester, we would like to briefly mention the evolving topic of artificial intelligence (AI) and potential Honor Code implications that may arise if used by students in preparation for course deliverables submitted to academic merit. Our institution, teaching and evaluation methods, and the accompanying industry depend on the use of computers to assist with common work tasks every day. But when it is used in place of individual thought, creation and synthesis of knowledge by false submission of a paper written (in whole or in part) as one’s own original work, it results in a violation of academic integrity.”

It goes on to remind faculty of the rules and some basic principles of the Honor Code, and it points to some resources professors can check out to learn more about ChatGPT.

It turns out there are deeper issues to consider when it comes to ChatGPT and this ability of AI to generate human-sounding language. It’s possible that we’re at a major inflection point in our broader use of technology, one in which people routinely work with AI to get things done in real-world settings.

This came to light the other day when I spoke to Simon McCallum, a professor who teaches video game design at Victoria University of Wellington in New Zealand.

He told me that he has started using AI tools with his students that can transform code written in one programming language into code in another language. One such tool is GitHub Copilot, which is kind of like ChatGPT, but for computer code.

“I’ve talked to industry programmers who use these AI coding tools, and advertisers who have used AI to write copy, for a long time,” McCallum said. “And if the industry uses these tools, if we try to go back to pen and paper … and we try to force people not to use these tools, our assessments become less and less valid.”

