As More Students Admit to Using AI to Complete Assignments, New Study Shows Most Go Undetected

An alarming new study out of the U.K. has found that nearly all university written exam submissions created using artificial intelligence escaped detection by school administrators.

The study is heightening existing fears that the increasing use of AI by college and high school students will hamper their ability to master key skills that will be required in the professional world.

The study, published in the journal PLOS ONE at the end of June, “injected 100 percent AI written submissions into the examinations system in five undergraduate modules, across all years of study, for a BSc degree in Psychology at a reputable UK university.” Ninety-four percent of those submissions were never identified as AI-generated.

Furthermore, the artificially generated submissions received grades “on average half a grade boundary higher” than those of actual students, with some achieving a full grade boundary advantage.

In addition, the study noted that “there was an 83.4 percent chance that the AI submissions on a module would outperform a random selection of the same number of real student submissions.”

The mass availability of free AI-powered language generators such as ChatGPT over the last 18 months has shaken the academic world to its core, as school administrators and educators grapple with how to distinguish AI-generated schoolwork from human-generated work.

The issue has become increasingly urgent in light of studies showing that “over 89 percent of students have used ChatGPT to help with a homework assignment.”

Amid the controversy, some students say they have been unfairly caught in the middle of vague school policies on whether AI-powered online resources may be used, and have been accused of cheating as a result.

A recent case in Baltimore City involved a high school junior named Hassan Hunt, who says he used the “AI writing partner” tool Grammarly to help him write a paper but was accused of “plagiarizing” another person in the paper’s final paragraph, with the teacher claiming that “in this case, the other person is AI.”

The teacher stated that AI detection software called GPTZero was used to determine “with 98 percent probability” that the paragraph in question was not written by Hunt. In a statement, the school district admitted that there was no specific policy governing students’ use of AI.

Some tech advocates, like Wilson Tsu, an attorney and engineer who founded an AI-powered research assistant tool, say that academia should resign itself to a new world powered by AI. In a recent op-ed for The Hill, Tsu argued that most educators “simply don’t know enough about AI tools” to craft policies that properly restrict them.

“Rather than try to stay ahead of their students in a sort of technological arms race, I believe educators should redefine cheating, just as they did when students started using calculators or smartphones,” Tsu said. “In the case of generative AI, the basic question is, ‘What input should be coming from the human, and what inputs can come from the AI in order to accomplish the goal of the assignment?’”

Still, many experts are deeply concerned about the impact AI is having on education.

Meg Kilgannon, Family Research Council’s senior fellow for Education Studies, contended that the U.K. study raises a series of disconcerting questions.

“This is an interesting problem to consider,” she told The Washington Stand.

“Do the results of the study prove that AI is smart or that humans are less smart? Does it prove the psychology professors lack a level of discernment, intuition, and relationship to their students one might expect in the field? Does it prove that classes are too uniform and ‘inclusive’? We are primed to believe the problem with AI is that students will use it to cheat. But is that the only problem?”

Kilgannon continued, “One definitive statement made by an author of the study was that we are not going back to handwritten tests, which is interesting because that would solve the issue of students using AI to cheat. In a proctored exam room with blue books issued at exam time, AI isn’t going to be able to help you. If psychology course content is devoid of nuance, intuition, and love for the human person so that answers to problems are easily mimicked by technology, we might have a bigger problem than just AI.”

This article appeared originally on The Washington Stand.





