Companies that develop software to detect if artificial intelligence or humans authored an essay or other written assignment are having a windfall moment amid ChatGPT’s wild success.
ChatGPT launched last November and quickly grew to 100 million monthly active users by January, setting a record as the fastest-growing user base ever. The platform has been especially favored by younger generations, including students in middle school through college.
Surveys have found that about 30% of college students reported using ChatGPT for school assignments since the platform launched, while half of college students say using the system is a form of cheating.
AI detection companies such as Winston AI and Turnitin say ChatGPT’s wild success has been a boon for their business as well, as teachers and employers look to weed out people passing off computer-generated material as their own work.
Winston AI provides users with a “scale of 0-100, the percentage of odds a copy is generated by a human or AI,” and also checks submissions for potential plagiarism.
Winston AI’s Renaud explained that AI-generated materials have “tells” that expose them as computer-generated, including “perplexity” and “burstiness.” Perplexity, as the company defines it, tracks language patterns in a writing sample to determine whether the text follows how an AI system was trained or appears to be unique and written by a human. Burstiness refers to variation in sentence length and structure: human writing tends to mix long and short sentences, while AI output is often more uniform.
“So, in the same way that generative AI is trained on large datasets, we trained our detector to identify key patterns in ‘synthetic’ texts through deep learning.”
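Winston AI’s actual detector is a proprietary deep-learning model, but the two signals Renaud describes can be approximated with simple statistics. The sketch below is purely illustrative and is not the company’s implementation: burstiness is estimated as the spread of sentence lengths, and perplexity with a toy unigram model in place of the large language models real detectors use.

```python
import math
import re
from collections import Counter

def burstiness(text: str) -> float:
    """Standard deviation of sentence lengths divided by the mean.
    Human prose tends to score higher (more variation) than AI output."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    mean = sum(lengths) / len(lengths)
    variance = sum((n - mean) ** 2 for n in lengths) / len(lengths)
    return math.sqrt(variance) / mean

def unigram_perplexity(text: str, reference_counts: Counter) -> float:
    """Toy perplexity under a unigram model built from reference text.
    Text that looks like the reference scores lower (more predictable)."""
    words = text.lower().split()
    total = sum(reference_counts.values())
    vocab = len(reference_counts)
    log_prob = 0.0
    for w in words:
        # add-one smoothing so unseen words don't zero out the probability
        p = (reference_counts.get(w, 0) + 1) / (total + vocab)
        log_prob += math.log(p)
    return math.exp(-log_prob / len(words))
```

Under this toy model, a passage of uniformly sized sentences scores a burstiness of zero, and text whose wording matches the reference corpus gets a lower perplexity than unfamiliar text; production detectors compute the analogous quantities with trained neural models rather than word counts.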
Renaud said he was initially “very worried” about ChatGPT, but his worries have since eased. AI will always have “tells” that other platforms can detect, he said.
Annie Chechitelli, chief product officer of Turnitin, another company that detects AI-generated writing, recently published a letter to the editor of The Chronicle of Higher Education arguing that AI materials are easily detected. She was responding to an essay in the Chronicle by a Columbia University student who claimed that “no professor or software could ever pick up on” materials submitted by students but actually written by a computer.
Like Renaud, Chechitelli argued that AI-generated text will always have “tells” and that detection companies keep crafting new ways to expose it.
“We think there will always be a tell,” she told The Guardian. “And we’re seeing other methods to unmask it. We have cases now where teachers want students to do something in person to establish a baseline. And keep in mind that we have 25 years of student data to train our model on.”
Amid concern that students will increasingly cheat via AI, some colleges in the U.S. have moved to embrace the technology instead, integrating it into classrooms to assist with teaching and coursework.
Harvard University, for example, announced it will employ AI chatbots this fall to assist teaching a flagship coding class at the school. The chatbots will “support students as we can through software and reallocate the most useful resources — the humans — to help students who need it most,” according to Harvard computer science professor David Malan.