Just over half of a senior agricultural science class at Texas A&M University-Commerce had their diplomas withheld due to erroneous allegations of cheating with ChatGPT.
Dr. Jared Mumm, a professor who also serves as a campus rodeo instructor, sparked controversy when he emailed the class accusing students of cheating with AI; the email went viral after it was posted to Reddit.
The email detailed how he had pasted each student’s last three assignments into ChatGPT itself and asked the chatbot whether it had generated them.
The post, titled “Texas A&M commerce professor fails entire class of seniors blocking them from graduating- claiming they all use ‘Chat GTP,’” was shared by u/DearKick in r/ChatGPT.
Dr. Mumm mistakenly trusted ChatGPT’s confirmation that it had written many of the assignments it was asked to evaluate. The original poster, the fiancé of one of the affected students, later clarified that just over half of the class was accused of ChatGPT use.
While the professor’s misunderstanding of ChatGPT is somewhat understandable given the complex nature of large language models (LLMs), commenters argued that he should have double-checked his facts before taking such drastic measures.
The email went viral so quickly because LLMs simply don’t work that way: they cannot reliably tell you whether they wrote a given passage. In fact, a Reddit commenter pasted the professor’s own email into ChatGPT, asked if it wrote the passage, and it confidently responded: “Yes, I wrote the content you shared.”
Large language models notoriously hallucinate facts and often answer prompts confidently even when they have no way of answering correctly, such as when asked whether they wrote something. LLMs, at least in their current state, have no memory of responses or interactions with other users, nor do they possess the ability to distinguish between AI-written and human-written text.
LLMs are powered by machine learning algorithms that analyze vast amounts of text. They identify patterns in language, predict the probability of the next word in a sentence, and build cohesive, contextually relevant responses. They bear more resemblance to a highly advanced version of your phone’s auto-complete feature than to human intelligence.
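To make the auto-complete comparison concrete, here is a minimal sketch using the open-source GPT-2 model via Hugging Face’s transformers library (a much smaller cousin of the models behind ChatGPT; the prompt and model choice are purely illustrative). It prints the words the model considers most likely to come next.

```python
# Minimal sketch: ask a small open-source LLM (GPT-2) for the most likely
# next words after a prompt. Prompt and model are illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The professor accused the students of"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # scores for every token in the vocabulary

# Turn the scores at the final position into probabilities for the next word
next_word_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_word_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {prob:.3f}")
```

Nothing in this process records who produced a given piece of text, which is why asking a model “did you write this?” cannot yield a trustworthy answer.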
A Painful Lesson
Many students were likely wrongly accused and had their diplomas withheld. However, university officials are already investigating the matter, according to a statement released to Rolling Stone on Wednesday.
The publicity this incident has already attracted may actually be a good thing in the end, assuming the innocent students are promptly cleared of wrongdoing. Professors, even those in entirely unrelated fields, must learn how AI works and how to detect it effectively without hurting innocent students.
This debacle comes at a time when professors and teachers around the world are trying to figure out how to combat the use of AI and make sure students are actually learning. There is no doubt that thousands, if not millions, of students are cheating themselves out of valuable learning by using AI.
This particular situation was made even more difficult because at least two of the students in the class admitted to using AI on assignments for the class, according to the original Reddit poster.
Despite ChatGPT’s inability to detect AI writing, some dedicated tools can, albeit with varying accuracy. However, as LLMs improve, the effectiveness of these tools may worsen, possibly to the point where distinguishing between human-written and AI-generated text becomes virtually impossible.
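Many early detection tools (GPTZero is a well-known example) lean on the idea that machine-generated text tends to look more “predictable” to a language model than human writing does. The sketch below illustrates that perplexity heuristic, again assuming Hugging Face’s transformers and the open GPT-2 model; real detectors combine more signals, and no score here should be treated as reliable evidence on its own.

```python
# Minimal sketch of a perplexity-based heuristic for flagging possibly
# machine-generated text. Illustrative only: low perplexity is weak evidence,
# not proof, and real detection tools use many more signals.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def perplexity(text: str) -> float:
    """Return how 'surprised' GPT-2 is by the text (lower = more predictable)."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing labels makes the model compute the average next-word loss
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return torch.exp(loss).item()

submission = "Agricultural science integrates biology, chemistry, and economics."
print(f"Perplexity: {perplexity(submission):.1f}")  # unusually low *may* hint at AI text
```

Even heuristics like this produce false positives on formulaic human writing, which is exactly the kind of error that appears to have harmed students in this case.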