Guardian investigation finds almost 7,000 proven cases of cheating – and experts say these are the tip of the iceberg

Thousands of university students in the UK have been caught misusing ChatGPT and other artificial intelligence tools in recent years, while traditional forms of plagiarism show a marked decline, a Guardian investigation can reveal.

A survey of academic integrity violations found almost 7,000 proven cases of cheating using AI tools in 2023-24, equivalent to 5.1 for every 1,000 students. That was up from 1.6 cases per 1,000 in 2022-23.

Figures up to May suggest that number will increase again this year to about 7.5 proven cases per 1,000 students – but recorded cases represent only the tip of the iceberg, according to experts.
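As a rough sanity check on those figures, the reported rates imply a surveyed population of roughly 1.4 million students. A minimal sketch of the arithmetic (the population figure is inferred from the article's own numbers, not stated directly):

```python
# Back-of-envelope check of the reported cheating rates.
# The student population is inferred from ~7,000 cases at 5.1 per 1,000;
# it is an assumption for illustration, not a figure from the article.

cases_2023_24 = 7_000   # "almost 7,000 proven cases" in 2023-24
rate_2023_24 = 5.1      # proven cases per 1,000 students, 2023-24
rate_2022_23 = 1.6      # per 1,000 students, 2022-23
rate_2024_25 = 7.5      # projected per 1,000 students, this year

students = cases_2023_24 / rate_2023_24 * 1_000
print(f"Implied student population: ~{students:,.0f}")                        # ~1,372,549
print(f"Implied 2022-23 cases:      ~{students * rate_2022_23 / 1_000:,.0f}")  # ~2,196
print(f"Projected 2024-25 cases:    ~{students * rate_2024_25 / 1_000:,.0f}")  # ~10,294
```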

The data highlights a rapidly evolving challenge for universities: trying to adapt assessment methods to the advent of technologies such as ChatGPT and other AI-powered writing tools.

  • I Cast Fist@programming.dev · 21 hours ago

    The metric is “how much of the assignment does this tool do for you?” A translator does zero (I’m assuming a translator in class, though any “translate this” assignment coupled with Google Translate or similar would fit in with AI). Computers made the writing and copy-pasting part easier and faster, but they didn’t do the assignment. Google made the “find the correct stuff” part easy: you no longer needed to manually look through several books or ask people where the answer might be, but you still had to find it. And if an assignment asked for something that wasn’t perfectly answered on some page on the internet, like some random equation or specific programming code, you’d still have to work something out yourself.

    With AI, you just throw in the prompt and copy the result. “A patient arrived and has complained about severe chest pain. What procedures should be performed, and in which order?” - Don’t know, don’t care, the AI wrote something, so it must be true. Copy-paste, send, done.
    That leads to the point where, if the people who are supposed to actually know stuff only pretend to know thanks to AI covering their asses, why should anyone bother with them? Skip the middleman and just ask an AI the same thing they’d ask, and be done with it. “AI doc, I feel like my heart is burning, what do I do?” - Is the answer right or wrong? Don’t know, don’t care; the person who should have known didn’t, either.