AI Ghostwriting Is Creeping Into Science—Is That a Bad Thing?

cryptonews.net, 09/07/2025, 22:01

Which Words Give AI Away?

A new study of over 15 million biomedical abstracts on PubMed found that at least 13.5% of scientific papers published in 2024 show signs of having been written with AI assistance, particularly from tools such as OpenAI’s ChatGPT.
The research, conducted by Northwestern University and the Hertie Institute for AI in Brain Health at the University of Tübingen, found a sharp increase in 2024 in word patterns associated with AI-generated text. These included both uncommon terms such as ‘delves,’ ‘underscores,’ and ‘showcasing,’ and familiar ones such as ‘potential,’ ‘findings,’ and ‘crucial.’
To assess this change, researchers compared word frequencies in 2024 against baseline data from 2021 and 2022. They discovered 454 words frequently overused by AI models, including ‘encapsulates,’ ‘noteworthy,’ ‘underscore,’ ‘scrutinizing,’ and ‘seamless.’
Experts caution, however, that word frequency alone isn’t enough to confirm AI use.
> “Language changes over time,” said Stuart Geiger, assistant professor of communication at UC San Diego. “‘Delve’ has skyrocketed, and this word is now in the vocabulary of society, partly because of ChatGPT.”
Geiger emphasized that detecting AI in writing is a technical and ethical challenge.
> “The only way to reasonably detect LLM use is if you’re there, surveilling the writing process,” he noted, mentioning the high moral and logistical costs involved.
He cautioned against jumping to conclusions based solely on surface indicators without understanding the broader context.
> “It could be they’ve just seen a bunch of ChatGPT-generated writing and now think that’s what good writing looks like,” he explained.
As AI text becomes more prevalent, educators are implementing detection tools with varying effectiveness. In October 2024, Decrypt tested leading AI detection tools such as Grammarly, Quillbot, GPTZero, and ZeroGPT, yielding inconsistent results. Some tools indicated that the U.S. Declaration of Independence was predominantly AI-generated, while others assessed it differently.
> “There’s a lot of snake oil being sold,” Geiger said.
Geiger noted that concerns surrounding AI writing tools mirror past debates over spell check, Wikipedia, and CliffsNotes, raising broader issues about writing, authorship, and trust.
> “People are concerned that when you had to write the words yourself, you had to think about them,” he stated.
Rice University Professor Kathleen Perley argued that if AI writing assists researchers without jeopardizing quality, especially for non-native speakers or those with learning challenges, it can be beneficial.
> “If AI helps researchers overcome challenges like language barriers or learning disabilities, and doesn’t compromise the originality or quality of their work, then I don’t see a problem with it,” she stated.
Perley also highlighted a dilemma where individuals might alter their writing style to avoid suspicion of AI use, noting a newfound awareness of words that may be flagged.
While some criticize AI-assisted writing for its lack of personality, Perley believes it can democratize participation in research.
> “Sure, we might get more ‘delves’ and em dashes,” she commented. “But if AI helps people from different backgrounds share important research, I don’t care how polished it sounds—it’s worth it.”



