At a certain point, these are going to become Dog Bites Man stories. At my last job, I used an LLM to assist with my writing precisely to avoid plagiarism via paraphrasing, not to commit it.

But those were, for the most part, government press releases already technically in the public domain. The ethical concern was claiming someone else’s words as my own via laziness or just a momentary lapse in judgment.

That can and does happen, and LLMs are invaluable for removing most of that risk.

But a fucking book review? Why were you even reading someone else’s before sitting down to write? The point of literary analysis is to bring your expertise to the table, not to find out what others thought.

</rant>

The New York Times has cut ties with a freelance journalist after discovering he used artificial intelligence to help write a book review that echoed elements of a review of the same book in the Guardian.

It came after a New York Times reader flagged similarities between the paper’s January review of Watching Over Her by Jean-Baptiste Andrea, written by author and journalist Alex Preston, and an August review of the same book written by Christobel Kent in the Guardian.

The New York Times launched an investigation, during which Preston admitted that he had used AI to assist writing the review and did not spot the sections that were pulled from the Guardian before submitting it. In a statement to the Guardian on Tuesday, Preston said that he was “hugely embarrassed” and had “made a serious mistake”.

The New York Times alerted the Guardian to the overlap in an email sent on Monday, and added an editor’s note to the review acknowledging the use of AI and linking to the Guardian piece. “A reader recently alerted the Times that this review included language and details similar to those in a review of the same book published in the Guardian,” reads the editor’s note. “We spoke to the author of this piece, a freelance reviewer, who told us he used an AI tool that incorporated material from the Guardian review into his draft, which he failed to identify and remove. His reliance on AI and his use of unattributed work by another writer are a clear violation of the Times’s standards.”

  • SaveTheTuaHawk@lemmy.ca · 1 month ago
    You’re either a freelance journalist or an AI user; you can’t be both.

    But this highlights how LLMs are just plagiarizing.

    • Powderhorn@beehaw.org (OP, mod) · 1 month ago
      That’s an incredibly binary view. I have been a freelance journalist (still am, in fact) and have ethically used LLMs. The failure here was the “writer” not bothering to check his work. The first example I can recall of computers just getting things wrong was the FDIV issue on the first crop of Pentiums. People get things wrong every day, but there’s usually some sort of hierarchical structure that stops the error from causing real issues.

      Say, editors at The New York Times. But in this case, plagiarism sailed right through. The absolute last thing you want as an editor is a reader pointing out your fuckup. Politicians complaining? Sources complaining? That’s a day ending in “y.” A reader pointing out international plagiarism is the sort of thing that causes nightmares.

      Think about what the Times had to do here. Acknowledge the malfeasance, contact the original pub with an apology, and then … then, you get to write the editor’s note explaining to everyone how said fuckup happened.

      The LLM is not the issue; the lapse in judgment was.

        • Powderhorn@beehaw.org (OP, mod) · 1 month ago
          I’m extremely freelance these days. My last solid gig was covering state and federal grants for renewable-energy projects (and other infrastructure like BEAD) nationwide.

          I was laid off Jan. 20, 2025, shortly after noon EST.

      • Hetare King@piefed.social · 1 month ago

        What is there to check? Any information that wasn’t in the prompt, right or wrong, is information that didn’t come from the user. That matters especially here: it’s a book review, so most of the content is supposed to be this guy’s opinion, which makes anything that didn’t come from him worthless. So if the only valuable information is what he actually put in himself, the prompt and whatever parts of the review he wrote, what is there to check in the worthless text the LLM produced?

        Of course, in most cases it’s a journalist’s job to relay information, not to be the source of it. But I’d say part of that job is understanding the information well enough to relay it responsibly. And one of the most effective ways both to confirm for yourself that you understand it and to deepen that understanding is to think about how to put it in words, i.e. write it down. It’s not perfect, and many journalists were screwing up this part long before LLMs were a thing, but it’s certainly better than checking whether whatever an LLM regurgitated matches your shallow understanding.