• FaceDeer@fedia.io
      10 days ago

      In your personal opinion, and based on the articles you’ve seen describing the damage in dramatic terms that bait the clicks of people who are already predisposed to think negatively of AI.

      And so the echo chamber resonates on.

      • Whostosay@sh.itjust.works
        10 days ago

        That’s a lot of assumptions. I was thinking more along the lines of the environmental and cognitive impacts that are being studied. Also, let’s stop calling it AI, because that’s not what it is.

        If anything, I’m predisposed not to trust something when the very first thing about it, its name, is deceptive.

        • FaceDeer@fedia.io
          9 days ago

          The term “AI” was first coined in 1956 at the Dartmouth workshop and covers a broad range of topics in computer science. Machine learning and language models most certainly do fall under that category.

          Refusing to call LLMs “AI” looks to me like an instance of the AI effect in action, in which anything computers can do is no longer regarded as an example of “real” intelligence. It’s a goalpost shift.

          It used to be that the Turing Test was a big deal. Or being able to beat a human chess grandmaster. Or a Go grandmaster, once chess was reliably being won by computers. Just the other day ChatGPT was able to generate a novel proof for an unsolved Erdős problem that mathematicians are now using as a basis for new discoveries. What’s the next goalpost?

            • FaceDeer@fedia.io
              9 days ago

              Writing birthday cards and cheating on school assignments? It can already do that quite well; that’s a goalpost that has already been passed.