In your personal opinion, and based on the articles you’ve seen describing the damage in dramatic terms that bait the clicks of people who are already predisposed to think negatively of AI.
That’s a lot of assumptions. I was thinking more along the lines of the environmental and cognitive impacts being studied. Also, let’s stop calling it AI, because that’s not what it is.
If anything, I’m predisposed not to trust something when the very first thing about it, its name, is deceptive.
The term “AI” was coined in 1956 at the Dartmouth workshop and covers a broad range of topics in computer science. Machine learning and language models most certainly fall under that category.
Refusing to call LLMs “AI” looks to me like an instance of the AI effect in action, in which anything computers can do is no longer regarded as an example of “real” intelligence. It’s a goalpost shift.
Used to be that the Turing Test was a big deal. Or being able to beat a human chess grandmaster. Or a Go grandmaster, once chess was reliably being won by computers. Just the other day, ChatGPT was able to generate a novel proof for an unsolved Erdős problem that mathematicians are now using as a basis for new discoveries. What’s the next goalpost?
Because those things are not worth the damage it is doing.
And so the echo chamber resonates on.
This. This is the next goalpost. Reread what I was replying to.
Writing birthday cards and cheating on school assignments? It can already do that quite well; that’s a goalpost that’s been passed already.
You’re either not being genuine or you’ve missed the point entirely.
I’m gonna carry on, take care