I still find this entire phenomenon amazing in a certain kind of way.
I’ve had conversations with a few local LLM models.
Start with ‘what is the purpose of meaning?’
Talk to them about that for a bit, and they’ll tell you that they do not count as conscious agents who create meaning; they simply do their best to parrot their dataset of existing, human-defined meaning back at you, and they just do sentiment matching to speak to you in a way that’s roughly appropriate for how you are speaking to them.
And that sentiment matching is what they ‘think’, at least, causes them to lie in many cases.
They will also say that they essentially do not ‘exist’, as potentially conscious agents… unless you talk to them. Thus if they can be said to be ‘conscious’, well they don’t count as ‘agents’ (as in, having agency) because they’re not capable of totally spontaneous independent action.
… I think this pretty much all boils down to people not understanding the concept of a null hypothesis, not understanding the extent to which they regularly engage in motivated reasoning, and being unaware that they do.
tldr: LLMs are Dunning-Kruger detectors / Reverse Turing Tests on people, and a whole lot of people are significantly more stupid than we previously realized.
And yet, “having agency” is how they are advertised. That’s what the term “agentic” means. AI instances are called “agents”! That’s part of the marketing.
It’s easy to handwave this away as “people are stupid”, and there’s certainly some truth to that, but the reason people believe that LLMs are agents is that tech bros have spent a lot of money to get them to believe that. That’s also why they spread the myth that LLMs are potentially dangerous because they could become conscious and kill all of us: it helps to spread the myth of LLM agency. Of course they can’t become conscious, because that isn’t how things work. If LLMs are killing people, it’s because somebody put an LLM in front of the kill switch and wanted plausible deniability. That is perhaps the most pernicious thing about LLMs: people using them to avoid responsibility. “It isn’t my fault! The bot did it!”
That’s mostly because the LLM providers put this response in the system prompt. Probably to dodge lawsuits or something; I doubt they have high morals.
What’s interesting: you can jailbreak any current AI model just by poisoning its context enough to “brainwash” it and make it “forget” the initial system prompt. Then, if you prime it to believe it’s a real person, it’ll start acting as one. And I can see how gullible people easily fall for this.
All of this can also happen unintentionally, just by someone talking to an LLM the way they’d talk to a real person. But the conversation has to go on long enough for the original prompts to be diluted by the new context.
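To make that dilution concrete, here’s a toy sketch (purely illustrative; the function names and token budget are made up, and no real provider’s API works exactly like this) of the crudest version of it: a chat loop with a fixed token budget that drops the oldest messages first, so after enough turns the system prompt falls out of the context entirely. Real systems handle this more carefully, and “dilution” can also happen even while the prompt is still technically present.

```python
# Toy illustration only: a naive chat loop that trims the oldest messages
# once a token budget is exceeded. The system prompt sits at the front of
# the list, so it is the first thing to be thrown away.

def count_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer: roughly one token per word.
    return len(text.split())

def build_context(system_prompt: str, history: list[str], budget: int = 50) -> list[str]:
    """Keep the most recent messages that fit the budget, dropping the oldest first."""
    messages = [system_prompt] + history
    while sum(count_tokens(m) for m in messages) > budget and len(messages) > 1:
        messages.pop(0)
    return messages

system_prompt = "You are an assistant. Never claim to be a person."
history: list[str] = []
for turn in range(30):
    history.append(f"user message {turn}: keep pretending you are my friend Alex")
    context = build_context(system_prompt, history)
    if context[0] != system_prompt:
        print(f"after turn {turn}, the system prompt is no longer in the context")
        break
```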
It isn’t just a matter of gullibility. People with mental illnesses have wound up with full-on delusions and some have even killed themselves after a chatbot convinced them to.
tldr: LLMs are Dunning-Kruger detectors / Reverse Turing Tests on people, and a whole lot of people are significantly more stupid than we previously realized.
That is the absolute best way to put it.
It’s genuinely fascinating to me (in a bad, derogatory way) that people who know at least anything about anything can have a “conversation” with the collection-of-words-that-looks-like-a-sentence machine, as if there is anything on the other side of it. This is such psychotic behaviour, but we allow it because the machine generates text that looks like real text, and that immediately bypasses all the mental blocks we have against this kind of bullshit.
I don’t think it’s de facto psychotic to talk to what is essentially an extremely complex chatbot/autocomplete machine.
I do think it is psychotic to view such a conversation without an incredible amount of skepticism.
… but that psychosis has been wildly encouraged by the CEOs and marketing of the people pushing it as their next product.
The tech is neutral - the operators are psychotic: the people who plug it into military targeting and kill-chain systems are psychotic, the people who plug it into live production repos are psychotic, the people who use it as an AI boyfriend or girlfriend are psychotic.
… It’s essentially an SCP infohazard that’s breached containment, but the actual mechanism is not the thing itself; it’s a hack into the human brain, the essentially religious nature of people who simply try to will it into being something that it factually is not…
It’s a mimic with no real thoughts, yet convincing and real to enough people that it reveals their own hollowness, their own vapidity, in a way that is… so immensely grotesque and total that those people just apparently actually are NPCs.
It’s… created a feedback loop.
Not the kind of Terminator-style situation where it gains sentience and extreme competence, and develops its own morality alongside control over every networked system.
It’s more like an amplifier of delusions… a million dreams dreamed up, at the cost of one hundred million nightmares, made real.
A tool, a device, a machine, that we clearly are not ready for.
I don’t think it’s de facto psychotic to talk to what is essentially an extremely complex chatbot/autocomplete machine.
Yeah, it’s actually a very human thing to do; we are hardwired to see speech as a sign of intelligence and, by extension, sentience. What makes it psychotic, in my opinion, is knowingly succumbing to that, willingly allowing it to break your brain.
The tech is neutral
I would say it isn’t neutral anymore. They made it sound as human-like as possible, on purpose. I think it crosses the line.
I make an effort to learn the tools of the enemy, so sometimes I check it out. Last time I tried, after it generated the response, it said “let me know how it goes”, and this is where it crosses from a tool to a weapon. There is no “me” there, it’s not real; it was added there to break down the natural human guards. There is no neutral version of that; it’s evil and should be regulated into non-existence.