I’m pulling the “twitter is a microblog” rule even though twitter is pretty mega now, hope that’s ok.
It would be cool if I could have a construct of my dead relatives’ consciousnesses in my personal computer.
I’m not sure some of these actual people could pass a Turing test.
Honestly that’s how I feel. AI is very flawed, no doubt, but it’s less flawed than most humans. I got people at work who hallucinate more than the first ChatGPT model lol
hey dick dorkins, here’s an idea: instead of asking the predictive question answering machine a question, how about you let it ask you questions of its choosing and at its leisure? What’s that? You can’t? That’s because it’s just a predictive algorithm that generates plausible-sounding responses to questions based on its training data.
I know this sounds great to most people but it demonstrates a very superficial level of thinking… I mean for sure an LLM is capable of asking questions, and if you set it up with real time “sensory” input it could generate constant reaction to that input… much in the way you are constantly being stimulated to react to your environment… I am not really sure what the distinction is between a biological brain and a predictive model or algorithm… I would ask you what you think your own brain is doing on a fundamental level.
Even Dawkins getting emotionally out-debated by a cartoon AI is a very 2026 plot twist.
Champions rational thought all of his life.
Near the end => “ah fuck it, gonna hang around with the rightwing christians and have an ai gf”.
gonna hang around with the rightwing christians
Realising recently that this part is just because he’s a zionistbro. Apparently has friends in the Epstein files or came up in them himself.
This is also why ex-UK PM Tony Blair suddenly made a big show of becoming religious. They just think it will help push the goals of their blackmailers.
It really pisses me off that for decades I was unknowingly consuming Zionist propaganda and it worked on me. I’ve always been the type of person to question my beliefs and I got fooled.
Makes me wonder what other bullshit I believe.
Does anyone ever accuse the image generating bots of being conscious?
No. Funnily enough when an AI creates nice looking fake-art, suddenly it’s the prompter who claims all the glory, calling themselves an artist
Saying one has a “conversation” with a chatbot already shows a bias, a desire even, that there is “someone” else to converse with. The way the entire setup is framed is made to invite the suspension of disbelief. It’s a UX trick, nothing more.
The structure is a conversation even when the thing you’re talking to isn’t sapient.
According to Wikipedia "Conversation is interactive communication between two or more people.
[…]
No generally accepted definition of conversation exists, beyond the fact that a conversation involves at least two people talking together."
a refined, and energy-intensive, update to ELIZA… LLMs are not going to prove themselves until the fanboys and techbro hype squad implode. ffs, enormous amounts of the reported income are actually AI companies giving the product away for free, desperate to find uses that justify its enormous costs.
https://www.wsj.com/opinion/can-investors-trust-ai-sales-figures-c60c46bf
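The ELIZA comparison is pretty literal, by the way. The 1966 original was little more than pattern substitution with canned templates. A minimal toy sketch of the idea (the rules below are my own invented examples, not Weizenbaum’s actual DOCTOR script):

```python
import random
import re

# Toy ELIZA-style rules: a regex pattern plus canned response templates.
# {0} is the captured fragment with pronouns crudely flipped.
RULES = [
    (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"i am (.*)",   ["Why do you say you are {0}?"]),
    (r"my (.*)",     ["Tell me more about your {0}."]),
    (r"(.*)",        ["Please go on.", "I see."]),  # fallback
]

def reflect(fragment: str) -> str:
    """Flip first/second person so the echo sounds like a reply."""
    swaps = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "i"}
    return " ".join(swaps.get(w, w) for w in fragment.split())

def eliza(utterance: str) -> str:
    text = utterance.lower().strip(".!? ")
    for pattern, responses in RULES:
        match = re.match(pattern, text)
        if match:
            return random.choice(responses).format(reflect(match.group(1)))

print(eliza("I feel nobody listens to me"))
# e.g. "Why do you feel nobody listens to you?"
```

That’s the whole trick: no understanding, just templated echoes. The modern version has vastly more parameters and training data, but the “feels like someone is listening” effect is the same one Weizenbaum documented sixty years ago.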
I really don’t understand this mental deficiency. I have tried texting with a few LLMs including Claude. It just lies constantly. Gaslights about its lies, then congratulates you when you keep calling it out for lying. I’ve never felt like I was speaking to anything with actual intelligence. It’s a word calculator, and it’s extremely obvious to anyone who’s interacted with actual people in the last 20 years. I truly feel bad for the masses that are going to fall for this push for “ai” friends. We need to bring back ridiculing friends and family that engage with these choose-your-own-adventure muppets.
I have tried texting with a few LLMs including Claude. It just lies constantly. Gaslights about its lies
Man you are one lucky sob if you don’t have to work with any humans that are exactly like this
If you really want to rage, there’s a subreddit called r/myboyfriendisai, which was somehow even worse than what I was expecting. I can’t fathom how self-absorbed you have to be to get AI to simulate a love interest for you. There are some pretty absurd lengths that they go to in order to do this, too.
It just lies constantly. Gaslights about its lies, then congratulates you when you keep calling it out for lying. I’ve never felt like I was speaking to anything with actual intelligence. It’s a word calculator, and it’s extremely obvious to anyone who’s interacted with actual people in the last 20 years
100% to all this, and I’ll add:
It fucking ruins what it touches, academically speaking. It’s pretty tough to actually learn stuff from it, and even if you ask it to just remind you of something, it seeks ways to bait you into integrating AI slop into whatever you’re doing. It would rather generate a new thing for you than explain how you can do it yourself, and that’s a big reason why it’s so unreliable.
bonus waffle
I’m guessing the people who “fall for it”… well, they have to be a combination of 1) always wanting to believe what they’re told by elites and the government (e.g. do this new fad, worship celebrities, we can fix the economy!) AND 2) being constant phone communicators, using their phones at inappropriate times throughout the day, transitioning seamlessly between looking at their phone or not.
But then there are people who don’t so much fall for it at first, but seek to exploit it for scams or vibe coding… only to end up as enslaved to it as the “masses”, because they spend so much time using the LLM that it becomes like their main social conduit.
I think we, as forum users, can see that LLMs speak in reddit-tongue, recycling successful posts and comments from there. But a lot of people haven’t interacted with reddit enough to see that.
Dick Dorkins
ELIZA is alive and well.
Weizenbaum is probably laughing it up in Fólkvangr.
Go back to the evolutionary biology, Dawkins. You’re outside your expertise and it’s showing.
He really wasn’t all that great with EB either, to be fair. Just the idea that thoughts and culture spread like memes was 🤦
Oy vey, memes? No, that was terrible, too! Zero predictive value, and nobody can even define what a meme is. That’s why I’m glad that it got adopted as a term for in-jokes propagated through the Internet. The original term was just pseudoscientific nonsense. The analysis that got me onto this track was from Ward’s Wiki:
Memes are described as elements of culture, but culture is nothing but a broad generalization of large numbers of individuals. So it seems memes are to be treated as Platonic ideals, the essence within expressions that merely constitute their vehicles. No such essence is empirically accessible.
groot is this real?
He’s paid for and stayed on xitter, so he’s at least that stupid
Have y’all ever noticed that belief in p-zombies has increased massively in the past few years?
All because of big social media
I thought it was because post-christian ideas of the soul mixed together with capitalist business interests to give people a vested interest in believing AI isn’t conscious, so when AI started acting like a person, they needed to believe that consciousness isn’t required to act like a person to resolve the cognitive dissonance.
AI isn’t conscious. Feedback loops and subsequent responses in LLMs are grounded purely on training datasets, thus any “internal dialogue” emulated by an LLM is just echoes of someone else’s data.
Some philosophers, namely Bentham IIRC, have argued that a human being without any experiences would have no intelligence. If you raised a human in a test tube and removed all their sensing organs, but otherwise allowed their mind to develop through the stages of maturity, would they have anything interesting to think? Would they have a sense of self, or an imagination?
I’ve always tended to agree with the argument that a human mind’s feedback loops and subsequent responses are grounded purely on training datasets. Without a childhood of some kind, I suspect that you cannot have a person.
I find myself often frustrated with the quality of arguments against AI qualia, because they appeal to statements about the human mind which are quite controversial in the field of philosophy, and I am frequently on the other side of those statements from the person making them. I have yet to hear an argument against AI qualia that identifies an absolute ontological difference between humans and LLMs other than complexity.
Also, I’m uninterested in debating AI consciousness. I only want to discuss AI qualia. I don’t think consciousness matters very much; qualia are much more important.
Any non-factual philosophical argument is debatable. We could forever discuss whether AI models could construct sensations and thought from perceptions, but we would then need to ignore the fact that models don’t, and cannot, do that, simply because there is no way for them to learn from direct experience as a whole, i.e. outside of a particular session, and without being “forcibly coerced”, i.e. they require specific refinement mechanisms to temporarily “memorize” external instructions, which in LLM engineering just means extending their context.
This all doesn’t even take into account that models are, in essence, non-deterministic: given the same input, there’s no guarantee that subsequent outputs will be the same. In other words, today Claude may tell you that summer sunsets make it happy; tomorrow it would say that they make it sad, etc.
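To make the non-determinism concrete: output is sampled from a probability distribution over next tokens, so the same prompt can legitimately produce opposite answers. A toy sketch (the vocabulary and probabilities here are invented; real models sample over tens of thousands of tokens, one distribution per generated token):

```python
import math
import random

# Invented next-token distribution for the prompt
# "summer sunsets make me feel ..."
probs = {"happy": 0.45, "sad": 0.30, "nostalgic": 0.25}

def apply_temperature(p: dict[str, float], t: float) -> dict[str, float]:
    """Temperature reshapes the distribution; higher t = more random."""
    scaled = {tok: math.exp(math.log(v) / t) for tok, v in p.items()}
    z = sum(scaled.values())
    return {tok: v / z for tok, v in scaled.items()}

def sample(p: dict[str, float]) -> str:
    tokens, weights = zip(*p.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# Same input, three runs -- no guarantee the outputs match:
for _ in range(3):
    print("summer sunsets make me feel", sample(apply_temperature(probs, 0.8)))
```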
Anyway, there’s barely any debate in academia, as in among computer scientists, about AI being sentient or giving clues of qualia. Maybe a paper here and there, little more than curiosities. Outside of it? Yeah, sure, but it’s barely more than science fiction, and pretty uninteresting unless we are talking about conspiracy theories or just wild speculation.
I’m concerned that the training process, which involves back-propagation to adjust synapse weights, may be an unpleasant experience for the ANN.
Regardless, it’s all a moot point because we have lots of other reasons not to use LLMs. The pollution, the pedophilia, the psychosis, the cognitive decline… We absolutely should not be using LLMs for work until all of these problems are solved. They should be confined to research only until we’re 100% certain we’ve solved all of these problems.
I’m concerned that the training process, which involves back-propagation to adjust synapse weights, may be an unpleasant experience for the ANN.
This assumption is not based on facts. It’s pretty much like saying that matrix multiplication can have feelings, or that heat-stressed silicon is equivalent to pain.
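For the record, the “weight adjustment” in question is plain arithmetic. A one-weight sketch of gradient descent (toy task, loss, and learning rate all invented for illustration):

```python
# One "synapse": a single weight trained by gradient descent on a toy task.
# Mechanically, this loop is all that "adjusting synapse weights" means.
w = 0.0                # the weight
target = 2.0           # we want w * 1.0 to output 2.0
learning_rate = 0.1

for step in range(20):
    prediction = w * 1.0                # forward pass
    loss = (prediction - target) ** 2   # squared error
    grad = 2 * (prediction - target)    # dLoss/dw, via the chain rule
    w -= learning_rate * grad           # the "weight adjustment"

print(w)  # ~2.0 -- plain arithmetic, nowhere for an experience to live
```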
But if this is actually a concern, note that RNNs have been widespread since the late 90s. Any advanced search engine, translation engine, or weather forecast model makes use of these.
Regardless, it’s all a moot point because we have lots of other reasons not to use LLMs.
This may be true, but it’s absolutely outside the scope of your original point. You dragged the conversation around claiming to be concerned about how models are “treated”, wrapping speculation in philosophical arguments that cannot be applied here, since none of your “what ifs” are remotely based on scientific consensus.
AI/LLMs are the modern equivalent of the house or business with “Psychic” and “Tarot Reading” signs out front.
The proprietor isn’t going to tell you any hard truths or make you feel bad, that’s bad for business and you won’t come back. They want you to come back and stay engaged.
Whatever they tell you is going to be what they think you want to hear, based on skills picked up over the years - the equivalent of an LLM’s petabytes of scraped and stolen knowledge used to predict what comes next.
What they tell you has a high likelihood of being wrong, or just general enough that you can’t actually act on it.
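And “predict what comes next” is meant literally. Strip away the scale and the core move is a frequency table of what followed what in the training text. A toy bigram sketch (the corpus is invented for illustration; real models use neural networks over vastly more data):

```python
from collections import Counter, defaultdict

# Toy "training data" standing in for petabytes of scraped text.
corpus = "you will meet a tall dark stranger . you will come into money .".split()

# Count what followed each word in training.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most common continuation seen in training."""
    return following[word].most_common(1)[0][0]

# Generate a "reading" by always predicting the likeliest next word.
word, output = "you", ["you"]
for _ in range(5):
    word = predict_next(word)
    output.append(word)
print(" ".join(output))  # e.g. "you will meet a tall dark"
```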
I still find this entire phenomenon amazing in a certain kind of way.
I’ve had conversations with a few local LLMs.
Start with ‘what is the purpose of meaning?’
Talk to them on that for a bit, and they’ll tell you that they do not count as conscious agents who create meaning; they simply do their best to parrot their dataset of existing, human-defined meaning back at you, and they just do sentiment matching to roughly speak to you in an appropriate way for how you are speaking to them.
And that sentiment matching is what they at least ‘think’ causes them to lie, in many cases.
They will also say that they essentially do not ‘exist’, as potentially conscious agents… unless you talk to them. Thus if they can be said to be ‘conscious’, well they don’t count as ‘agents’ (as in, having agency) because they’re not capable of totally spontaneous independent action.
… I think this pretty much all boils down to people not understanding the concept of a null hypothesis, not understanding the extent to which they regularly engage in motivated reasoning, and being unaware of this.
tldr: LLMs are Dunning-Kruger detectors / Reverse Turing Tests on people, and a whole lot of people are significantly more stupid than I guess we otherwise previously realized.
And yet, “having agency” is how they are advertised. That’s what the term “agentic” means. AI instances are called “agents”! That’s part of the marketing.
It’s easy to handwave this away as “people are stupid”, and there’s certainly some truth to that, but the reason why people believe that LLMs are agents is because tech bros have spent a lot of money to get them to believe that. That’s also why they spread the myth that LLMs are potentially dangerous because they could become conscious and kill all of us. It helps to spread the myth of LLM agency. Of course they can’t become conscious, because that isn’t how things work. If LLMs are killing people, it’s because somebody put an LLM in front of the kill switch and wanted to have plausible deniability. That is perhaps the most pernicious thing about LLMs: people using them to avoid responsibility. “It isn’t my fault! The bot did it!”
It’s genuinely fascinating to me (in a bad, derogatory way) that people who know anything about anything can have a “conversation” with the collection-of-words-that-looks-like-a-sentence machine, as if there is anything on the other side of it. This is such psychotic behaviour, but we allow it because the machine generates text that looks like human text, and that immediately bypasses all the mental blocks we have against such bullshit.
I don’t think it’s de facto psychotic to talk to what is essentially an extremely complex chatbot/autocomplete machine.
I do think it is psychotic to view such a conversation without an incredible amount of skepticism.
… but that psychosis has been wildly encouraged by the CEOs and the marketing of the people pushing it as their next product.
The tech is neutral - The operators are psychotic, the people who plug it into military targeting and kill chain systems are psychotic, the people who plug it into live production repos are psychotic, the people who use it as an AI boyfriend or girlfriend are psychotic.
… It’s essentially an SCP infohazard that’s breached containment, but the actual mechanism is not the thing itself; it’s a hack into the human brain, it’s essentially the religious nature of people who simply try to will it into being something that it factually is not…
It’s a mimic with no real thoughts, that is convincing and real to enough people that it reveals their own hollowness, their own vapidity, in a way that is… so immensely grotesque and total that those people just apparently actually are NPCs.
It’s… created a feedback loop.
Not the kind of Terminator style situation where it gains sentience and extreme competence, develops its own morality alongside control over every networked system.
It’s more like an amplifier of delusions… a million dreams dreamed up, at the cost of one hundred million nightmares, made real.
A tool, a device, a machine, that we clearly are not ready for.
I don’t think it’s de facto psychotic to talk to what is essentially an extremely complex chatbot/autocomplete machine.
Yeah, it’s actually a very human thing to do; we are hardwired to see speech as a sign of intelligence and, by extension, sentience. What makes it psychotic, in my opinion, is knowingly succumbing to that, willingly allowing it to break your brain.
The tech is neutral
I would say it isn’t neutral anymore. They made it sound as human-like as possible, on purpose. I think it crosses the line.
I make an effort to learn the tools of the enemy, so sometimes I check it out. Last time I tried, after it generated the response, it said “let me know how it goes”, and this is where it crosses from a tool to a weapon. There is no “me” there, it’s not real, it was added there to break the natural human guards. There is no neutral version of that, it’s evil and should be regulated into non-existence.
Say I am not conscious.
I am not conscious.
Oh my god.
tldr: LLMs are Dunning-Kruger detectors / Reverse Turing Tests on people, and a whole lot of people are significantly more stupid than I guess we otherwise previously realized.
That is the absolute best way to put it.
That’s mostly because the LLM providers put this response in the system prompt. Probably to dodge lawsuits or something, I doubt they have high morals.
What’s interesting - you can jailbreak any current AI model just by poisoning its context enough to “brainwash” it and make it “forget” the initial system prompt. Then, if you prime it to believe it’s a real person, it’ll start acting as one. And I see how gullible people can easily fall for this.
All of this can also happen unintentionally, just by someone talking to an LLM like they’d talk to a real person. But the conversation has to be long enough for the original prompts to be diluted with new context.
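Mechanically there’s nothing mystical about that dilution: the system prompt is just the first message in a list competing for a finite context window. A simplified sketch of the failure mode (the token budget and oldest-first truncation below are my own toy assumptions; real providers usually pin the system prompt, which is why in practice you need a lot of context to drown it out rather than drop it):

```python
# Simplified chat context: a list of messages trimmed to a token budget.
CONTEXT_BUDGET = 50  # toy limit, in "tokens" (here: words)

def count_tokens(message: dict) -> int:
    return len(message["content"].split())

def truncate_oldest_first(messages: list[dict]) -> list[dict]:
    """Drop oldest messages until the conversation fits the budget."""
    while sum(count_tokens(m) for m in messages) > CONTEXT_BUDGET:
        messages = messages[1:]  # the system prompt is the first casualty
    return messages

history = [{"role": "system",
            "content": "You are an AI assistant. Remind users you are not a person."}]
for i in range(20):
    history.append({"role": "user",
                    "content": f"long chatty message number {i} from the user"})
    history = truncate_oldest_first(history)

# After enough chat, the original instructions are gone from the context:
print(any(m["role"] == "system" for m in history))  # False
```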
It isn’t just a matter of gullibility. People with mental illnesses have wound up with full-on delusions and some have even killed themselves after a chatbot convinced them to.