According to Clayton, the AI agent involved didn’t take any technical action itself, beyond posting inaccurate technical advice, something a human could have also done.
Producing inaccurate technical advice, with a confident tone, at scale.
If that LLM were an employee, it would get a formal warning, then be demoted or fired as the behavior continued.
Wait til this starts happening in the construction industry.
That sounds sweetly naive. “Producing inaccurate technical advice, with a confident tone, at scale” sounds like the perfect credentials for a career in consultancy.
That’s a good way to describe LLMs: very bad and very prolific consultants.
“Rogue AI,” as if it’s some sentient evil thing, when it’s just an LLM with too many permissions… This timeline is so dystopian, yet simultaneously incredibly lame. I hate it.
“Flagrant security lapse causes an incident when software engineer uses inappropriate tool for the job.”
“Inappropriate tool also weirdly good at gaslighting engineers and managers.”
An AI apocalypse won’t come from an AI becoming sentient, but from some idiot putting AI where it shouldn’t be.
“We installed Gemini into all US nuclear silos.”