I've been banned for "AI slop" on a few subs here on Lemmy as well as on Reddit.
I always provide a good amount of technical detail in my posts and I try to be as transparent and communicative as I can about the details. My projects are very complicated and I try to document them well.
My project is pretty cryptography-heavy… I share my efforts in an attempt to show transparency… but that openness is used against my project by calling it AI slop (which undermines the spirit of Kerckhoffs's principle: the design should be open to scrutiny).
It's 2026 and most developers are using AI. I have used it to create things like formal proofs and verification.
My project aims to be a secure messaging app. I have all the bells and whistles there along with documentation… but if the conversation can't move past "it's AI-generated"… then it seems the cryptography/cybersecurity/privacy community isn't aligned with the fact that using AI is now common practice for developers of all levels.
AI is a tool. You can't (and shouldn't) "trust" AI to do anything without oversight. AI does not replace the due diligence that has always been needed. I don't "trust" my hammer to bash in a nail… I "use" the hammer. AI is no different in that you are responsible for how it's used.
I've busted my ass on this project only for it to be called AI slop. I think that's completely fine when it comes from folks in the community… cryptography is a serious subject and my ideas and implementation SHOULD/MUST be scrutinised… but it's simply ignorant for mods to ban me over the quality of my work, considering the level of transparency and my engagement in discussions about it.
It's a bit reductive to call it slop. I think I try harder than most in providing links, code and documentation. Of course I used AI… and it's clearer for it. (You can find more detail on my profile.)
I am of course sour from being banned, but am I wrong to think my code isn't AI slop? Some parts of my project are clearly lazy UI… but I'm not sharing on some UI/UX/design sub. The cryptography module has unit tests and formal verification. If that counts as AI slop and can result in me being banned, I simply don't have faith in that community to be objective about where AI can genuinely contribute.
While it's understandable that people don't want to review AI slop… I think the cryptography/cybersecurity community needs to get on board with the idea of using AI to help review such code. Am I wrong? Is the future of cryptography still people performing manual review of the breathtaking volumes of AI code?


I started off with a version I created manually without AI. I know how to do this old-school (I tried). That was a different kind of slop.
https://github.com/positive-intentions/chat
I use AI in a way I think is appropriate. I check as much as I can myself too. I post online about details and questions. I can iterate with AI. I may be naive to think I know how to inspect what is created, so I share it online. I'm not sharing slop. This is the best I can do. Of course there are countless points of improvement, but there are only so many hours in the day.
You're sharing a valid opinion, but it's difficult for me to quantify my efforts. I'm sure you don't think I just asked AI something basic (e.g. "verify this code is correct").
If you can't write it manually without it being slop, then you can't program effectively with an LLM either.
It doesn't matter how much instruction you give an LLM; it fundamentally cannot evaluate itself. To ensure it's evaluating correctly, either you or someone else has to do the evaluating. These are not deterministic machines and will "lie" to reach their goals. I put that in quotes because it's not really lying; that's too much personification.
These things are not good for literally anything beyond minor transformations or boilerplate.
Trust me. If you actually spend the time learning to write well-crafted software by hand, you will save time and get a better result. LLM-based coding is an anti-pattern.
You're not getting pushback because "programmers are upset their jobs are getting stolen"; you're getting pushback because you're falling for LLM company propaganda. LLMs just are not there yet.
If more than 20% of your code is written by an LLM, you're using it wrong.