Been banned for AI-Slop on a few subs here on Lemmy as well as on Reddit.
I always provide a good amount of technical detail in my posts, and I try to be as transparent and communicative about the details as I can. My projects are very complicated and I try to document them well.
My project is pretty cryptography-heavy… the act of sharing my efforts is an attempt to show transparency… but it is used against my project by calling it AI slop (undermining Kerckhoffs's principle: a system should stay secure even when everything about it except the keys is public).
It's 2026 and most developers are using AI. I have used it to create things like formal proofs and verification.
My project aims to be a secure messaging app. I have all the bells and whistles there along with documentation… but if the conversation can't move past "it's AI-generated"… then it seems the cryptography/cybersecurity/privacy community isn't aligned with the fact that using AI is now common practice for developers of all levels.
AI is a tool. You can't (and shouldn't) "trust" AI to do anything without oversight. AI does not replace the due diligence that has always been needed. I don't "trust" my hammer to bash in a nail… I "use" the hammer. AI is no different: you are responsible for how it's used.
I've busted my ass on my project only for it to be called AI slop. I think that's completely fine when it comes from folks in the community. Cryptography is a serious subject and my ideas and implementation SHOULD/MUST be scrutinised… but it's simply ignorant if mods are banning me over the quality of my work, considering the level of transparency and my engagement in discussions about it.
It's a bit reductive to call it slop. I think I try harder than most in providing links, code and documentation. Of course I used AI… and it's clearer for it. (You can find more detail on my profile.)
I am of course sour from being banned, but am I wrong to think my code isn't AI slop? Some parts of my project are clearly lazy UI… but I'm not sharing on some UI/UX/design sub. The cryptography module has unit tests and formal verification. If that is AI slop and can result in me being banned, I simply don't have faith in that community to be objective about where AI can contribute.
While it's understandable that people don't want to review AI slop… I think the cryptography/cybersecurity community needs to get on board with the idea of using AI to help review such code. Am I wrong? Is the future of cryptography still people performing manual review of the breathtaking volumes of AI code?
I feel you @xoron@programming.dev. At work, my entire team uses AI coding assistants. I see that there are many tasks that benefit from AI assisted coding, and I see the team shipping more as a result.
If you ask me, it's AI slop only if it's a low-effort, low-quality contribution. But you can absolutely use AI tooling to create high-quality code, and some of the best engineers on my team do so.
I think there’s a lot of negative sentiment in this community towards AI, and I personally think that denying that AI is a useful tool for writing more and better code at this point in time is somewhat denying reality.
Yes we can. Watch me
In my opinion, slop is slop. AI tends to result in slop, but it doesn’t have to. But to ensure it’s not slop, one has to put in effort and time. Which kind of defeats the purpose of using AI in the first place. So I think it’s obvious why most people default to AI involvement = slop.
AI involvement = slop
That's the part that seems disconnected from reality. I'm sure there are still people cranking out code manually, but let's be real: it isn't normal anymore.
In cybersec there is more scrutiny than in most fields against the use of AI… I simply can't believe that the folks at WhatsApp, Signal or SimpleX are not using AI in their daily workflow.
Why is your app cryptography-heavy if it's a messenger? Don't you just have to call msg.encrypt() or similar and then the library handles the rest?
https://github.com/positive-intentions/signal-protocol
That there is just the tip of the iceberg of how I'm dealing with the cryptography.
I can't get a proper audit, so I use these communities to share my ideas and find any details I'm overlooking.
There are some interesting aspects to asynchronous encrypted messengers.
https://youtu.be/9sO2qdTci-s
Not that I would trust some random stranger's slop over established projects like Signal.
Completely understandable, hence the proactive attempt to get a professional security audit, so I can avoid asking people to "trust me".
It's completely understandable that you want to use something established. I can't offer more than open source and transparency in the implementation. If "trust" is behind the "paywall" of a security audit, it's simply not an option without support.
I used AI to generate an audit. It took several days of my time and effort to get it to where it is. I made a genuine attempt to be objective.
In SWE we already have things in place for this, like unit tests. If we dive further into cryptography, we have things like formal proofs and verification.
Formal verification has tooling (which predates AI) to help make sure things work and behave as they should: it takes the code, builds abstractions of it, and checks properties against those abstractions. If we question whether AI can be used alongside such tooling, we start questioning whether the tooling we use is good enough (and it's pretty widely used!).
If the conversation can't move past the fact that I used AI, then we're not really having a discussion.
Some communities have “no-AI” rules. If you didn’t break any, maybe you’ve been targeted by moderators that partake in cancel culture?
If that's the case, it at least helps to sift through communities. And if worst comes to worst, maybe start a personal community to share what you make?
Unless you invented some new form of encryption, why are you generating so much AI slop?
Just reuse human-made cryptography libraries that are battle-tested. Then you won't have to do disastrous things like using AI to review your AI slop.
You know that it lies, gaslights, and writes or deletes production databases, tests, etc. as it pleases.
You're right. My version of what you're describing exists here: https://github.com/positive-intentions/chat
It's not AI slop, but slop of a different kind. It's purely a web app and uses audited crypto primitives from the browser. WebRTC is already encrypted, but there is an additional Diffie-Hellman key exchange (you can compare public key hashes to guard against MITM attacks). I put time and effort there and documented it to seek some kind of open source support. It didn't work out.
My plan was always to beef up the encryption. I wanted to add the Signal protocol. I asked on Reddit and couldn't find something suitable.
https://www.reddit.com/r/crypto/comments/1mi4ooa/looking_for_the_signal_protocol_in_javascript
So I used AI to sweat it out myself: https://www.reddit.com/r/signal/comments/1orsjw2/signal_protocol_in_javascript
There is a great deal of effort that I simply can't quantify.
the cryptography module has unit tests and formal verification.
Did you develop the spec by hand or is AI also involved in the spec development?
As much as I dislike the garbage proofs these AIs like to write, that is still better than a garbage spec…
I suspect your formal proof refers to the following files: https://github.com/positive-intentions/signal-protocol/tree/staging/formal-proofs
It contains 6 files, each with fewer than 100 lines of code, and the claim seems to be that they almost prove the entire security of the Signal protocol.
Unless the formal proof community has advanced enormously without me knowing, you can definitely submit a paper to top PL conferences. The best state of the art known to me is Signal* from Project Everest. It involves tens of components and years of work by top academics and proof engineers.
- Here is the website: https://project-everest.github.io/
- Here is the paper: https://www.computer.org/csdl/proceedings-article/sp/2019/666000b256/1dlwheTvbEI
- Here is the code: https://github.com/Inria-Prosecco/libsignal-protocol-wasm-fstar
Each file here, like fstar/Impl.Signal.Core.fst, would already be longer than your entire proof; even just the hints provided to the SMT solvers are longer than your entire proof. So I am interested: what technique did you apply to achieve almost the same effect as this monumental project with less than 5% of the code?
Was the AI you're using trained like most, by scraping the internet and disregarding the licenses of code?
I used opencode (various models) and Cursor (Claude, Composer).
How these models are trained is arguably not ethical. The disregard of code licences is not something I can influence.
It absolutely is something you can influence.
That’s the entire point. You can not use the slop machine.
Boys don't 'cott anymore? Then how about a girlcott?
You might be expecting too much nuance from online communities. It's easy and fun to oversimplify and dunk on a perceived common enemy. Lemmy has a very AI-critical community. I imagine on Reddit you might get less backlash, at least depending on the community. You might also find more AI-friendly places here. In any case, trying to fight against a community bias is often a fool's errand. I'm sure your code isn't slop, but I don't think you'll be able to change the minds of random, biased people on the internet with no incentive to really listen to you anyway.
I'm sure you already know all the reasons why people are against AI and are sick of having to defend yourself. Still, I want to add that even if you use AI as a tool instead of vibe-coding, as a consumer I wouldn't trust any privacy/security-critical software that's developed with the use of AI. As a layman I can't check how secure your software is, so I have to rely on simple signifiers to make my judgements. At this point in time, AI is a red flag for me for security reasons alone. I know it's not "fair" or "accurate", but I don't have the time and knowledge to individually check every piece of software to that extent. I know allegedly every programmer now uses AI in some form to code (I personally don't and most people I know don't either, but I'm sure it's just my bubble), but it's not a sign of quality code in my mind.
Another thing I want to add is that your hammer comparison should probably include how the hammer was produced and how much resources your hammer consumes to function. There is a strong ethical argument against the use of AI for most use cases. I’d include coding and code reviews. Again, that doesn’t make your code slop, but it might help you understand why so many people are ready to dismiss it as that.
I have absolutely no intention of stopping.
Or reading that much begging about it.
I don't think everything is getting called AI slop, but I would say if any part of your project is AI slop (like your "lazy UIs"), I'd immediately lose trust in the entirety of the project, especially if it's intended to be about security. I do think most projects that use AI for code generation are slop, though; I've seen far fewer examples of good use (i.e. where the output looks human-written because the operator reviewed and refactored every part of it, or where it was used to write small parts of functions rather than entire functionalities).
Your last sentence, I think, provides a great argument for why people here (and more broadly in engineering) hate on AI-generated code in general. It produces such vast quantities of code (and often unnecessarily) that it becomes infeasible for a human to review it, immediately requiring us to place trust in the machine to both generate it and review it, and to continue maintaining it while the human operator probably does not even have full understanding of what's changing. A machine that we all know hallucinates and generates often low-quality garbage, including severe security vulnerabilities, by design. According to GitHub, your project has millions of lines of changes on a weekly basis in the earlier days; that does scream slop to me.
Lastly, AI is more and more hated due to the increasing number of horrible impacts it has on our world; personally I'd not support AI-generated projects on that principle alone.
The recent post that got me banned was a copy of this post here:
I make a point in all my posts to be clear about the caveats. I'm not promoting this to replace anything. Details to find out more are there, along with advice not to use it for sensitive data.
For my messaging app, the caveats are similarly mentioned: https://positive-intentions.com/docs/technical/p2p-messaging-technical-breakdown
My projects are research and development projects, which I make sure to make clear when I post about them. I'm fairly consistent with advice around cautious use… knowing full well that it will deter people. I'm proactively seeking criticism in order to improve it.
It produces such vast quantities of code (and often unnecessarily) that it becomes infeasible for a human to review it, immediately requiring us to place trust in the machine to both generate it and review it, and to continue maintaining it while the human operator probably does not even have full understanding of what’s changing.
Bingo!.. You're framing it as a negative, which is understandable, but unless I'm mistaken, that's the way it's going to have to go. Software development, broadly speaking (for better or worse), is going to be AI-generated. The tooling and methodologies have to keep up.
horrible impacts it has on our world
That's pretty vague; I'm sure it does some good too. AI is a tool. It's easy to talk about how AI is impacting people badly. Personally, I've been unemployed for the past few months. It's a horrible experience to go through countless interviews thinking I aced them, only to still come up with a rejection because the field has become so competitive. But I don't blame AI for that. It's a tool that I need to learn how to use. Perhaps others use it better than me.
I don't know the context around you getting banned, unless there are some specific rules you violated. I am not in support of that, but it's also not the focus of my message.
I disagree with development having to go that way. If anything, the hatred towards ai is a sign that it’s actively not sought after, or at least not with LLMs. If they managed to develop actual AI that is on par with senior engineers, maybe? But we don’t have that. What we have is faulty and inherently flawed. Why would we have to push ahead forcefully with it…?
I didn't include a list of why AI is harmful as the post was already long, but displacing workers is just one point.
- massive waste of resources (as in water, electricity) for tasks which can already be achieved without AI for a fraction of the compute cost (think, search engines as an example). Also consider the environmental impact here in a society where a lot of our power still comes from burning fossil fuels.
- a war on consumer hardware (all compute components "sold out" for 1-2 years ahead, making everything expensive for average people)
- destruction of the workforce pipeline (even if only junior roles get displaced by AI, we will simply not have a pipeline of new staff to step in once seniors have had enough; in any industry this is catastrophic, especially when the machine doing this is not actually able to fully replace staff)
- building a dependence on closed-source, subscription-based tooling, or ending up locked out of your own codebase because it's infeasible to work without it once you've started
- theft of intellectual property ignoring all licensing for training data, or companies selling individual contributions
- the entire thing being funded by imaginary money propped up by a circle of loans driving us towards yet another financial collapse across the modern world
I’m sure there are even more.
Not all of these are the fault of the technology, but I’m more than happy to throw the entire technology and everything around it under the bus if it means it makes it easier for people to unite against these companies - which I think it does.
Saying “it’s a tool and provides value” is like saying “force feeding chickens in a tiny cage” is a tool that provides value. True? Yes. Valid? No.
So many critical bugs and security holes have been made from an oversight of the people handling the code.
Now you want to tell me that instead of having people write code that tries to make sense, and then review it (sometimes a bit too late), you want to have a hallucination machine produce some code randomly, then have people "fix" it, then review it?
This is just a recipe for disaster.
AIs are not "AIs"; they're just bullshit generators that everyone is falling for. Technical debt and unreliable code were already the main problems of software dev, and AIs make exactly those two worse, in exchange for the illusion of speed.
If you train monkeys to pile up bricks, it doesn’t make a house, it makes a disaster waiting to happen. And monkeys, unlike AIs, are actually intelligent and sentient, which would make them more reliable still.
I think you need to speak to a mental health specialist, because AI psychosis can be really destructive. We all have problems, but using chat bots to make us feel better is dangerous for you and those around you, even if it feels good in the moment. These bots are designed to tell you exactly what you want to hear so that you become addicted to them.
I’m going to guess you didn’t accomplish much as a software engineer before AI? The personal deficiencies at the core of that are still there even if you use AI to tell you otherwise. I won’t speculate what those deficiencies are, but I just want you to engage in some honest introspection. Absolutely nobody will trust someone like you to handle such a sensitive topic like cryptography. Stop wasting your short time on this earth on something so stupid. Go make literally anything else.
Wow, that's deep analysis and advice. I generally think I do well.
I work on my project and cryptography because it's interesting. I worked with cryptography long before AI… but like a "regular" developer on a side project, I'm going to use AI.
I actively seek advice about the code in my project. I only share my work after I've put in what I think is enough time and effort. It clearly isn't enough that the project "works". In cybersec it's important for code to be audited or reviewed; that fundamentally isn't an option on a project like mine unless I share something that gets described as "AI slop". That feedback is fine. It's important that it's open source.
It might not be fun for most, but this is something I work on because it's enjoyable to me. It's open source for transparency and criticism. I just want to take "AI" as a criticism off the table because I can't quantify my involvement… which is an understandably wild thing to ask, so I try to approach it with caution.
I work on several projects that interest me. Many, but not all, are open source. They exist because I woke up some day and decided I wanted to create something.
I generally think I do well.
What are some of your engineering or research accomplishments? Where is your LinkedIn or GitHub profile showing projects before ~2022?
I worked with cryptography long before AI
What kind of work did you do with cryptography? It couldn’t have been much if you don’t see what’s wrong with what you’re doing. “I set up LetsEncrypt on a web server” doesn’t count as experience.
Any answer you provide to these questions is worthless unless you're willing to reveal your identity here. That's the only way to build any credibility, and without credibility nobody should trust you with something like this.
this is something I work on because it's enjoyable to me
No, this is something you’re working on because you’re hoping to make money from it. I remember you posting about this project some months ago and you mentioned as much. If it isn’t AI psychosis, then it’s a grift and you’re a snake oil salesman. Idk what you’re expecting to hear? This is a programming community; it’s probably the last place you’ll get positive feedback for this obvious trainwreck.
The rest of your reply aside, I do disagree with one point in particular.
Where is your LinkedIn or GitHub profile showing projects before ~2022?
A public GitHub profile and LinkedIn history are not reliable indicators of comparative programming competence.
That is, it's entirely possible to be a competent programmer and also not want to participate in self-marketing or promotion.
They are sometimes indicative of the soft skills that go along with being a programmer.
The only project relevant here is: https://positive-intentions.com/
The parts I want open source are on GitHub. My project wasn't always open source. I created it without AI agents, then open sourced it thinking it would gain more trust with users… and it did. But a key observation is that there are folks like yourself who will never be satisfied. If open source code, docs and my communication aren't enough… I have no delusion that identifying myself would benefit the project in any way… it's simply a vector by which people will highlight why I'm not qualified to work on the project.
Criticism in cybersec is common and expected. My ideas should be challenged. But the code is right there. Feel free to ignore any details you think might not be up to your quality standard. You linked my previous post, which is more technical about how my app works. You can ask for further clarity on those details… but your criticism of previous posts suggests to me that you don't actually want clarity, because you already have the references to find out more.
The project is enjoyable for me; it's why I still work on it. Would it be wild for me to want to make money from it? I'm trying to be more transparent about my process. This post highlights my AI usage and how I'm using it to create high-effort work. "High-effort" is hardly quantifiable, but I see many responses along the lines of "AI can't be trusted to do things perfectly"… as if I don't also agree with that. You linked my previous post, which I would hope made it clear that my AI prompt wasn't "create me a messaging app".
A key and worrying observation is that mentioning I use AI is the only thing that makes a difference in feedback about the project (as per the subject of this post). You can see that my previous post was significantly better received than this current one. That is the project where I'm using AI… because, duh, it is a game changer.
The point I'm making in the OP still stands: people can't see past my project once I mention I used AI. Human effort has never been easy to quantify… the best you've got is story points, and that's hardly meaningful.
Are you even reading the criticisms people are making, or just asking an LLM to generate responses for you? Or have you developed brain damage as a result of so much LLM usage? Those are the most charitable explanations for your behavior here, because you seem to be incapable of understanding the criticisms. This has nothing to do with “effort”.
When I said you should seek a mental health professional, that wasn’t a joke. Some people might say that as a joke, but I wasn’t. AI psychosis is a serious thing we don’t fully understand yet. From my perspective, you’re just an annoying Lemmy user/possible troll, which is easy to ignore. But if you’re not just trolling, then you’re probably damaging your health and/or relationships for a very stupid reason.
I avoid slop code like yours because typically the user of the slop generator has no real idea of how things actually work, the slop is over-“engineered”, and it’s likely full of security issues. Further, it also wastes tons of resources just for poorly written slop.
I especially wouldn’t ever touch your cryptographic slop.
Cryptography is notoriously easy to get wrong. If you don’t know enough about it - you should not offload it to the hallucination machine, because you will not be able to verify it properly, and those who can - will not bother to.
This is not what a real audit looks like and it should not be presented as such. This “audit” is, in fact, slop.
Auditor: Security Analysis (Automated + Manual Review)
Do you not see the problem in this line?
The implementation uses real cryptographic primitives
Or this?
Perfect. You get it. You understand that generating an AI audit is wild!
The AI audit came after a long time of to-and-fro with the various communities that asked for an audit… of course they asked for a professional one… but those that ask must know that professional audits are prohibitively expensive, especially for a solo vibe-coding dev like myself.
I also understand that people would prefer a project with a team of experts… sorry to break it to you, but a team of experts is not going to hire itself onto an unfunded project like this.
While the security audit, unit tests, formal proofs and verification are not good enough when done with AI, my hope was that they could serve as a starting point for anyone like ROS to perform an actual review. I can't offer more transparency than open source, documentation and discussion.
of course they asked for a professional one… but those that ask must know that professional audits are prohibitively expensive, especially for a solo vibe-coding dev like myself
then… vibe-code something else?.. why do you think that you should be making something you are not an expert in, that can potentially put your users into danger and make you liable for it? if it’s a learning project - great, go wild. but if it’s intended to be used, then sorry - this is just an irresponsible approach that should not be entertained by anyone. I get that you have “positive intentions” but pick some other venue that you can get right. or contribute to an existing project (being mindful of contribution guidelines).
I vibe-code a lot of things. My project is not inherently dangerous. People can use any software irresponsibly. In my project and all my communications about it, I make it clear to users to use it cautiously and that it's presented for testing and demo purposes. It's mentioned in all of my posts, and I also have terms and conditions within my projects that explain as much.
Nobody is being tricked into sharing sensitive information… in fact, I made a proactive attempt to create something that doesn't need any personal information.
Don't tell me what I should and shouldn't be coding. I put time and effort into testing and verifying. This is the issue with mentioning AI: it undermines all other efforts. It's the low-hanging fruit of criticism.
“I vibe-code a lot of things. My project is not inherently dangerous.”
Except it is dangerous. The fact that you declare yourself a vibe coder implies to me that you don't know what's going on in the system you're developing. Correct me if I'm wrong, but everything you know about your projects from this point on is strictly a function of what generative AI is outputting. Do you see how insidious that is, where all your knowledge comes from a single source and you believe it?
The reason I'm pointing this out is that these AI models make mistakes due to text compression. Much like JPEG, when you compress a file, data is lost. The so-called "hallucinations" of AI models are caused by this data loss from compressing text.
Now, whether engineers are trying to optimize that or not does not matter; it is a fundamentally flawed system. If you use it as a framework for developing codebases, you won't be able to tell whether the code it generates actually implements the logic or is fit for commercial use; you involuntarily agree with the code it generates and don't question anything about it.
This doesn't even take into account the errors I've observed in areas that are abstract for humans to check. Things like time complexity, or code for physical hardware and microcontrollers, where what the AI generates functionally does nothing. Someone who cannot unit test something abstract like time complexity until it runs wouldn't know.
What I'm trying to point out is: you have to have a level of skepticism about the code it generates. Learn the subject (in whatever you do) so that you can question what it generates, and use it to verify your code (that you write) rather than having it write code for you.
Then what is the point of it existing, if it can't be used seriously? Why should people spend their time on it when there isn't a solid base to build on? If you want to do something useful, contribute to an existing project. If you just wanna hack away at something, sure, do that; just don't be surprised if other people happen to hate it when you try to present it as a serious project. Nobody would bat an eye if you presented it as "I wanted to try and implement the Signal protocol; this is what I've learned and how far I've gotten".
“I wanted to try and implement the Signal protocol; this is what I've learned and how far I've gotten”
Way ahead of you: https://www.reddit.com/r/signal/comments/1orsjw2/signal_protocol_in_javascript
I'm banned for AI slop, but that is a small fraction of the complexity of my project.
I always try to post in a way that makes it clear the project is far from finished.
my project is not inherently dangerous.
It is not “your” project - it was generated by a glorified chatbot. Since you lack the experience to judge its output, I cannot trust you to verify the security of the project.