I've been banned for "AI slop" on a few subs here on Lemmy as well as on Reddit.

I always provide a good amount of technical detail in my posts, and I try to be as transparent and communicative as possible about the details. My projects are very complicated and I try to document them well.

My project is pretty cryptography-heavy. I share my efforts openly in an attempt to show transparency, but that openness gets used against the project by calling it AI slop. That undermines the spirit of Kerckhoffs's principle: a design should be open to full public scrutiny, and punishing openness discourages exactly that.

It's 2026 and most developers are using AI. I have used it to create things like formal proofs and verification.
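To show the shape of what I mean by a machine-checked property (this is a toy illustration I'm writing here, not one of my project's actual proofs): the round-trip property of a XOR-based cipher can be stated and proved in a few lines of Lean, and the checker either accepts it or it doesn't, regardless of who (or what) drafted it.

```lean
-- Toy example: machine-checked proof that XOR encryption round-trips.
def bxor (a b : Bool) : Bool := a != b

def encrypt (m k : Bool) : Bool := bxor m k
def decrypt (c k : Bool) : Bool := bxor c k

-- Decrypting with the same key always recovers the message.
theorem roundtrip (m k : Bool) : decrypt (encrypt m k) k = m := by
  cases m <;> cases k <;> rfl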

My project aims to be a secure messaging app. It has all the bells and whistles along with documentation… but if the conversation can't move past "it's AI-generated", then it seems the cryptography/cybersecurity/privacy community isn't aligned with the fact that using AI is now common practice for developers of all levels.

AI is a tool. You can't (and shouldn't) "trust" AI to do anything without oversight. AI does not replace the due diligence that has always been needed. I don't "trust" my hammer to drive in a nail… I "use" the hammer. AI is no different in that you are responsible for how it's used.

I've busted my ass on my project only for it to be called AI slop. Scrutiny is completely fine when it comes from folks in the community; cryptography is a serious subject and my ideas and implementation SHOULD/MUST be scrutinised. But it's simply ignorant for mods to ban me over the supposed quality of my work, considering the level of transparency and my engagement in discussions about it.

It's a bit reductive to call it slop. I think I try harder than most in providing links, code and documentation. Of course I used AI… and it's clearer for it. (You can find more detail on my profile.)

I am of course sour from being banned, but am I wrong to think my code isn't AI slop? Some parts of my project are clearly lazy UI, but I'm not sharing on some UI/UX/design sub. The cryptography module has unit tests and formal verification. If that counts as AI slop and can get me banned, I simply don't have faith in that community to be objective about where AI can contribute.
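For anyone unsure what "unit tests" buys you here, this is the kind of round-trip and tamper-detection test I mean. To be clear: the cipher below is a toy stand-in I'm sketching for this comment (SHA-256 counter keystream plus an HMAC tag), not my project's actual primitives, and it is not for production use; the point is the shape of the tests, which hold regardless of who wrote the implementation.

```python
# Toy authenticated cipher, used only to demonstrate test structure.
# Do NOT use this construction for real data.
import hashlib
import hmac
import os

def _keystream(seed: bytes, n: int) -> bytes:
    # Derive n pseudorandom bytes by hashing seed + counter blocks.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    nonce = os.urandom(16)
    ks = _keystream(key + nonce, len(plaintext))
    ct = bytes(a ^ b for a, b in zip(plaintext, ks))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag  # layout: 16-byte nonce | ct | 32-byte tag

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    expected = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed")
    ks = _keystream(key + nonce, len(ct))
    return bytes(a ^ b for a, b in zip(ct, ks))

# Test 1 -- round-trip: decrypt(encrypt(m)) must recover m exactly.
key = os.urandom(32)
msg = os.urandom(100)
assert decrypt(key, encrypt(key, msg)) == msg

# Test 2 -- tamper detection: flipping one ciphertext bit must be rejected.
blob = bytearray(encrypt(key, msg))
blob[20] ^= 1  # corrupt a ciphertext byte
try:
    decrypt(key, bytes(blob))
    raise AssertionError("tampering went undetected")
except ValueError:
    pass
```

Tests like these don't prove a design is secure, but they do catch the "plausible-looking but broken" failure mode that people associate with generated code.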

While it's understandable that people don't want to review AI slop, I think the cryptography/cybersecurity community needs to get on board with the idea of using AI to help review such code. Am I wrong? Is the future of cryptography still people performing manual review of the breathtaking volumes of AI-generated code?

  • toebert@piefed.social · 1 day ago

    I don't think everything is getting called AI slop, but I would say that if any part of your project is AI slop (like your "lazy UIs"), I'd immediately lose trust in the entirety of the project, especially if it's intended to be about security. I do think most projects that use AI for code generation are slop, though; I've seen far fewer examples of good use (i.e. where the output looks human-written because the operator reviewed and refactored every part of it, or where it was used to write small parts of functions rather than entire features).

    Your last sentence, I think, provides a great argument for why people here (and more broadly in engineering) hate on AI-generated code in general. It produces such vast quantities of code (often unnecessarily) that it becomes infeasible for a human to review it, immediately requiring us to place trust in the machine to generate it, review it, and continue maintaining it, while the human operator probably does not even have full understanding of what's changing. A machine that we all know hallucinates and often generates low-quality garbage, including severe security vulnerabilities. According to GitHub, your project had millions of lines of changes on a weekly basis in its earlier days; that does scream slop to me.

    Lastly, AI is more and more hated due to the increasing number of horrible impacts it has on our world; personally, I'd not support AI-generated projects on that principle alone.

    • xoron@programming.dev (OP) · 1 day ago

      The recent post that got me banned was a copy of this post here:

      https://www.reddit.com/r/cybersecurityai/comments/1sxvrmu/browserbased_file_encryption_no_install_or/

      I make a point in all my posts to be clear about the caveats. I'm not promoting this to replace anything. Details for finding out more are there, along with advice not to use it for sensitive data.

      For my messaging app, the caveats are similarly mentioned: https://positive-intentions.com/docs/technical/p2p-messaging-technical-breakdown

      My projects are research and development projects, which I make sure to make clear when I post about them. I'm fairly consistent with advice around cautious use, knowing full well that it will deter people. I'm proactively seeking criticism in order to improve them.

      It produces such vast quantities of code (and often unnecessarily) that it becomes infeasible for a human to review it, immediately requiring us to place trust in the machine to both generate it and review it, and to continue maintaining it while the human operator probably does not even have full understanding of what’s changing.

      Bingo! Your framing of it as a negative is understandable, but unless I'm mistaken, that's the way it's going to have to go. Software development, broadly speaking (for better or worse), is going to be AI-generated. The tooling and methodologies have to keep up.

      horrible impacts it has on our world

      That's pretty vague; I'm sure it does some good too. AI is a tool. It's easy to talk about how AI is impacting people badly. Personally, I've been unemployed for the past few months. It's a horrible experience to go through countless interviews thinking I aced them, only to come away with a rejection because the field has become so competitive. But I don't blame AI for that. It's a tool that I need to learn how to use. Perhaps others use it better than me.

      • toebert@piefed.social · 1 day ago

        I don't know the context around your ban; unless there were specific rules you violated, I'm not in support of it. But that's also not the focus of my message.

        I disagree that development has to go that way. If anything, the hatred towards AI is a sign that it's actively not sought after, or at least not with LLMs. If they managed to develop actual AI that is on par with senior engineers, maybe? But we don't have that. What we have is faulty and inherently flawed. Why would we have to push ahead forcefully with it…?

        I didn't include a list of why AI is harmful as the post was already long, but displacing workers is just one point.

        • massive waste of resources (as in water and electricity) for tasks which can already be achieved without AI at a fraction of the compute cost (think search engines, for example). Also consider the environmental impact in a society where a lot of our power still comes from burning fossil fuels.
        • a war on consumer hardware (all compute components "sold out" for 1–2 years ahead, making everything expensive for average people)
        • destruction of the workforce pipeline (even if only junior roles get displaced by AI, we will simply not have a pipeline of new staff to step in once seniors have had enough; in any industry this is catastrophic, especially when the machine doing this is not actually able to fully replace staff)
        • building a dependence on closed-source, subscription-based tooling, or ending up locked out of your own codebase because it's infeasible to work on it without AI once you've started
        • theft of intellectual property, ignoring all licensing for training data, or companies selling individual contributions
        • the entire thing being funded by imaginary money propped up by a circle of loans, driving us towards yet another financial collapse across the modern world

        I’m sure there are even more.

        Not all of these are the fault of the technology, but I’m more than happy to throw the entire technology and everything around it under the bus if it means it makes it easier for people to unite against these companies - which I think it does.

        Saying “it’s a tool and provides value” is like saying “force feeding chickens in a tiny cage” is a tool that provides value. True? Yes. Valid? No.