• null@lemmy.org · 4 days ago

    This automated system analyzed information across 305 internal servers, rapidly producing 2,597 structured intelligence reports. By automating the data analysis phase, a single operator successfully processed an intelligence volume that would traditionally require an entire team.

    That’s a great ad, not gonna lie.

      • redsand@infosec.pub · 3 days ago

        They have lots of use cases for red team: recon, enumeration, exploit chaining, fuzzing. It doesn’t matter if the error rate is 10–20%; a shell is a shell.

          • redsand@infosec.pub · 3 days ago

            It can help you write the patch, or identify threats in a SIEM or SOAR setup. But I can’t think of much else. Defense has to be correct. If your .htaccess file is 99% correct, that’s a problem.

      • cowboykermit@sh.itjust.works · 3 days ago

        I can see a few people disagree with you.

        Does anyone have a good litmus test for when the perspective might shift? TurboQuant making it easier to have larger context windows for local models gives me a pinch of hope, and I’m really holding out for a decent open-weights model I can self-host for home automation.

        I’m fully aware LLMs are just predictive text on roids and we haven’t achieved real AGI, but do we know of anything that will help us filter through the marketing?

        • GamingChairModel@lemmy.world · 2 days ago

          You can reason from a few principles:

          • At its core, the strength of these AI tools and their specialized hardware is that they can perform inference and pattern recognition at huge scale across enormous data sets.
          • Inferring a rule set for a pattern also allows generation of new data that fits that pattern.
          • Some portion of human cognitive work falls within the general framework of finding patterns or finding new data that fits an old pattern.

          So when people make claims about things with clear, objective definitions (a win condition in chess, the fastest route through a maze, the best lossless compression ratio for real-world text), it’s reasonable to believe that the current AI infrastructure can lead to breakthroughs on that front. Image recognition, voice recognition, and things like that were largely solved a decade ago. Text generation with clear and simple definitions of good or bad (simple summaries, basic code that accomplishes a clearly defined goal) is what LLMs have been doing well.

          On things that have much more fuzzy or even internally inconsistent definitions, the AI world gets much more controversial.

          But I happen to believe that finding and exploiting bugs or security vulnerabilities falls more into the well defined problem with well defined successes and failures. So I take it seriously when people claim that AI tools are helpful for developing certain exploits.

        • Jakeroxs@sh.itjust.works · 3 days ago

          You can do it locally now pretty easily depending on your use case and hardware; huggingface has all the models you’d need, and you can use something like llama-swap to serve them.
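          For a rough idea, llama-swap is configured with a YAML file that maps model names to the command that launches a backend for them; the model name and file path below are hypothetical, and the exact keys should be checked against the project’s README:

          ```yaml
          # Sketch of a llama-swap config: each entry tells the proxy how to
          # start a llama.cpp server when that model name is requested.
          # "qwen-small" and the .gguf path are placeholders, not real files.
          models:
            "qwen-small":
              # llama-swap fills in ${PORT} with the port it proxies requests to
              cmd: llama-server --port ${PORT} -m /models/qwen-small.gguf
          ```

          The point of the proxy is that requests naming different models cause it to stop one backend and start another, so several models can share one GPU without all being loaded at once.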

        • Evotech@lemmy.world · 3 days ago

          There are millions of YouTube videos on this subject.

          Qwen3.5 is very capable, and you can run it on whatever hardware you have; it just depends on the model size.

        • CheeseNoodle@lemmy.world · 2 days ago

          3D artist here: generative AI models are great at making work that looks super impressive while being completely unusable for most applications. I suspect this is what most tech workers find too.