The Qwen3.5 models are still the best local models I’ve used, so I’m excited to see how this updated version performs.

  • venusaur@lemmy.world · 21 days ago

    Thanks! That sounds expensive. Hopefully 24GB VRAM gets cheaper or models get more efficient soon.

      • venusaur@lemmy.world · 20 days ago

        Thanks! I’m hoping to run at least 20B. Idk if I can do that fast enough without 24GB. Seems to be the sweet spot.
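        For a rough sense of why 24GB lands at the sweet spot for a ~20B model, here's a back-of-the-envelope sketch (the 20% overhead figure is an assumption; real usage varies with context length, KV cache size, and runtime):

        ```python
        # Rough VRAM estimate for loading a quantized LLM locally.
        # Assumption: weights dominate memory, plus ~20% overhead for
        # KV cache, activations, and runtime buffers (illustrative only).

        def estimate_vram_gb(params_billion: float, bits_per_weight: float,
                             overhead: float = 0.20) -> float:
            """Approximate VRAM in GB needed to load the model weights."""
            weight_bytes = params_billion * 1e9 * bits_per_weight / 8
            return weight_bytes * (1 + overhead) / 1e9

        # A 20B-parameter model at common quantization levels:
        for bits in (16, 8, 4):
            print(f"{bits}-bit: ~{estimate_vram_gb(20, bits):.1f} GB")
        ```

        By this estimate a 4-bit quant of a 20B model needs on the order of 12 GB, which fits in 24GB with room for context, while an 8-bit quant is already borderline.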