TheCornCollector@piefed.zip to LocalLLaMA@sh.itjust.works · English · 21 days ago
Qwen3.6-35B-A3B released (huggingface.co)
The Qwen3.5 models are still the best local models I’ve used, so I’m excited to see how this updated version performs.
venusaur@lemmy.world · 21 days ago
Thanks! That sounds expensive. Hopefully 24GB VRAM gets cheaper or models get more efficient soon.

Jakeroxs@sh.itjust.works · 21 days ago
You'd want to wait until smaller models for 3.6 are released; I'd assume that'll be soon.

venusaur@lemmy.world · 20 days ago
Thanks! I'm hoping to run at least a 20B model. I don't know if I can do that fast enough without 24GB; it seems to be the sweet spot.
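For context, a rough sketch of the VRAM math behind that "24GB sweet spot" claim. This is a back-of-envelope estimate only: the function name, the ~0.5 bytes/parameter figure (roughly 4-bit quantization), and the fixed overhead for KV cache and activations are all illustrative assumptions, not measurements.

```python
def estimate_vram_gb(params_billions: float,
                     bytes_per_param: float = 0.5,
                     overhead_gb: float = 2.0) -> float:
    """Rough VRAM needed to load a quantized model.

    bytes_per_param=0.5 approximates 4-bit (Q4) quantization;
    overhead_gb is a crude allowance for KV cache and activations.
    """
    weights_gb = params_billions * 1e9 * bytes_per_param / 1024**3
    return weights_gb + overhead_gb

# A 20B model at ~4-bit: roughly 9.3 GB of weights plus overhead,
# which fits in 24 GB with room for a longer context, but is tight on 12 GB.
print(round(estimate_vram_gb(20), 1))  # → 11.3
```

Longer contexts or less aggressive quantization (Q6/Q8) push the number up quickly, which is why 24GB tends to be comfortable for the ~20B class.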