

I’ve used Gemma4-31B for agentic coding and it’s actually very good as far as local models go. It’s less verbose than qwen3.5, so it ends up being faster too. Gemma4-26B can do agentic work, but it’s noticeably worse, so you have to go slow with it. I haven’t had any coherence issues like other commenters mention, but I’ve only been using higher-quality quants from unsloth on llama.cpp.





I find llama.cpp with Vulkan EXTREMELY reliable. I can have it running for days at a time without a problem. As for tokens/sec, that’s a complicated question because it depends on the model, quant, speculative decoding, KV cache quant, context length, and card distribution. Generally:
These are typical speeds at deep context for agentic use; simple chats will be faster:
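For reference, here’s a sketch of the kind of llama-server invocation I mean, showing where those speed factors come in. Model filenames are placeholders, and the exact flags assume a reasonably recent llama.cpp build (Vulkan is chosen at build time with `-DGGML_VULKAN=ON`, not by a runtime flag):

```shell
# Main model plus a small draft model for speculative decoding
# -c sets context length; -ngl offloads all layers to GPU
# --cache-type-k/v quantize the KV cache to save VRAM at deep context
# -ts splits tensors across two cards (adjust ratio to your setup)
./llama-server \
  -m models/main-model-q5_k_m.gguf \
  -md models/draft-model-q4_0.gguf \
  --draft-max 16 \
  -c 32768 \
  -ngl 99 \
  --cache-type-k q8_0 \
  --cache-type-v q8_0 \
  -ts 1,1
```

Every one of those knobs moves tokens/sec, which is why a single number is hard to give.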