

OK, I really appreciate the depth you've put into your answers.
I always look at these grading rubrics people post for models and I’ve never seen an example of how they get ranked.
At this point I don't think I'll be ranking models myself. I'm not an enthusiast (yet) — just running some ~30B models at home for various things and trying to stay afloat in an ecosystem that's significantly more complicated than I had imagined when I started.
But I really appreciate what you’ve written and I’m going to save all this.
Last questions: I see that you used Claude to come up with your test questions, right? How do you validate the anchor answers if you're not an expert in the field?
Do you do this professionally?



What are your use cases for LLMs? That's a lot of effort to rate models if you're not seeking a specific strength or outcome in your work.