Ask HN: How do you choose a model for a task?

How do you decide a model is good enough for a given task? Right now I use Opus for planning and harder tasks and switch to Sonnet for more defined tasks. But I feel like Sonnet is kind of stupid and is introducing issues because it can’t grasp the larger context? Is there some definitive way to say a model is good enough for a task? Or is it all vibes?

8 points | by bix6 20 hours ago

10 comments

  • wontopos 1 hour ago
    Mostly vibes, but you can make the vibes more reliable. I set an “error budget” per task type: if I have to correct the model’s output more than once every ~5 runs, it’s not good enough for that task. Cheap to track, and it forces you to notice degradation instead of just feeling it.
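
    A minimal sketch of that tracker (the names and data shape are mine, not a real tool):

        from collections import defaultdict

        BUDGET = 1 / 5  # more than one correction per ~5 runs = over budget

        # (task_type, model) -> [corrections, total runs]
        runs = defaultdict(lambda: [0, 0])

        def record(task_type, model, needed_correction):
            r = runs[(task_type, model)]
            r[0] += int(needed_correction)
            r[1] += 1

        def over_budget(task_type, model, min_runs=5):
            corrections, total = runs[(task_type, model)]
            return total >= min_runs and corrections / total > BUDGET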
  • PaulHoule 19 hours ago
    Evaluation is harder than you think because of statistics.

    Like, if you want to know reliably whether one model is better than another, you have to test on hundreds if not thousands of examples that are carefully graded for difficulty, aren't in the training sets, etc.

    Practically, you might try model A and model B, use each one 2-3 times on different tasks, and walk away with the impression that A is really good and B sux -- but maybe A only looked good because you happened to ask it things it's good at, or it just got lucky and landed on the right answer anyway.

    See https://arxiv.org/html/2410.12972v1 and https://arxiv.org/pdf/2505.14810 -- those papers are considering a general space of tasks but you could totally do the same kind of eval for the tasks you care about.
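
    A sketch of the paired-comparison version of this (my framing, not something from those papers): score both models on the same examples and run a sign test on the per-example wins.

        from scipy.stats import binomtest

        def compare(a_correct, b_correct):
            """a_correct / b_correct: 0/1 per example, same examples for both models."""
            wins_a = sum(a > b for a, b in zip(a_correct, b_correct))
            wins_b = sum(b > a for a, b in zip(a_correct, b_correct))
            n = wins_a + wins_b  # ties carry no information in a sign test
            p_value = binomtest(wins_a, n, 0.5).pvalue if n else 1.0
            return wins_a, wins_b, p_value

    With only a handful of examples the p-value stays large even for real gaps, which is exactly the 2-3-runs trap above.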

    • bix6 18 hours ago
      Have you implemented any of this in practice? E.g., are you benchmarking models?
      • PaulHoule 16 hours ago
        I've done some for classification, ranking, and other sorts of non-generative tasks.
  • freedomben 19 hours ago
    This is a hard problem for me as well. Right now I've just been using the best model available (like Opus, or GPT 5.5, or Gemini Pro) but it's not ideal. My problem is that anytime I step down, the results are subtly worse, and sometimes I don't notice immediately depending on what I'm doing.

    As far as Opus vs. GPT 5.5 etc, I generally decide with:

    1. Code? -> Opus

    2. Docs? -> GPT

    3. Real-time or recent information needed? -> Gemini

    It's far from perfect though (rough sketch of the routing below). Would love to hear others' thoughts.
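
    As a toy dispatcher (the task labels are made up; the model names are just the ones above):

        ROUTES = {"code": "claude-opus", "docs": "gpt-5.5", "realtime": "gemini-pro"}

        def pick_model(task_kind, default="claude-opus"):
            # fall back to the strongest model when unsure -- matches my habit above
            return ROUTES.get(task_kind, default)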

    • bix6 18 hours ago
      Opus eats tokens so fast that I try to minimize its use, but compared to Sonnet I definitely see fewer issues in my larger projects. Sonnet has gone off the rails a few times.
  • mikejulietbravo 13 hours ago
    The short answer is that it depends on how well you define the boundaries of the task and its relative complexity. For example, a smaller model is usually fine for something like summarization, but an "easier" coding task might still actually be quite difficult unless you eval it heavily, like @paulhoule said.
  • noashavit 17 hours ago
    Gemini for recent search and google workspace automation

    Perplexity for deep research

    Claude Opus for coding, Sonnet for writing

    Gemma4 for local AI overviews and analysis

    Qwen coder for local prototyping

  • shouvik12 19 hours ago
    For short, stateless stuff (definitions, formatting, quick lookups) I have never noticed a meaningful difference between models. But for anything that requires reasoning across a lot of prior context, it's usually Claude Sonnet or Opus. Feels like the vibes will soon take me to Codex, though.
  • journal 4 hours ago
    Use the model in production that gives you an acceptable answer 1000 times in a row.
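
    For scale (my gloss, not part of the comment above): by the rule of three, n straight successes only bounds the true failure rate at about 3/n with ~95% confidence.

        n = 1000
        upper = 3 / n  # ~95% upper bound on failure rate after n clean runs
        print(f"after {n} clean runs, the failure rate could still be up to {upper:.1%}")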