Nemotron-Cascade 2: Post-Training LLMs with Cascade RL

(research.nvidia.com)

1 point | by daureg 2 hours ago

1 comment

  • daureg 2 hours ago
    The arXiv paper (https://arxiv.org/abs/2603.19220) was already submitted to HN (https://news.ycombinator.com/item?id=47530052), but it sounds like a nice local model:

    > Despite its compact size (30B MoE model with 3B activated parameters), its mathematical and coding reasoning performance approaches that of frontier open models. It is the second open-weight LLM, after DeepSeek-V3.2-Speciale-671B-A37B, to achieve Gold Medal-level performance in the 2025 International Mathematical Olympiad (IMO), the International Olympiad in Informatics (IOI), and the ICPC World Finals.