Additionally, they exhibit a counter-intuitive scaling limit: their reasoning effort increases with problem complexity up to a point, then declines despite having an adequate token budget. By comparing LRMs with their standard LLM counterparts under equivalent inference compute, we identify three performance regimes: (1) low-complexity