Decoding as Optimisation on the Probability Simplex: From Top-K to Top-P (Nucleus) to Best-of-K Samplers
Abstract
Decoding is reinterpreted as a principled optimisation layer that balances model scores with structural preferences, recovering existing methods as special cases and enabling new decoders such as Best-of-K that improve accuracy on mathematical reasoning tasks.
Decoding sits between a language model and everything we do with it, yet it is still treated as a heuristic knob-tuning exercise. We argue decoding should be understood as a principled optimisation layer: at each token, we solve a regularised problem over the probability simplex that trades off model score against structural preferences and constraints. This single template recovers greedy decoding, Softmax sampling, Top-K, Top-P, and Sparsemax-style sparsity as special cases, and explains their common structure through optimality conditions. More importantly, the framework makes it easy to invent new decoders without folklore. We demonstrate this by designing Best-of-K (BoK), a KL-anchored coverage objective aimed at multi-sample pipelines (self-consistency, reranking, verifier selection). BoK targets the probability that a fixed budget of K samples covers good alternatives, and it improves empirical performance: for example, accuracy gains of +18.6% for Qwen2.5-Math-7B on MATH500 at high sampling temperatures.
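The abstract states the template only in prose, so the sketch below makes it concrete. It is a minimal NumPy illustration assuming the standard regularised-prediction formulation q* = argmax over the simplex of ⟨q, logits⟩ − Ω(q); the function names `decode_step` and `sparsemax` are ours, and nothing here is the authors' code. In particular, the BoK coverage objective is not reproduced, since the abstract does not give its exact form.

```python
# Illustrative sketch only, not the paper's implementation.
# Template: q* = argmax_{q in simplex} <q, logits> - Omega(q).
import numpy as np

def decode_step(logits, temperature=1.0, top_k=None, top_p=None):
    """One decoding step as regularised optimisation over the simplex.

    With Omega(q) = -temperature * H(q), the closed-form optimiser is
    softmax(logits / temperature); Top-K and Top-P enter as support
    constraints, and temperature -> 0 recovers greedy decoding.
    """
    scores = np.asarray(logits, dtype=np.float64).copy()

    if top_k is not None:
        # Support constraint: keep only the K highest-scoring tokens.
        cutoff = np.sort(scores)[-top_k]
        scores[scores < cutoff] = -np.inf

    if top_p is not None:
        # Support constraint: smallest nucleus whose mass reaches top_p.
        probs = np.exp(scores - scores.max())
        probs /= probs.sum()
        order = np.argsort(-probs)                 # descending by probability
        mass_before = np.cumsum(probs[order]) - probs[order]
        scores[order[mass_before >= top_p]] = -np.inf

    if temperature == 0:
        # Zero entropy weight: the optimiser is a vertex of the simplex.
        q = np.zeros_like(scores)
        q[np.argmax(scores)] = 1.0
        return q

    # Positive entropy weight: restricted softmax at this temperature.
    q = np.exp((scores - scores.max()) / temperature)
    return q / q.sum()

def sparsemax(logits):
    """Swap entropy for Omega(q) = 0.5 * ||q||^2: the optimiser becomes
    the Euclidean projection of the logits onto the simplex (Martins &
    Astudillo, 2016), which is sparse without any truncation heuristic.
    """
    z = np.sort(np.asarray(logits, dtype=np.float64))[::-1]
    k = np.arange(1, z.size + 1)
    cum = np.cumsum(z)
    support = 1.0 + k * z > cum                    # holds for k = 1..k*
    k_star = k[support][-1]
    tau = (cum[support][-1] - 1.0) / k_star
    return np.maximum(logits - tau, 0.0)
```

Sampling a token is then `np.random.choice(len(q), p=q)`. The point of the template is that inventing a new decoder, such as BoK, amounts to choosing a different regulariser or constraint rather than a new ad-hoc truncation rule.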
Community
Librarian Bot (automated): the following papers, recommended by the Semantic Scholar API, are similar to this one.
- Geometry-Aware Decoding with Wasserstein-Regularized Truncation and Mass Penalties for Large Language Models (2026)
- Decoding in Geometry: Alleviating Embedding-Space Crowding for Complex Reasoning (2026)
- Entropy-Aligned Decoding of LMs for Better Writing and Reasoning (2026)
- Attention in Constant Time: Vashista Sparse Attention for Long-Context Decoding with Exponential Guarantees (2026)
- Spend Search Where It Pays: Value-Guided Structured Sampling and Optimization for Generative Recommendation (2026)
- Near-Oracle KV Selection via Pre-hoc Sparsity for Long-Context Inference (2026)
- OPUS: Towards Efficient and Principled Data Selection in Large Language Model Pre-training in Every Iteration (2026)


