
Collections

Collections including paper arxiv:2412.11231
Efficient Inference
Collection · Dec 22, 2024
  • Prompt Cache: Modular Attention Reuse for Low-Latency Inference

    Paper • 2311.04934 • Published Nov 7, 2023 • 33
  • Routing to the Expert: Efficient Reward-guided Ensemble of Large Language Models

    Paper • 2311.08692 • Published Nov 15, 2023 • 13
  • Exponentially Faster Language Modelling

    Paper • 2311.10770 • Published Nov 15, 2023 • 119
  • Memory Augmented Language Models through Mixture of Word Experts

    Paper • 2311.10768 • Published Nov 15, 2023 • 19