Compute-efficient-inference

FlashDecoding++: Faster Large Language Model Inference on GPUs
Paper • 2311.01282 • Published Nov 2, 2023 • 37

Exponentially Faster Language Modelling
Paper • 2311.10770 • Published Nov 15, 2023 • 119

Neural Network Diffusion
Paper • 2402.13144 • Published Feb 20, 2024 • 100

Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention
Paper • 2502.11089 • Published Feb 16, 2025 • 169