RuvLTRA Small

License HuggingFace GGUF

📱 Compact Model Optimized for Edge Devices

Quick Start • Use Cases • Integration


Overview

RuvLTRA Small is a compact 0.5B-parameter model designed for edge deployment. It is well suited to mobile apps, IoT devices, and other resource-constrained environments.

Model Card

Property        Value
Parameters      0.5 Billion
Architecture    Qwen2
Quantization    Q4_K_M (4-bit)
Context         4,096 tokens
Size            ~398 MB
Min RAM         1 GB
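
These values come from the file's GGUF metadata, so they can be checked locally after download. A small sketch, assuming the gguf Python package (pip install gguf) as an optional helper; it is not required to run the model:

from gguf import GGUFReader  # assumed helper, not required by this card

reader = GGUFReader("ruvltra-0.5b-q4_k_m.gguf")
# Prints metadata keys such as the architecture and context length
print(sorted(reader.fields.keys()))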

🚀 Quick Start

# Download
wget https://huggingface.co/ruv/ruvltra-small/resolve/main/ruvltra-0.5b-q4_k_m.gguf

# Run with llama.cpp
./llama-cli -m ruvltra-0.5b-q4_k_m.gguf -p "Hello, I am" -n 64

💡 Use Cases

  • Mobile Apps: On-device AI assistant
  • IoT: Smart home device intelligence
  • Edge Computing: Local inference without cloud
  • Prototyping: Quick model experimentation

🔧 Integration

Rust (RuvLLM)

use ruvllm::hub::ModelDownloader;

// Fetches the GGUF from the Hugging Face Hub and returns the local file path.
// Must be called from an async context (e.g. inside a Tokio runtime).
let path = ModelDownloader::new()
    .download("ruv/ruvltra-small", None)
    .await?;

Python

from huggingface_hub import hf_hub_download

# Returns the local path of the downloaded GGUF file
model_path = hf_hub_download("ruv/ruvltra-small", "ruvltra-0.5b-q4_k_m.gguf")
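
The downloaded file can then be loaded by any GGUF-compatible runtime. Below is a minimal sketch assuming llama-cpp-python as that runtime; the card itself does not prescribe a specific Python inference library.

from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # assumed runtime: pip install llama-cpp-python

model_path = hf_hub_download("ruv/ruvltra-small", "ruvltra-0.5b-q4_k_m.gguf")

# n_ctx matches the 4,096-token context listed in the model card
llm = Llama(model_path=model_path, n_ctx=4096)

output = llm("Hello, I am", max_tokens=64)
print(output["choices"][0]["text"])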

Hardware Support

  • ✅ Apple Silicon (M1/M2/M3)
  • ✅ NVIDIA CUDA
  • ✅ CPU (x86/ARM)
  • ✅ Raspberry Pi 4/5
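
The backend is chosen by the runtime at load time rather than baked into the GGUF file. A hedged sketch, again assuming llama-cpp-python compiled with the Metal or CUDA backend:

from llama_cpp import Llama  # assumed runtime, built with Metal or CUDA support

# n_gpu_layers=-1 offloads every layer to Metal (Apple Silicon) or CUDA (NVIDIA);
# leave it at the default of 0 to run entirely on the CPU (x86/ARM, Raspberry Pi 4/5).
llm = Llama(model_path="ruvltra-0.5b-q4_k_m.gguf", n_ctx=4096, n_gpu_layers=-1)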

License: Apache 2.0 | GitHub: ruvnet/ruvector
