FunctionGemma 270M - Mobile Actions (LiteRT-LM Ready)
A fine-tuned FunctionGemma 270M model optimized for on-device function calling on Android devices using Google AI Edge Gallery and LiteRT-LM runtime.
🎯 Features
- ✅ Ready-to-use: Pre-converted `.litertlm` format for immediate deployment
- ✅ On-device function calling: Runs entirely on Android devices without internet
- ✅ Optimized: INT8 quantization (~271 MB) for efficient mobile deployment
- ✅ Mobile Actions: Supports 6 native Android functions
- ✅ Low latency: Optimized with extended KV cache (1024 tokens)
📱 Supported Mobile Actions
The model can execute the following Android functions via natural language:
| Function | Example Prompt |
|---|---|
| Flashlight | "Turn on the flashlight" |
| Contacts | "Create a contact for John Doe with phone 555-1234" |
| "Send email to john@example.com" | |
| Maps | "Show Times Square on the map" |
| WiFi | "Turn off WiFi" |
| Calendar | "Create a calendar event for Team Meeting tomorrow at 2 PM" |
🚀 Quick Start
Download the Model
```bash
wget https://huggingface.co/Yagna1/functiongemma-270m-mobile-actions/resolve/main/mobile-actions_q8_ekv1024.litertlm
```
Or use Python:
```python
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="Yagna1/functiongemma-270m-mobile-actions",
    filename="mobile-actions_q8_ekv1024.litertlm",
)
print(f"Downloaded to: {model_path}")
```
Use in Google AI Edge Gallery App
- Install the Google AI Edge Gallery Android app
- Import the `mobile-actions_q8_ekv1024.litertlm` file into the app
- Navigate to the "Mobile Actions" feature
- Test with natural language prompts like:
- "Turn on flashlight"
- "Create contact John Smith"
- "Show Central Park on map"
🏗️ Model Architecture
- Base Model: google/functiongemma-270m-it
- Architecture: Gemma 3 (270M parameters)
- Quantization: INT8 (Dynamic)
- KV Cache: Extended to 1024 tokens for longer conversations
- Runtime: LiteRT-LM (Google's on-device inference engine)
📊 Model Details
| Property | Value |
|---|---|
| Parameters | 270M |
| Quantization | INT8 |
| Model Size | 271 MB |
| Format | .litertlm (LiteRT-LM) |
| Context Length | 1024 tokens |
| Target Device | Android (ARM) |
🔧 Function Calling Format
The model uses LiteRT-LM's native function calling format:
```
<start_function_call>call:function_name{param1:value1,param2:value2}<end_function_call>
```
Example outputs:
User: "Turn on the flashlight"
Model: <start_function_call>call:enableFlashlight{}<end_function_call>
User: "Create contact John Doe with phone 555-1234"
Model: <start_function_call>call:createContact{contactName:John Doe,phoneNumber:555-1234}<end_function_call>
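If you want to inspect raw model output outside the Gallery app, a small parser can pull the function name and arguments out of this format. The `parse_function_call` helper below is an illustrative sketch only (not part of any official LiteRT-LM API) and assumes a single call in exactly the format shown above:

```python
import re

def parse_function_call(text: str):
    """Extract (function_name, params) from a single model response.

    Assumes the format shown above:
    <start_function_call>call:name{key1:value1,key2:value2}<end_function_call>
    """
    match = re.search(
        r"<start_function_call>call:(\w+)\{(.*?)\}<end_function_call>", text
    )
    if match is None:
        return None
    name, raw_params = match.group(1), match.group(2)
    params = {}
    for pair in raw_params.split(","):
        if ":" in pair:
            key, value = pair.split(":", 1)
            params[key.strip()] = value.strip()
    return name, params

# Example with the flashlight response shown above:
print(parse_function_call(
    "<start_function_call>call:enableFlashlight{}<end_function_call>"
))
# -> ('enableFlashlight', {})
```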
📚 Training Details
This model was fine-tuned on synthetic Mobile Actions data designed to match LiteRT-LM's expected function calling format. The training focused on:
- Natural language → function call mapping
- Parameter extraction from user queries
- Handling edge cases and variations
- Multi-turn conversation support
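The training data itself is not published in this repository; as an illustration of the target mapping, a synthetic pair in the expected format would look roughly like the hypothetical example below, which reuses the `createContact` call shown earlier:

```python
# Illustrative shape of a synthetic training pair (not an actual dataset sample).
# Function and parameter names follow the createContact example shown above.
sample = {
    "user": "Add Jane Smith to my contacts, her number is 555-9876",
    "model": (
        "<start_function_call>"
        "call:createContact{contactName:Jane Smith,phoneNumber:555-9876}"
        "<end_function_call>"
    ),
}
```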
⚠️ Limitations
- Limited to 6 pre-defined Android functions
- English language only
- Requires Android device with ARMv8-A or newer
- May not handle complex multi-step actions
- Function parameters must match the expected schema (a minimal validation sketch follows this list)
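A simple way to guard against schema mismatches is to validate parsed calls against the parameter names each function expects. The mapping below is a guess based on the examples in this card, not an official schema; only `createContact`'s fields are confirmed above:

```python
# Hypothetical required-parameter map; only createContact's fields are
# confirmed by the examples in this card, the rest would need to be added.
EXPECTED_PARAMS = {
    "enableFlashlight": set(),
    "createContact": {"contactName", "phoneNumber"},
}

def is_valid_call(name: str, params: dict) -> bool:
    """Return True if the call uses exactly the expected parameter names."""
    expected = EXPECTED_PARAMS.get(name)
    if expected is None:
        return False  # unknown function
    return set(params) == expected

print(is_valid_call("createContact", {"contactName": "Jane", "phoneNumber": "555-9876"}))
# -> True
```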
🤝 Credits
Original Model: This is a mirror/re-upload of JackJ1/functiongemma-270m-it-mobile-actions-litertlm
Thanks to:
- JackJ1 for the original fine-tuning work
- Google for FunctionGemma base model and LiteRT-LM runtime
- Google AI Edge Team for the Gallery app and tools
📄 License
Apache 2.0 (same as base FunctionGemma model)
📞 Contact
For issues or questions about this model mirror, please open an issue on the repository.
Note: This model is specifically formatted for the Google AI Edge Gallery app and requires the LiteRT-LM runtime. For general-purpose inference, use the base model or convert to standard formats.