Home-FunctionGemma-270m

The "Home" model is a fine tuning of the FunctionGemma model from Google. The model is able to control devices in the user's house via the "Assist" API, as well as perform basic question answering about the provided home's state.

The model is quantized using llama.cpp in order to enable running the model in the very low-resource environments that are common for Home Assistant installations, such as Raspberry Pis.
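
A minimal sketch of running a quantized GGUF build on CPU-only hardware with the llama-cpp-python bindings; the file name and quantization level below are hypothetical, so substitute whichever quantized file you downloaded:

```python
from llama_cpp import Llama

# Hypothetical file name; use the GGUF file you actually downloaded.
llm = Llama(model_path="Home-FunctionGemma-270m.Q4_K_M.gguf", n_ctx=2048)

result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Is the garage door open?"}],
    max_tokens=128,
)
print(result["choices"][0]["message"]["content"])
```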

Training

Built with Axolotl

Datasets

Home Assistant Requests V2 - https://huggingface.co/datasets/acon96/Home-Assistant-Requests-V2

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0002
  • train_batch_size: 1
  • eval_batch_size: 1
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 2
  • gradient_accumulation_steps: 16
  • total_train_batch_size: 32
  • total_eval_batch_size: 2
  • optimizer: adamw_bnb_8bit (8-bit AdamW from bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 59
  • training_steps: 597
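
The run itself was done with Axolotl, but as a rough sketch the hyperparameters above map onto transformers TrainingArguments approximately as follows (with two GPUs, the effective train batch size works out to 1 × 2 × 16 = 32):

```python
from transformers import TrainingArguments

# Approximate mirror of the Axolotl run configuration; output_dir is
# a placeholder and was not part of the published hyperparameters.
args = TrainingArguments(
    output_dir="out",
    learning_rate=2e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=16,
    optim="adamw_bnb_8bit",
    lr_scheduler_type="cosine",
    warmup_steps=59,
    max_steps=597,
    seed=42,
)
```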

License

The model is licensed under the Gemma license, as it is a fine-tune of the FunctionGemma model.
