agnivamaiti/KokLLaMA-3.2-3B-Instruct
This is KokLLaMA v2, a fine-tuned version of Llama 3.2 3B Instruct optimized for the Kokborok language. It is distributed as a LoRA adapter that is loaded on top of the base model.
Model Details
- Base Model: meta-llama/Llama-3.2-3B-Instruct
- Training Method: QLoRA (rank = 32, alpha = 64); see the configuration sketch after this list
- Target Modules: all linear layers (knowledge + syntax)
- Dataset: cleaned Kokborok-English instruction pairs
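The hyperparameters above map onto peft's LoraConfig roughly as follows. This is a minimal sketch, not the actual training script (which is not published with this card); it assumes a recent peft version that accepts the "all-linear" shorthand for target modules.

```python
from peft import LoraConfig

# Sketch of the adapter configuration implied by the details above;
# the real training setup may differ in ways not documented here.
lora_config = LoraConfig(
    r=32,                         # LoRA rank
    lora_alpha=64,                # LoRA scaling alpha
    target_modules="all-linear",  # adapt every linear layer, not just attention
    task_type="CAUSAL_LM",
)
```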
How to Use
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model = "meta-llama/Llama-3.2-3B-Instruct"
adapter_model = "agnivamaiti/KokLLaMA-3.2-3B-Instruct"

# Load the base model across available devices
model = AutoModelForCausalLM.from_pretrained(base_model, device_map="auto")

# Attach the KokLLaMA adapter and load its tokenizer
model = PeftModel.from_pretrained(model, adapter_model)
tokenizer = AutoTokenizer.from_pretrained(adapter_model)

# Move the tokenized prompt to the same device as the model before generating
inputs = tokenizer("Kokborok language hwnwi tamo?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
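Because the base model is instruction-tuned, prompts generally work better when wrapped in the chat template rather than passed as raw text. A sketch of the same example, assuming the tokenizer carries over Llama 3.2's default chat template:

```python
messages = [{"role": "user", "content": "Kokborok language hwnwi tamo?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=50)
# Decode only the newly generated tokens, skipping the echoed prompt
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

If you want to serve the model without a peft dependency at inference time, the adapter can also be folded into the base weights with model.merge_and_unload().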