LLaMA-3-8B-Instruct Fine-Tuned for Mental Health Counseling
Model Overview
This is a fine-tuned version of unsloth/llama-3-8b-Instruct-bnb-4bit, adapted for mental health counseling applications. It is designed to provide thoughtful, relevant, and compassionate responses.
Dataset
- Amod/mental_health_counseling_conversations (cleaned version: arafatanam/Mental-Health-Counseling) - 2,752 rows
- chillies/student-mental-health-counseling-vn (translated version: arafatanam/Student-Mental-Health-Counseling-10K) - 7,500 rows
- Total dataset size: 10,252 rows
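
A rough sketch of how the combined training set could be rebuilt with the `datasets` library is shown below. The split name and the assumption that both datasets share the same column schema are not stated in this card; the exact cleaning and column mapping used for training may differ.

```python
from datasets import load_dataset, concatenate_datasets

# Load the two counseling datasets from the Hugging Face Hub.
counseling = load_dataset("arafatanam/Mental-Health-Counseling", split="train")              # ~2,752 rows
student = load_dataset("arafatanam/Student-Mental-Health-Counseling-10K", split="train")     # ~7,500 rows

# Merge into a single training set (~10,252 rows), assuming matching columns.
combined = concatenate_datasets([counseling, student])
print(combined)
```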
Training Details
- Hardware: Kaggle Notebooks (GPU T4 x2)
- Fine-tuning framework: Unsloth with LoRA
- Training settings:
  - `max_seq_length = 512`
  - `batch_size = 8`
  - `gradient_accumulation_steps = 4`
  - `num_train_epochs = 2`
  - `learning_rate = 5e-5`
  - `optimizer = adamw_8bit`
  - `lr_scheduler = cosine`
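
A minimal Unsloth + LoRA training sketch using the hyperparameters listed above. The LoRA rank, alpha, target modules, and the dataset text field are assumptions not specified in this card, and `combined` refers to the merged dataset from the sketch in the Dataset section.

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments

# Load the 4-bit base model through Unsloth.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-Instruct-bnb-4bit",
    max_seq_length=512,
    load_in_4bit=True,
)

# Attach LoRA adapters (rank, alpha, and target modules are assumed values).
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=combined,          # merged counseling dataset
    dataset_text_field="text",       # assumed field name after prompt formatting
    max_seq_length=512,
    args=TrainingArguments(
        per_device_train_batch_size=8,
        gradient_accumulation_steps=4,
        num_train_epochs=2,
        learning_rate=5e-5,
        optim="adamw_8bit",
        lr_scheduler_type="cosine",
        output_dir="outputs",
    ),
)
trainer.train()
```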
Training Results
- Final training loss: 1.2433
- Total steps: 640
- Trainable parameters: 0.52% of the model
- Validation loss: 1.182
- Evaluation metric (perplexity): 3.15
Usage
This model can be applied to:
- AI-driven mental health chatbots
- Personalized therapy assistance
- Generating mental health support content
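
A minimal inference sketch using Unsloth and the Llama-3 chat template. The system prompt, example user message, and generation settings are illustrative assumptions, not values taken from this card.

```python
from unsloth import FastLanguageModel

# Load the fine-tuned model in 4-bit for inference.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="arafatanam/Student-Guide-llama-3-8b-Instruct-bnb-4bit",
    max_seq_length=512,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference mode

messages = [
    {"role": "system", "content": "You are a compassionate mental health counseling assistant."},
    {"role": "user", "content": "I've been feeling overwhelmed by my coursework lately."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, temperature=0.7, do_sample=True)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```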
Model Tree
- Model: arafatanam/Student-Guide-llama-3-8b-Instruct-bnb-4bit
- Base model: unsloth/llama-3-8b-Instruct-bnb-4bit