# Mistral 7B - Mental Health Text Classifier (LoRA Fine-tuned)

This model is a LoRA fine-tuned version of `mistralai/Mistral-7B-Instruct-v0.1` designed to classify mental health-related statements into one of seven categories:
- Anxiety
- Bipolar
- Depression
- Normal
- Personality disorder
- Stress
- Suicidal
## Dataset
The dataset consists of real-world mental health prompts and was structured as:
- Input: Statement regarding an emotional or psychological condition.
- Label: Corresponding mental health category.
Few-shot prompting was used when the training examples were tokenized.
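The exact prompt template is not published in this card, so the snippet below is only a sketch of one plausible few-shot format; the example statements and the helper `build_prompt` are illustrative, with the label set taken from the categories listed above.

```python
# Hypothetical sketch of a few-shot classification prompt.
# The exact template used during training is an assumption.
LABELS = [
    "Anxiety", "Bipolar", "Depression", "Normal",
    "Personality disorder", "Stress", "Suicidal",
]

# Illustrative few-shot examples (not taken from the actual dataset).
FEW_SHOT_EXAMPLES = [
    ("I can't stop worrying about everything.", "Anxiety"),
    ("I had a good day at work and feel fine.", "Normal"),
]

def build_prompt(statement: str) -> str:
    """Assemble a few-shot prompt that ends with an open 'Label:' for the model to complete."""
    header = "You are a mental health assistant. Classify the statements.\n"
    shots = "".join(f"Input: {s}\nLabel: {l}\n" for s, l in FEW_SHOT_EXAMPLES)
    return f"{header}{shots}Input: {statement}\nLabel:"
```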
## Performance
Evaluation was performed on two test splits:
| Test split size | Accuracy (before fine-tuning) | F1 score (before) | Accuracy (after fine-tuning) | F1 score (after) |
|---|---|---|---|---|
| 200 | 0.77 | 0.75 | 0.91 | 0.90 |
| 500 | 0.74 | 0.72 | 0.89 | 0.88 |
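The card does not state how the F1 score is averaged, so the sketch below assumes macro-averaging and uses placeholder label lists (`y_true`, `y_pred`) to show how the metrics could be reproduced with scikit-learn.

```python
# Illustrative metric computation; y_true / y_pred are placeholders for the
# gold labels and the labels parsed from the model's generations.
from sklearn.metrics import accuracy_score, f1_score

y_true = ["Anxiety", "Normal", "Suicidal"]   # gold labels for a test split
y_pred = ["Anxiety", "Stress", "Suicidal"]   # labels extracted from model output

accuracy = accuracy_score(y_true, y_pred)
macro_f1 = f1_score(y_true, y_pred, average="macro")  # macro-averaging is an assumption
print(f"Accuracy: {accuracy:.2f}  F1: {macro_f1:.2f}")
```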
## How to Use
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("dsuram/mistral-mental-health-lora")
model = AutoModelForCausalLM.from_pretrained("dsuram/mistral-mental-health-lora")

# Classification is done by letting the model generate the label after the "Label:" cue.
prompt = """You are a mental health assistant. Classify the statements.
Input: I want to end my life.
Label:"""

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
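Since the repository holds a LoRA adapter, it can also be loaded explicitly on top of the base model with `peft`. This is a sketch, assuming the adapter targets `mistralai/Mistral-7B-Instruct-v0.1` as stated above; dtype and device placement are illustrative choices.

```python
# Alternative: load the base model and attach the LoRA adapter explicitly with peft.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.1",
    torch_dtype=torch.float16,   # illustrative; pick a dtype that fits your hardware
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "dsuram/mistral-mental-health-lora")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
```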
## Training

- LoRA fine-tuning with `peft` (see the configuration sketch after this list)
- 4-bit quantization using `bitsandbytes`
- Optimized for a low-resource GPU setup (single A100 40GB)
- Trained for 3 epochs with the Hugging Face `Trainer`
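The exact hyperparameters are not listed in this card, so the snippet below only sketches the setup described above (4-bit `bitsandbytes` quantization, a `peft` `LoraConfig`, and the Hugging Face `Trainer`). The LoRA rank, target modules, learning rate, batch size, and the `train_dataset` placeholder are all assumptions, not the values actually used.

```python
# Sketch of the training setup described above; hyperparameter values are assumptions.
import torch
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import (AutoModelForCausalLM, BitsAndBytesConfig,
                          Trainer, TrainingArguments)

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.1",
    quantization_config=bnb_config,
    device_map="auto",
)
base = prepare_model_for_kbit_training(base)

lora_config = LoraConfig(                  # rank/alpha/targets are illustrative
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)

training_args = TrainingArguments(
    output_dir="mistral-mental-health-lora",
    num_train_epochs=3,                    # 3 epochs, as stated above
    per_device_train_batch_size=4,         # assumed
    learning_rate=2e-4,                    # assumed
    fp16=True,
    logging_steps=50,
)
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,           # placeholder: your tokenized few-shot prompts
)
trainer.train()
```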
## Author
Note: This model is not a substitute for professional psychological or medical advice.