
πŸ§ͺ Pankaj-TestCaseGeneration-Mistral

πŸš€ Model Overview

This project uses a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.3, specialized for automatically generating software test cases directly from user stories.

Given a user story as input, the model generates a comprehensive set of:

  • βœ… Positive Test Cases
  • ❌ Negative Test Cases
  • ⚠️ Edge Case Scenarios

This helps QA teams accelerate test case design by automating repetitive analysis work.


πŸ§ͺ Model Details

| Property | Value |
| --- | --- |
| Base Model | mistralai/Mistral-7B-Instruct-v0.3 |
| Task | Test Case Generation (QA Automation) |
| Language | English |
| Input | User Story (free text) |
| Output | Structured Test Cases |

πŸ‘… How to Use

You can use this model directly via the Hugging Face Transformers library in Python.

Example Code

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Pankaj/TestCaseGeneration-Mistral"

# Load the fine-tuned model and its tokenizer from the Hub
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = """
User Story:
As a user, I want to reset my password so that I can regain access to my account if I forget my password.

Test Cases:
"""

inputs = tokenizer(prompt, return_tensors="pt")
# Allow enough new tokens for a full set of test cases
outputs = model.generate(**inputs, max_new_tokens=500)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
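The prompt format above can be wrapped in a small helper so every user story is framed the same way (the helper name and template are illustrative, not part of the model's API):

```python
def build_prompt(user_story: str) -> str:
    """Frame a user story in the prompt template shown above.

    The exact template is an assumption based on the example in this card.
    """
    return f"User Story:\n{user_story.strip()}\n\nTest Cases:\n"

prompt = build_prompt(
    "As a user, I want to reset my password so that "
    "I can regain access to my account if I forget my password."
)
```

The resulting `prompt` string can be passed to the tokenizer exactly as in the snippet above.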

βš’οΈ Training Process

The model was fine-tuned on a custom dataset containing:

βœ… Real-world user stories collected from projects
βœ… Manually curated test cases written by experienced QA engineers

The fine-tuning was performed using Hugging Face Transformers in Python.
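A single training record pairing a user story with its curated test cases might look like the following sketch (the field names and JSONL layout are assumptions; the actual dataset schema is not published):

```python
import json

# Hypothetical JSONL record: one user story paired with curated test cases
record = {
    "prompt": (
        "User Story:\n"
        "As a user, I want to log in with my email so that "
        "I can access my dashboard.\n\n"
        "Test Cases:\n"
    ),
    "completion": (
        "1. [Positive] Log in with valid credentials\n"
        "   Expected Result: User lands on the dashboard\n"
    ),
}

line = json.dumps(record)  # one line per example in a .jsonl file
print(line)
```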


πŸ“š Intended Use

This model is designed to:

βœ… Automate the first draft of test cases for QA teams
βœ… Serve as an assistive tool for junior QA engineers or developers
βœ… Save time when starting test design from scratch


βœ… Sample Input & Output

Input

User Story:
As a user, I want to search for products by name so that I can quickly find items I am interested in.

Test Cases:

Output

1. [Positive] Search for existing product
   Pre-condition: Product exists in catalog
   Steps:
   1. Navigate to the search page
   2. Enter product name in search bar
   3. Click "Search"
   Expected Result: Product is shown in search results

2. [Negative] Search with invalid characters
   Pre-condition: None
   Steps:
   1. Enter "@@@@" in search bar
   2. Click "Search"
   Expected Result: "No results found" message is shown

3. [Edge] Case-insensitive search
   Pre-condition: Product exists with lowercase name
   Steps:
   1. Enter product name in uppercase
   2. Click "Search"
   Expected Result: Product still appears in search results
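Because the output follows a predictable layout, it can be parsed back into structured records. A minimal sketch (the regex assumes the exact `N. [Category] Title` numbering and bracket style shown above):

```python
import re

def parse_test_cases(text: str) -> list[dict]:
    """Split generated output into {category, title, body} records.

    Assumes each case starts with a line like '1. [Positive] Title',
    matching the sample output in this card.
    """
    cases = []
    pattern = re.compile(r"^\s*\d+\.\s*\[(\w+)\]\s*(.+)$", re.MULTILINE)
    matches = list(pattern.finditer(text))
    for i, m in enumerate(matches):
        # Each case's body runs until the next numbered heading
        end = matches[i + 1].start() if i + 1 < len(matches) else len(text)
        cases.append({
            "category": m.group(1),
            "title": m.group(2).strip(),
            "body": text[m.end():end].strip(),
        })
    return cases

sample = """1. [Positive] Search for existing product
   Expected Result: Product is shown in search results

2. [Negative] Search with invalid characters
   Expected Result: "No results found" message is shown
"""
parsed = parse_test_cases(sample)
print([c["category"] for c in parsed])  # ['Positive', 'Negative']
```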

⚠️ Limitations

  • Output quality may degrade with very complex or ambiguous user stories.
  • Domain-specific terminology (finance, healthcare, etc.) may require further fine-tuning.

πŸ” Handling API Keys Safely

To run this project locally, you need a Hugging Face access token.

Recommended Approach:

  1. Create a .env file and add:

     ```
     HF_TOKEN=your_hugging_face_token_here
     ```

  2. Add .env to .gitignore to avoid accidental exposure.
  3. Use this code to read the token:

     ```python
     from dotenv import load_dotenv
     import os

     # Load variables from .env into the environment
     load_dotenv()
     hf_token = os.getenv("HF_TOKEN")

     # Pass the token when loading the model, e.g.:
     # AutoTokenizer.from_pretrained(model_id, token=hf_token)
     ```

πŸ’¬ Feedback and Contributions

Contributions are welcome β€” whether it's improving prompts, enhancing the fine-tuning dataset, or adding new features like Jira integration for fetching user stories automatically.

Feel free to raise issues, create pull requests, or simply share feedback. Let's make test automation smarter together! ✨


πŸŽ‰ Happy Testing!
