Chronos 1.5B - Quantum-Classical Hybrid Model


A hybrid quantum-classical model combining VibeThinker-1.5B with quantum kernel methods

License: MIT · Python 3.8+ · Transformers

Overview

Chronos 1.5B is an experimental quantum-enhanced language model that combines:

  • VibeThinker-1.5B as the base transformer model for embedding extraction
  • Quantum Kernel Methods for similarity computation
  • 2-qubit quantum circuits for enhanced feature space representation

This model demonstrates a proof-of-concept for hybrid quantum-classical machine learning.

Quantum Component Details

| Feature | Implementation |
| --- | --- |
| Real quantum training | Quantum rotation angles were optimized on IBM Heron r2 (ibm_fez) in 2025 |
| Saved quantum parameters | quantum_kernel.pkl — trained 2-qubit gate angles (pickle) |
| Quantum circuit definition | Available in k_train_quantum.npy / k_test_quantum.npy (future use) |
| Current inference | Classical simulation using the trained quantum angles (via cosine similarity) |
| True quantum execution (optional) | Possible by loading quantum_kernel.pkl plus the circuit files and running on IBM Quantum (example scripts will be added) |
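
A minimal sketch of inspecting these artifacts locally; their exact internal layout is not documented on this card, so the code only loads and reports what is stored:

import pickle
import numpy as np

# Trained 2-qubit gate angles (pickle contents assumed, per the table above).
with open("quantum_kernel.pkl", "rb") as f:
    angles = pickle.load(f)
print(type(angles), angles)

# Quantum circuit / kernel data saved for future use (shapes assumed).
k_train = np.load("k_train_quantum.npy")
k_test = np.load("k_test_quantum.npy")
print(k_train.shape, k_test.shape)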

Architecture

[Architecture diagram]

Model Details

  • Base Model: WeiboAI/VibeThinker-1.5B
  • Architecture: Qwen2ForCausalLM
  • Parameters: ~1.5B
  • Context Length: 131,072 tokens
  • Embedding Dimension: 1536
  • Quantum Component: 2-qubit kernel
  • Training Data: 8 quantum layers
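
These details can be sanity-checked against the published config using the standard Transformers API (attribute names follow the Qwen2 config):

from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("squ11z1/Chronos-1.5B")
print(cfg.architectures)            # ['Qwen2ForCausalLM']
print(cfg.hidden_size)              # 1536 (embedding dimension)
print(cfg.max_position_embeddings)  # 131072 (context length)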

Performance

Base VibeThinker-1.5B Benchmarks

[Benchmark chart for the base VibeThinker-1.5B model]

Benchmark Results

| Model | Accuracy | Type |
| --- | --- | --- |
| Classical (Linear SVM) | 100% | Baseline |
| Quantum Hybrid | 75% | Experimental |


Note: Performance varies with dataset size and quantum simulation parameters. This is a proof-of-concept demonstrating quantum-classical integration.

🧬 Also take a look at The Hypnos Family

| Model | Parameters | Quantum Sources | Best For | Status |
| --- | --- | --- | --- | --- |
| Hypnos-i2-32B | 32B | 3 (Matter + Light + Nucleus) | Production, Research | ✅ Available |
| Hypnos-i1-8B | 8B | 1 (Matter only) | Edge, Experiments | ✅ 10k+ Downloads |

Start with Hypnos-i1-8B for lightweight quantum-regularized AI!

Installation

Requirements

pip install torch transformers numpy scikit-learn
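
For the optional true-quantum path discussed below, Qiskit would also be needed (an assumption; the example scripts are not yet published):

pip install qiskit qiskit-machine-learning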

Usage

Python Inference

from transformers import AutoModel, AutoTokenizer
import torch
import numpy as np
from sklearn.preprocessing import normalize
from sklearn.metrics.pairwise import cosine_similarity

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

tokenizer = AutoTokenizer.from_pretrained("squ11z1/Chronos-1.5B")
model = AutoModel.from_pretrained(
    "squ11z1/Chronos-1.5B",
    torch_dtype=torch.float16
).to(device).eval()

def embed(text):
    # Mean-pool the last hidden state into one L2-normalized 1536-D vector.
    inputs = tokenizer(text, return_tensors="pt",
                       padding=True, truncation=True,
                       max_length=128).to(device)

    with torch.no_grad():
        outputs = model(**inputs)
        embedding = outputs.last_hidden_state.mean(dim=1).cpu().numpy()[0]

    return normalize([embedding])[0]

# Placeholder reference texts standing in for the 4 positive / 4 negative
# training examples shipped with the model (see "Training Data" below).
POSITIVE_REFS = [embed(t) for t in ["I love this!", "Absolutely wonderful."]]
NEGATIVE_REFS = [embed(t) for t in ["I hate this.", "Absolutely terrible."]]

def predict_sentiment(text):
    # Compare the query embedding against each class's reference embeddings.
    query = embed(text)
    pos = float(np.mean(cosine_similarity([query], POSITIVE_REFS)))
    neg = float(np.mean(cosine_similarity([query], NEGATIVE_REFS)))

    sentiment = "POSITIVE" if pos >= neg else "NEGATIVE"
    confidence = max(pos, neg) / (abs(pos) + abs(neg) + 1e-9)
    return sentiment, confidence
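
A quick call, using the placeholder references defined above:

label, confidence = predict_sentiment("This model is surprisingly fun to use!")
print(f"Classification: {label} (confidence: {confidence:.1%})")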

Quick Start Script

python inference.py

This will start an interactive session where you can enter text for sentiment analysis.

Example Output

Input text: 'Random text!'
[1/3] VibeThinker embedding: 1536D (normalized)
[2/3] Quantum similarity computed
[3/3] Classification: POSITIVE
Confidence: 87.3%
Positive avg: 0.756, Negative avg: 0.128
Time: 0.42s

Quantum Kernel Details

The quantum component uses a simplified kernel approach (a code sketch follows the list):

  1. Extract 1536D embeddings from VibeThinker
  2. Normalize using L2 normalization
  3. Compute cosine similarity against training examples
  4. Apply quantum-inspired weighted voting
  5. Return sentiment with confidence score
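
A minimal sketch of steps 3-5, treating the trained angle as a fidelity-style weighting on top of cosine similarity. The exact weighting stored in quantum_kernel.pkl is an assumption here, and `theta` is a hypothetical trained rotation angle:

import numpy as np

def fidelity_weight(sim, theta=0.5):
    # Quantum-inspired weighting: re-encode the cosine similarity as an
    # angle and take cos^2 of the scaled angle, mimicking a state-fidelity
    # kernel. `theta` stands in for a trained gate angle (hypothetical).
    return np.cos(theta * np.arccos(np.clip(sim, -1.0, 1.0))) ** 2

def weighted_vote(query, pos_refs, neg_refs, theta=0.5):
    # All vectors are L2-normalized, so a dot product is cosine similarity.
    pos = np.mean([fidelity_weight(query @ r, theta) for r in pos_refs])
    neg = np.mean([fidelity_weight(query @ r, theta) for r in neg_refs])
    label = "POSITIVE" if pos >= neg else "NEGATIVE"
    confidence = max(pos, neg) / (pos + neg + 1e-9)
    return label, confidence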

Note: This implementation uses classical simulation. For true quantum execution, integration with IBM Quantum or similar platforms is required.
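
For reference, a minimal sketch of what true execution could look like with standard Qiskit components. ZZFeatureMap and FidelityQuantumKernel are real Qiskit Machine Learning APIs; the PCA reduction from 1536 dimensions to 2 features is an assumption, not part of the shipped pipeline:

import numpy as np
from sklearn.decomposition import PCA
from qiskit.circuit.library import ZZFeatureMap
from qiskit_machine_learning.kernels import FidelityQuantumKernel

# Stand-in for a batch of normalized 1536-D embeddings.
X = np.random.randn(8, 1536)

# Map down to 2 features so each sample fits a 2-qubit feature map.
X2 = PCA(n_components=2).fit_transform(X)

feature_map = ZZFeatureMap(feature_dimension=2, reps=2)
kernel = FidelityQuantumKernel(feature_map=feature_map)

# 8x8 kernel matrix; by default this runs on a local simulator and can be
# pointed at IBM Quantum hardware via Qiskit Runtime primitives.
K = kernel.evaluate(x_vec=X2)
print(K.shape)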

Training Data

The demonstration training set consists of 8 examples, encoded as the 8 quantum layers noted above:

  • 4 positive examples
  • 4 negative examples

For production use, retrain with larger datasets.
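
As a sketch, retraining could combine the precomputed kernel files with scikit-learn's precomputed-kernel SVM, assuming k_train_quantum.npy holds an (n_train, n_train) kernel matrix and k_test_quantum.npy an (n_test, n_train) one:

import numpy as np
from sklearn.svm import SVC

K_train = np.load("k_train_quantum.npy")      # assumed shape: (n_train, n_train)
K_test = np.load("k_test_quantum.npy")        # assumed shape: (n_test, n_train)
y_train = np.array([1, 1, 1, 1, 0, 0, 0, 0])  # 4 positive, 4 negative examples

clf = SVC(kernel="precomputed")
clf.fit(K_train, y_train)
print(clf.predict(K_test))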

Limitations

  • Small training set (8 examples)
  • Quantum kernel is simulated, not executed on real quantum hardware
  • Performance may vary significantly with different inputs
  • Designed for English text

Future Improvements

  1. Expand training dataset to 100+ examples
  2. Implement true quantum kernel execution on IBM Quantum
  3. Increase quantum circuit complexity (3-4 qubits)
  4. Add error mitigation for quantum noise
  5. Support multi-language analysis
  6. Fine-tune on domain-specific data

Citation

If you use this model in your research, please cite:

@misc{chronos-1.5b,
  title={Chronos 1.5B: Quantum-Enhanced Sentiment Analysis},
  author={squ11z1},
  year={2025},
  publisher={Hugging Face},
  howpublished={\url{https://huggingface.co/squ11z1/Chronos-1.5b}}
}

Acknowledgments

  • Base model: VibeThinker-1.5B by WeiboAI
  • Quantum computing framework: Qiskit
  • Inspired by quantum machine learning research

License

MIT License - See LICENSE file for details


Disclaimer: This is an experimental proof-of-concept model. Performance and accuracy are not guaranteed for production use cases. The quantum component currently does not provide a quantum advantage over classical methods.
