πŸ“° News Summarizer (DistilBART-Based)

A lightweight and efficient news summarization model built on the distilled BART (DistilBART) architecture.
It condenses long news articles into clear, concise, human-readable summaries, making it well suited for:

  • News aggregation platforms
  • Research workflows
  • Content automation
  • Browser extensions
  • Educational tools
  • AI agents & chatbots

πŸš€ Features

βœ” High-quality abstractive summaries

Rather than copying sentences from the article (extractive summarization), the model generates a new summary in natural language.

βœ” Fast & lightweight

Built on the 12-6 distilled BART variant (12 encoder layers, 6 decoder layers), giving strong summarization quality at a fraction of the full model's size.

βœ” Trained on real news sources

Understands journalistic writing, factual structure, headlines, and key-point extraction.

βœ” Ideal for production & APIs

Minimal latency, optimized for cloud/server use.
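In an API setting, one easy latency win is to avoid re-running the model on articles that have already been summarized. The cache below is a minimal, hypothetical sketch (not part of this model): `summarize_fn` stands in for whatever function wraps the model call from the How to Use section.

```python
import hashlib


def _article_key(article: str) -> str:
    # Stable cache key: SHA-256 of the normalized article text.
    return hashlib.sha256(article.strip().encode("utf-8")).hexdigest()


class SummaryCache:
    """Tiny in-memory cache mapping article hashes to summaries."""

    def __init__(self, summarize_fn, max_entries: int = 1024):
        self._summarize = summarize_fn
        self._cache: dict[str, str] = {}
        self._max = max_entries

    def get(self, article: str) -> str:
        key = _article_key(article)
        if key not in self._cache:
            if len(self._cache) >= self._max:
                # Evict an arbitrary entry; a real service would use LRU.
                self._cache.pop(next(iter(self._cache)))
            self._cache[key] = self._summarize(article)
        return self._cache[key]
```

A production service would typically swap the dict for an LRU structure or an external store such as Redis, but the idea is the same: identical articles hit the cache, not the GPU.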


πŸ“¦ How to Use

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "Sachin21112004/news-summarizer"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

article = """
Your long news article text here...
"""

# Tokenize the article; inputs beyond 1024 tokens are truncated.
inputs = tokenizer(article, return_tensors="pt", max_length=1024, truncation=True)

summary_ids = model.generate(
    **inputs,                # pass input_ids and attention_mask together
    max_length=150,          # cap the summary length (in tokens)
    min_length=40,           # avoid overly short summaries
    no_repeat_ngram_size=3,  # block repeated 3-grams
    length_penalty=2.0,      # favor longer, complete sentences under beam search
    num_beams=4,             # beam search for higher-quality output
    early_stopping=True,     # stop once all beams have finished
)

print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
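Note that the tokenizer call above silently truncates anything past 1024 tokens, so the tail of a very long article never reaches the model. A common workaround is to split the article into overlapping chunks, summarize each chunk, and join (or re-summarize) the partial summaries. The word-based `chunk_text` helper below is a hypothetical sketch of that splitting step, not part of the model:

```python
def chunk_text(text: str, max_words: int = 700, overlap: int = 50) -> list[str]:
    """Split text into overlapping word-based chunks.

    max_words stays well under the 1024-token limit because a single
    word often maps to more than one subword token.
    """
    words = text.split()
    if len(words) <= max_words:
        return [" ".join(words)]
    chunks = []
    step = max_words - overlap  # consecutive chunks share `overlap` words
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + max_words]))
        if start + max_words >= len(words):
            break
    return chunks
```

Each chunk can then be passed through the generate call shown above, and the per-chunk summaries concatenated, optionally with one final summarization pass over the concatenation.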