Tigre Wikipedia Corpus (tigwiki)
Overview
This repository houses the Tigre Wikipedia Corpus, a foundational linguistic resource containing all main-namespace articles from https://tig.wikipedia.org.
Tigre is an under-resourced South Semitic language within the Afro-Asiatic family. This dataset is intended to help bridge the digital divide by supporting the development of Natural Language Processing (NLP) models for the Tigre community, including language models (LMs), machine translation (MT) systems, and text generation tools.
Background & Scope
The Tigre-language Wikipedia was officially approved and launched in December 2024, a significant milestone for the language's digital presence. The corpus represents the collective effort of the Tigre diaspora community, with more than twenty dedicated volunteers actively building and maintaining the encyclopedia.
Included Data & Coverage
Data Modalities
This repository contains Monolingual Text extracted directly from Wikipedia articles.
Domain Coverage
The corpus encompasses a diverse range of topics, reflecting the broad scope of the encyclopedia. Key sections include:
- Culture & Heritage: Art, Culture, Food, and Drinks
- Geography & Travel: "Let's explore our country," Tourism
- STEM: Science, Technology, Health
- Humanities: History, Politics, Biography, Literature (Books)
- General Interest: Sport, Entertainment ("Fun"), Miscellaneous
Dataset Structure
The corpus is provided as a single compressed JSON Lines file (tigre_wikipedia.jsonl.gz), a format that is easy to stream and compatible with standard NLP libraries.
tigre-data-wikipedia/
├── README.md
├── build_corpus.py
└── tigre_wikipedia.jsonl.gz
Data Fields
| Field | Type | Description |
|---|---|---|
| id | string | Unique ID of the article |
| title | string | Article title |
| text | string | Cleaned plain-text content |
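As a quick sanity check, a record can also be read straight from the compressed file with only the Python standard library. This sketch assumes tigre_wikipedia.jsonl.gz has already been downloaded to the working directory; the field names follow the table above.
import gzip
import json

# Each line of the .jsonl.gz file is one JSON object with id, title, and text.
with gzip.open("tigre_wikipedia.jsonl.gz", "rt", encoding="utf-8") as f:
    first_record = json.loads(next(f))

print(first_record["id"], first_record["title"])
print(first_record["text"][:200])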
Data Provenance & Methodology
Data Generation Pipeline
The data is generated from official Wikimedia XML dumps using a custom SAX-based streaming parser (build_corpus.py). The steps are listed below, followed by a minimal filtering sketch:
- Source: Downloads raw XML dumps.
- Extraction: Processes compressed files efficiently.
- Filtering: Removes non-main namespaces (User:, Talk:, Template:).
- Output: Serializes clean text into JSONL format.
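The cleaning logic itself lives in build_corpus.py and is not reproduced here; the fragment below is only a rough sketch of the namespace-filtering step. It uses Python's built-in xml.sax and assumes the element names of the standard MediaWiki export schema (page, ns, title); the dump filename is illustrative.
import xml.sax

class MainNamespaceFilter(xml.sax.ContentHandler):
    """Collects the titles of main-namespace (ns == 0) pages from a MediaWiki XML dump."""

    def __init__(self):
        super().__init__()
        self.current_tag = ""
        self.ns = ""
        self.title = ""
        self.titles = []

    def startElement(self, name, attrs):
        self.current_tag = name
        if name == "page":
            self.ns, self.title = "", ""

    def characters(self, content):
        # SAX may deliver text in chunks, so accumulate rather than assign.
        if self.current_tag == "ns":
            self.ns += content
        elif self.current_tag == "title":
            self.title += content

    def endElement(self, name):
        self.current_tag = ""
        if name == "page" and self.ns.strip() == "0":
            self.titles.append(self.title.strip())

# Illustrative usage; the actual dump filename will differ:
# handler = MainNamespaceFilter()
# xml.sax.parse("tigwiki-latest-pages-articles.xml", handler)
# print(len(handler.titles), "main-namespace pages")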
Bias, Risks & Known Limitations
- Community Bias: Overrepresentation of diaspora-relevant topics.
- Domain Bias: Overemphasis on formal encyclopedic style.
- Size Limitations: Because this Wikipedia only launched in December 2024, its content volume is still small but growing.
How to Use
Loading via Hugging Face
from datasets import load_dataset
dataset = load_dataset("BeitTigreAI/tigre-data-wikipedia", split="train")
print(dataset[0]["title"])
print(dataset[0]["text"][:200])
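For a quick look without downloading the full split first, the datasets library's streaming mode should also work here:
from datasets import load_dataset

# Iterate over records lazily instead of materializing the whole split.
stream = load_dataset("BeitTigreAI/tigre-data-wikipedia", split="train", streaming=True)
for record in stream:
    print(record["title"])
    break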
Reproducibility: Building the Latest Version
1. Install Requirements
pip install requests beautifulsoup4
2. Run Script
python build_corpus.py
Licensing
Licensed under CC BY-SA 4.0, the same license that covers the underlying Wikipedia article text.
Citation
@misc{tigre-wikipedia-corpus,
  author       = {BeitTigreAI},
  title        = {Tigre Wikipedia (tigwiki) Corpus},
  year         = {2025},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/datasets/BeitTigreAI/tigre-data-wikipedia}}
}