# Tigre Low-Resource Language Resource Collection

## Overview

This repository introduces the Monolingual Text component of the Tigre language resource collection. Tigre is an under-resourced South Semitic language within the Afro-Asiatic family. This dataset provides a large, clean text corpus for training foundational NLP models such as language models (LMs) and word embeddings. The goal of Tigre-Data 1.0 is to accelerate research in low-resource NLP and morphologically rich language modeling.
## Included Data & Statistics

### Data Modalities

This repository contains only the Monolingual Text data modality.

### Dataset Statistics
The corpus was tokenized with a simple whitespace tokenizer to compute the core metrics below; a sketch of this computation follows the table.
| Statistic | Value |
|---|---|
| Total Number of Examples (Rows) | 490,032 |
| Total Number of Tokens | 14,700,960 |
| Vocabulary Size (Unique Tokens) | 760,384 |
| Average Example Length | 30.00 tokens/row |
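
For reproducibility, the metrics above can be recomputed with the same whitespace tokenization. A minimal sketch, assuming the corpus exposes a single `text` column (the column name is an assumption for illustration, not confirmed here):

```python
from datasets import load_dataset

# Recompute the corpus statistics with simple whitespace tokenization.
# NOTE: the column name "text" is an assumption for illustration.
ds = load_dataset("BeitTigreAI/tigre-data-monolingual-text", split="train")

total_tokens = 0
vocab = set()
for example in ds:
    tokens = example["text"].split()  # whitespace tokenizer
    total_tokens += len(tokens)
    vocab.update(tokens)

print(f"Examples (rows): {len(ds):,}")
print(f"Total tokens:    {total_tokens:,}")
print(f"Vocabulary size: {len(vocab):,}")
print(f"Avg tokens/row:  {total_tokens / len(ds):.2f}")
```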
## Dataset Structure
The dataset is provided in the Parquet format, which is easily streamed and loaded using the Hugging Face datasets library.
```
tigre-data-monolingual-text/
├── README.md
├── data.parquet
└── arrow_format/
    └── train/
        ├── data-00000-of-00001.arrow
        ├── dataset_info.json
        └── state.json
```
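
The Parquet artifact can also be read directly. As a minimal sketch, assuming a local clone of the repository (the relative path is an assumption for illustration), the file can be inspected with `pyarrow`:

```python
import pyarrow.parquet as pq

# Inspect the Parquet file from a local clone of the repository
table = pq.read_table("tigre-data-monolingual-text/data.parquet")

print(table.schema)    # column names and types
print(table.num_rows)  # should match the row count reported above
```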
## Data Provenance & Methodology

### Sources
The monolingual text corpus was compiled from diverse sources to maximize coverage:
- Books
- News articles
- Web content
- Wikipedia
### Data Curation & Preprocessing

- Preprocessing: The data underwent a light cleanup to remove non-text binary content.
- Orthographic Normalization: The original corpus was normalized to ensure consistent Ge'ez script usage.
- Text Cleaning: Steps such as deduplication and boilerplate removal were applied to improve corpus quality (details are available in the associated data paper; an illustrative sketch follows this list).
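
The exact cleaning pipeline is documented in the data paper. As an illustration only (not the authors' implementation), a minimal sketch of the exact-match deduplication step mentioned above:

```python
import hashlib

def deduplicate(lines):
    """Drop exact duplicate lines using a hash set (illustrative only)."""
    seen = set()
    unique = []
    for line in lines:
        digest = hashlib.sha256(line.strip().encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(line)
    return unique

corpus = ["example line", "example line", "another line"]
print(deduplicate(corpus))  # duplicates removed, original order preserved
```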
## Bias, Risks & Known Limitations
The data collection process was designed to be broad; however, inherited biases from the original sources are present:
- Domain Bias: The sources (news articles, history books, poems, culture-related texts) mean the corpus may overrepresent formal and historical language and underrepresent informal or conversational Tigre.
- Linguistic Bias: Any inherent orthographic variation or dialectal representation present in the original source materials is inherited by this dataset.
## How to Download & Load the Dataset

The dataset can be loaded with the Hugging Face `datasets` library:
```python
from datasets import load_dataset

dataset_name = "BeitTigreAI/tigre-data-monolingual-text"

# Load the full dataset (the default split is 'train')
ds = load_dataset(dataset_name, split="train")

# Example: display the number of rows and the first example
print(f"Total rows loaded: {len(ds)}")
print(ds[0])
```
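
Because the corpus is stored as Parquet, it can also be streamed without downloading the full dataset first:

```python
from datasets import load_dataset

# Stream examples lazily instead of materializing the whole corpus
ds_stream = load_dataset(
    "BeitTigreAI/tigre-data-monolingual-text",
    split="train",
    streaming=True,
)

# Print the first three examples
for i, example in enumerate(ds_stream):
    print(example)
    if i >= 2:
        break
```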
## Licensing
This dataset is released under the CC-BY-SA-4.0 license.
## Citation
If you use this resource in your work, please cite the repository by referencing its Hugging Face entry:
### Recommended Citation Format:
- Repository Name: Tigre Monolingual Text Dataset
- Organization: BeitTigreAI
- URL: https://huggingface.co/datasets/BeitTigreAI/tigre-data-monolingual-text