---
dataset_info:
  features:
  - name: id
    dtype: int64
  - name: latin
    dtype: string
  - name: german
    dtype: string
  - name: source
    dtype: string
  - name: tag
    dtype: string
  - name: score
    dtype: float64
  splits:
  - name: train
    num_bytes: 154924781
    num_examples: 406011
  download_size: 91837604
  dataset_size: 154924781
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: cc-by-4.0
task_categories:
- translation
language:
- la
- de
size_categories:
- 100K<n<1M
pretty_name: Latin-German-Textcorpus
---
# 📜 Latin-German Textcorpus

This dataset consists of 406,011 Latin-German parallel sentence pairs. Each entry contains a Latin sentence and its corresponding German translation. The sentence pairs were collected and processed from various websites and online sources.
## 📄 Dataset Schema

The dataset contains the following columns:
- id: A unique identifier for each entry.
- latin: The sentence in Latin.
- german: The German translation.
- source: The origin or reference from which the entry was taken.
- tag: A thematic category or label indicating the entry's historical period. Possible values are: 'ANCIENT', 'MEDIEVAL', 'MODERN', or 'UNKNOWN'.
- score: A numerical relevance or similarity score. A value of -1 indicates the score is undefined, while any value >= 1 represents a valid score.
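
For a quick look at the schema in practice, the following sketch (assuming the `fhswf/latin-german-parallel` repository ID used in the usage section below) loads the train split, prints the features and one example row, and counts the `tag` values:

```python
from collections import Counter

from datasets import load_dataset

# Load the single "train" split (repository ID taken from the usage section below)
ds = load_dataset("fhswf/latin-german-parallel", split="train")

# Inspect the column types and one example row
print(ds.features)
print(ds[0])

# Distribution of the historical-period labels in the tag column
print(Counter(ds["tag"]))
```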
## 📚 Citation / References
If you use this dataset in your research, please cite the underlying master's thesis as follows:

Master's thesis (Zenodo DOI):

Wenzel, M. (2025). *Translatio ex Machina: Neuronale Maschinelle Übersetzung vom Lateinischen ins Deutsche* [Zenodo]. Unpublished master's thesis, Fachhochschule Südwestfalen.
## 💻 Usage
### Understanding the `score` Feature

The `score` column indicates the method and quality of the sentence alignment:

- `score == -1`: the Latin and German sentences were aligned manually or with special tooling.
- `score >= 1`: the alignment was computed by an automated alignment tool.
- Interpretation: a higher score suggests better alignment quality.
- Recommendation: for high-confidence automated alignments, use only entries where `score >= 1.2`.
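
To see how many entries fall into each of these categories, here is a small sketch (again assuming the `fhswf/latin-german-parallel` repository ID) that counts manual alignments, automated alignments at or above the recommended 1.2 threshold, and the remaining automated alignments:

```python
from datasets import load_dataset

ds = load_dataset("fhswf/latin-german-parallel", split="train")
scores = ds["score"]

manual = sum(1 for s in scores if s == -1)      # manually / specially aligned pairs
high_conf = sum(1 for s in scores if s >= 1.2)  # high-confidence automated alignments
low_conf = len(scores) - manual - high_conf     # automated alignments below the threshold

print(f"manual: {manual}, automated >= 1.2: {high_conf}, automated < 1.2: {low_conf}")
```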
### Loading and Filtering the Dataset

You can filter the dataset down to high-quality alignments (manual alignments or high-scoring automated alignments) with the `filter()` method:
```python
from datasets import load_dataset, DatasetDict

# 1. Load the dataset (contains only the "train" split)
dataset = load_dataset("fhswf/latin-german-parallel")
train_dataset = dataset["train"]

# 2. Filter the dataset to keep only high-quality alignments:
#    - entries with score == -1 (manual alignment)
#    - entries with score >= 1.2 (high-confidence automated alignment)
def filter_by_score(example):
    return example["score"] == -1 or example["score"] >= 1.2

high_quality_train = train_dataset.filter(filter_by_score)

# Optional: split the high-quality data into train/validation/test
temp_splits = high_quality_train.train_test_split(test_size=0.01, seed=42)
test_validation_splits = temp_splits["test"].train_test_split(test_size=0.5, seed=42)

dataset = DatasetDict({
    "train": temp_splits["train"],
    "validation": test_validation_splits["train"],
    "test": test_validation_splits["test"],
})
```
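
As a quick follow-up, you can print the resulting split sizes and one sentence pair from the `DatasetDict` built above to confirm that the filtering and splitting behaved as expected:

```python
# Continuing from the DatasetDict built above
for split_name, split in dataset.items():
    print(split_name, len(split))

example = dataset["train"][0]
print("LA:", example["latin"])
print("DE:", example["german"])
```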