Parallel Monolingual Corpora
DAESO corpus
The DAESO Corpus is a parallel monolingual treebank of Dutch texts. The corpus contains over 2.1 million words of parallel and comparable text. About 678,000 words were aligned manually and about 1.5 million words were aligned automatically. A semantic relation has been added to the aligned words/sentences.
- 92.5 MB
- version 1.0 (2010)
- Download page
Bible corpus
A diachronic and synchronic parallel corpus of Bible translations in Dutch, English, German, and Swedish, with texts from the 14th century to the present.
Simplification data
Manually created datasets
The Dutch municipal corpus is a parallel monolingual corpus for the evaluation of sentence-level simplification in the Dutch municipal domain, created by Amsterdam Intelligence. It contains 1,311 parallel sentence pairs, automatically aligned from 50 documents of the Communications Department of the City of Amsterdam that were manually simplified to evaluate simplification for Dutch (a loading sketch follows the list below).
- 265 KB
- Download dataset (CSV file)
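Since the dataset is distributed as a single CSV file, a short pandas snippet is enough to inspect it. This is a minimal sketch: the file name and the column names used here ("source", "simplified") are assumptions, not documented fields of the corpus.

```python
import pandas as pd

# Hypothetical file and column names; check the downloaded CSV for the actual schema.
pairs = pd.read_csv("dutch_municipal_simplification.csv")

print(len(pairs))              # expected: 1,311 sentence pairs
print(pairs.columns.tolist())  # verify the real column names first

# Assuming columns named "source" and "simplified":
for _, row in pairs.head(3).iterrows():
    print("Original:  ", row["source"])
    print("Simplified:", row["simplified"])
```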
Automatically created datasets
1) The first dataset was created by Bram Vanroy for Dutch text simplification using text-to-text transfer transformers. It comprises Dutch source sentences along with corresponding simplified sentences generated with ChatGPT (a loading sketch follows the split sizes below).
- Training = 1013 sentences (262 KB)
- Validation = 126 sentences (32.6 KB)
- Test = 128 sentences (33 KB)
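If the splits are published on the Hugging Face Hub, they could be loaded along the following lines. The repository id below is a placeholder and the record layout is an assumption; adjust both to the actual release.

```python
from datasets import load_dataset

# Placeholder repository id for the ChatGPT-based Dutch simplification data.
ds = load_dataset("some-namespace/chatgpt-dutch-simplification")

# Expected split sizes: train ~1,013, validation ~126, test ~128 sentences.
print({split: len(ds[split]) for split in ds})

# Assuming each record pairs a source sentence with its simplification:
print(ds["train"][0])
```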
2) The second dataset was created by UWV Nederland as part of the "Leesplank" project, which aims to ensure ethical and legal soundness. It comprises 2.87 million paragraphs together with their simplified counterparts. The paragraphs are based on the Dutch Wikipedia extract from Gigacorpus, and the text was filtered and cleaned using GPT-4 1106 preview.
- 3.02 MB
- Download page
A more extended version of this dataset was made by Michiel Buisman and Bram Vanroy. This dataset contains a first, small set of variations of Wikipedia paragraphs in different styles (jargon, official, archaic language, technical, academic, and poetic).
- 3.02 MB
- Download page
3) The third dataset is the comparable corpus created by Nick Vanackere, pairing 12,687 Wablieft articles (2012-2017) with 206,466 De Standaard articles (2013-2017). To ensure comparability, only articles from 08/01/2013 to 16/11/2017 were considered, resulting in 8,744 Wablieft articles and 202,284 De Standaard articles (a sketch of this date filter is given below). The difference in the number of articles is due to publication frequency: Wablieft appears weekly, De Standaard daily.
- 17.5 MB
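The comparability restriction described above is essentially a date-range filter over both article collections. A minimal sketch, assuming the dates on this page are DD/MM/YYYY and that each article record carries a publication date; the article lists here are tiny stand-ins, not the real corpora.

```python
from datetime import date

# Publication window from the description, read as DD/MM/YYYY (an assumption).
START, END = date(2013, 1, 8), date(2017, 11, 16)

def in_overlap_period(article_date: date) -> bool:
    """Keep only articles published inside the shared window."""
    return START <= article_date <= END

# Illustrative stand-ins for the real Wablieft and De Standaard collections.
wablieft_articles = [
    {"date": date(2012, 5, 1), "text": "..."},   # dropped: before the window
    {"date": date(2014, 3, 10), "text": "..."},  # kept
]
standaard_articles = [
    {"date": date(2013, 2, 2), "text": "..."},   # kept
    {"date": date(2017, 12, 1), "text": "..."},  # dropped: after the window
]

wablieft_kept = [a for a in wablieft_articles if in_overlap_period(a["date"])]
standaard_kept = [a for a in standaard_articles if in_overlap_period(a["date"])]
# On the full data this filter yields 8,744 Wablieft and 202,284 De Standaard articles.
```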
Translated datasets
1) The first translated dataset was created by the Netherlands Forensic Institute using Meta's No Language Left Behind (NLLB) model. It comprises 167,000 aligned sentence pairs and serves as a Dutch translation of the SimpleWiki dataset (a translation sketch is given below).
- 8.67 MB
- Download dataset
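A minimal sketch of translating English SimpleWiki sentences into Dutch with a publicly available NLLB checkpoint via the transformers translation pipeline. The exact model size and decoding settings used by the Netherlands Forensic Institute are not stated here, so this only illustrates the general approach.

```python
from transformers import pipeline

# A small public NLLB checkpoint; the NFI pipeline may have used a larger one.
translator = pipeline(
    "translation",
    model="facebook/nllb-200-distilled-600M",
    src_lang="eng_Latn",
    tgt_lang="nld_Latn",
)

simplewiki_sentences = [
    "The cat sat on the mat.",
    "Amsterdam is the capital of the Netherlands.",
]

for result in translator(simplewiki_sentences, max_length=128):
    print(result["translation_text"])
```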
2) The second translated dataset was created by Theresa Seidl in the context of controllable sentence simplification for Dutch. It is a synthetic dataset combining the first 10,000 rows of the parallel WikiLarge dataset with the ASSET (Abstractive Sentence Simplification Evaluation and Tuning) dataset; the combined data was translated to Dutch using Google Neural Machine Translation.
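A rough sketch of the combination step, assuming both English source datasets have first been converted to CSV files with one sentence pair per row; the file names and layout are placeholders. The combined English pairs would then be sent through Google's translation service to obtain the Dutch version.

```python
import pandas as pd

# Placeholder file names; WikiLarge and ASSET ship in their own formats and
# would need to be converted to this layout first.
wikilarge = pd.read_csv("wikilarge_pairs.csv").head(10_000)  # first 10,000 rows
asset = pd.read_csv("asset_pairs.csv")

combined = pd.concat([wikilarge, asset], ignore_index=True)
combined.to_csv("combined_english_pairs.csv", index=False)
# Each English pair is subsequently translated to Dutch (e.g. with Google's
# translation API) to produce the synthetic Dutch simplification dataset.
```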