Parallel Monolingual Corpora

==DAESO Corpus==
The DAESO Corpus is a parallel monolingual treebank of Dutch texts containing more than 2.1 million words of parallel and comparable text. About 678,000 words were aligned manually and about 1.5 million words were aligned automatically. A semantic relation was added to the aligned words/phrases.
*92.5 MB
*version 1.0 (2010)
*[http://hdl.handle.net/10032/tm-a2-h9 Download page]
==Bible Corpus==
A diachronically and synchronically parallel corpus of Bible translations in Dutch, English, German and Swedish, with texts from the 14th century until today.
*[https://spraakbanken.gu.se/en/resources/openedges OpenEdges Download]
==[[Simplification Data]]==

'''Manually created datasets'''
The Dutch municipal corpus is a parallel monolingual corpus for the evaluation of sentence-level simplification in the Dutch municipal domain, created by Amsterdam Intelligence. It contains 1,311 parallel sentence pairs, automatically aligned from 50 documents from the Communications Department of the City of Amsterdam that were manually simplified to evaluate simplification for Dutch. A loading sketch follows the download link below.
* 265 KB
* [https://github.com/Amsterdam-AI-Team/dutch-municipal-text-simplification/tree/master/complex-simple-sentences Download dataset (CSV file)]
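A minimal sketch for loading the sentence pairs with pandas. The CSV file name below is an assumption; check the <code>complex-simple-sentences</code> folder in the repository for the actual name.

<syntaxhighlight lang="python">
# Minimal loading sketch; the file name below is hypothetical.
import pandas as pd

url = (
    "https://raw.githubusercontent.com/Amsterdam-AI-Team/"
    "dutch-municipal-text-simplification/master/"
    "complex-simple-sentences/sentence_pairs.csv"  # hypothetical file name
)
pairs = pd.read_csv(url)
print(pairs.head())  # inspect the complex-simple sentence columns
</syntaxhighlight>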
'''Automatically created datasets'''
1) The first dataset was created by Bram Vanroy for Dutch text simplification tasks using text-to-text transfer transformers. It comprises Dutch source sentences along with their corresponding simplified sentences, generated with ChatGPT. A loading sketch follows the download link below.
# Training = 1,013 sentences (262 KB)
# Validation = 126 sentences (32.6 KB)
# Test = 128 sentences (33 KB)
* [https://huggingface.co/datasets/BramVanroy/chatgpt-dutch-simplification Download page (CSV files)]
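A minimal sketch for loading this dataset from the Hugging Face Hub. The dataset ID comes from the download page above; the column names are not documented here, so inspect the features after loading.

<syntaxhighlight lang="python">
# Minimal loading sketch; column names are assumptions, so inspect
# the features before relying on them.
from datasets import load_dataset

ds = load_dataset("BramVanroy/chatgpt-dutch-simplification")
print(ds)                    # expected splits: train / validation / test
print(ds["train"].features)  # inspect the actual column names
</syntaxhighlight>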
2) The second dataset was created by UWV Nederland as part of the "Leesplank" project, which aims to produce ethically and legally sound datasets. It comprises 2.87 million paragraphs paired with their simplified counterparts. The paragraphs are based on the Dutch Wikipedia extract from [http://gigacorpus.nl/ Gigacorpus]. The text was filtered and cleaned using [https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/content-filter?tabs=warning%2Cpython-new GPT-4 1106 preview].
* 3.02 MB
* [https://huggingface.co/datasets/UWV/Leesplank_NL_wikipedia_simplifications Download page]
An extended version of this dataset was made by Michiel Buisman and Bram Vanroy. This dataset contains a first, small set of variations of Wikipedia paragraphs in different styles: jargon, official, archaïsche_taal (archaic language), technical, academic, and poetic. A sketch for loading both Leesplank datasets follows the download link below.
* 3.02 MB
* [https://huggingface.co/datasets/UWV/veringewikkelderingen Download page]
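A minimal sketch for loading the two UWV Leesplank datasets from the Hugging Face Hub. The dataset IDs come from the download pages above; split and column names are assumptions and should be inspected after loading.

<syntaxhighlight lang="python">
# Minimal loading sketch; print the dataset objects to inspect the
# actual splits and columns.
from datasets import load_dataset

simplifications = load_dataset("UWV/Leesplank_NL_wikipedia_simplifications")
variations = load_dataset("UWV/veringewikkelderingen")

print(simplifications)
print(variations)
</syntaxhighlight>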
3) The third dataset is a comparable corpus created by Nick Vanackere. It pairs 12,687 Wablieft articles from 2012-2017 with 206,466 De Standaard articles from 2013-2017. To ensure comparability, only articles from 08/01/2013 to 16/11/2017 were considered, resulting in 8,744 Wablieft articles and 202,284 De Standaard articles. The difference in the number of articles is due to publication frequency: Wablieft appears weekly, De Standaard daily. A download sketch follows the link below.
* 17.5 MB
* [https://github.com/nivack/comparable_corpus_Wablieft_deStandaard/blob/main/comparable_corpus_Wablieft_DeStandaard.txt Download page]
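A minimal sketch for fetching the corpus as plain text. The raw URL below is derived from the GitHub link above and is an assumption, so verify it before use.

<syntaxhighlight lang="python">
# Minimal download sketch; the raw URL is derived from the GitHub
# blob link and should be verified.
import requests

url = (
    "https://raw.githubusercontent.com/nivack/"
    "comparable_corpus_Wablieft_deStandaard/main/"
    "comparable_corpus_Wablieft_DeStandaard.txt"
)
text = requests.get(url, timeout=60).text
print(text[:500])  # peek at the beginning of the corpus
</syntaxhighlight>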
'''Translated datasets'''
1) The first translated dataset was created by the Netherlands Forensic Institute using Meta's [https://ai.meta.com/research/no-language-left-behind/ No Language Left Behind model]. It comprises 167,000 aligned sentence pairs and serves as a Dutch translation of the SimpleWiki [https://cs.pomona.edu/~dkauchak/simplification/ dataset]. A loading sketch follows the download link below.
* 8.67 MB
* [https://huggingface.co/datasets/NetherlandsForensicInstitute/simplewiki-translated-nl Download dataset]
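A minimal sketch for loading the translated sentence pairs from the Hugging Face Hub. The dataset ID comes from the download link above; split and column names are assumptions.

<syntaxhighlight lang="python">
# Minimal loading sketch; inspect splits and columns after loading.
from datasets import load_dataset

ds = load_dataset("NetherlandsForensicInstitute/simplewiki-translated-nl")
print(ds)
</syntaxhighlight>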
2) The second translated dataset was created by Theresa Seidl in the context of controllable sentence simplification in Dutch. This synthetic dataset combines the first 10,000 rows of the parallel [https://github.com/XingxingZhang/dress WikiLarge dataset] with the [https://github.com/facebookresearch ASSET (Abstractive Sentence Simplification Evaluation and Tuning) dataset], both translated to Dutch using [https://arxiv.org/pdf/1609.08144 Google Neural Machine Translation]. A sketch for fetching the files follows the download link below.
* [https://github.com/tsei902/simplify_dutch/tree/main/resources/datasets Download page]
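A minimal sketch for fetching the dataset files. The file names under <code>resources/datasets</code> are not listed on this page, so list the directory after cloning.

<syntaxhighlight lang="python">
# Minimal fetch sketch: clone the repository and list the dataset
# files; individual file names are not documented on this page.
import pathlib
import subprocess

subprocess.run(
    ["git", "clone", "--depth", "1",
     "https://github.com/tsei902/simplify_dutch.git"],
    check=True,
)
for path in sorted(pathlib.Path("simplify_dutch/resources/datasets").rglob("*")):
    print(path)
</syntaxhighlight>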