Simplification Data

Automatically translated datasets

ASSET Simplification Corpus

The Abstractive Sentence Simplification Evaluation and Tuning (ASSET) dataset (Alva-Manchego et al., 2020) was automatically translated into Dutch (Seidl et al., 2023) and is freely available; a loading sketch follows the references below.

  • GitHub download
  • Alva-Manchego, F., Martin, L., Bordes, A., Scarton, C., Sagot, B., & Specia, L. (2020). ASSET: A dataset for tuning and evaluation of sentence simplification models with multiple rewriting transformations. arXiv preprint arXiv:2005.00481.
  • Seidl, T., Vandeghinste, V., & Van de Cruys, T. (2023). Controllable Sentence Simplification in Dutch. KU Leuven. Faculteit Ingenieurswetenschappen.
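
ASSET pairs each original sentence with multiple reference simplifications. The Python sketch below shows one way to read such a corpus; the file names and layout (line-aligned plain-text files as in the original ASSET release) are assumptions and may not match the Dutch download.

  from pathlib import Path

  # Sketch only: file names and the number of references are assumptions
  # based on the original ASSET layout, not confirmed for the Dutch release.
  DATA_DIR = Path("asset-nl")   # hypothetical download location
  N_REFS = 10                   # ASSET ships several references per sentence

  originals = (DATA_DIR / "asset.valid.orig").read_text(encoding="utf-8").splitlines()
  references = [
      (DATA_DIR / f"asset.valid.simp.{i}").read_text(encoding="utf-8").splitlines()
      for i in range(N_REFS)
  ]

  # Pair every original sentence with its list of reference simplifications.
  pairs = [
      {"original": orig, "references": [refs[idx] for refs in references]}
      for idx, orig in enumerate(originals)
  ]
  print(pairs[0]["original"])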

WikiLarge Dataset

An automatic translation of the WikiLarge dataset into Dutch (Seidl et al., 2023), useful for automatic simplification and freely available. The original dataset is from Zhang & Lapata (2017).

  • GitHub download
  • Seidl, T., Vandeghinste, V., & Van de Cruys, T. (2023). Controllable Sentence Simplification in Dutch. KU Leuven. Faculteit Ingenieurswetenschappen.
  • Zhang, X. & Lapata, M. (2017). Sentence Simplification with Deep Reinforcement Learning. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 584–594, Copenhagen, Denmark. Association for Computational Linguistics.

NFI SimpleWiki dataset

A translated dataset created by the Netherlands Forensic Institute (NFI) using Meta's No Language Left Behind (NLLB) model. It comprises 167,000 aligned sentence pairs and serves as a Dutch translation of the SimpleWiki dataset.

Comparable Corpus Wablieft De Standaard

A corpus created by Nick Vanackere. It is a comparable corpus of 12,687 Wablieft articles (2012-2017) and 206,466 De Standaard articles (2013-2017). To ensure comparability, only articles published from 08/01/2013 to 16/11/2017 were considered, resulting in 8,744 Wablieft articles and 202,284 De Standaard articles. The difference in the number of articles is due to publication frequency: Wablieft appears weekly, De Standaard daily.
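
For illustration, a minimal Python sketch of the date filter described above; the day-first reading of the dates and the toy article records are assumptions made for this example.

  from datetime import datetime

  # Sketch of the comparability filter: keep only articles published inside
  # the shared window. The day-first date format ("%d/%m/%Y") is an
  # assumption; the real corpus files may be organised differently.
  WINDOW_START = datetime.strptime("08/01/2013", "%d/%m/%Y")
  WINDOW_END = datetime.strptime("16/11/2017", "%d/%m/%Y")

  def filter_to_window(articles):
      """Keep only (publication_date, text) pairs inside the shared window."""
      return [(d, t) for d, t in articles if WINDOW_START <= d <= WINDOW_END]

  # Toy records: one article inside the window, one outside it.
  toy_articles = [
      (datetime(2014, 5, 2), "Wablieft artikel ..."),
      (datetime(2012, 3, 1), "ouder artikel ..."),
  ]
  print(len(filter_to_window(toy_articles)))  # -> 1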

Synthetic datasets

UWV Leesplank NL Wikipedia

The set contains 2,391,206 prompt/result pairs, where the prompt is a paragraph from Dutch Wikipedia and the result is a simplified text, which may consist of more than one paragraph. The dataset was created by UWV as part of the project "Leesplank", an effort to generate datasets that are ethically and legally sound.

An extended version of this dataset was made by Michiel Buisman and Bram Vanroy. It contains a first, small set of variations of Wikipedia paragraphs in different styles (jargon, official, archaic language, technical, academic, and poetic).
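
If the prompt/result pairs are published on the Hugging Face Hub, they can be loaded with the datasets library as in the sketch below; the repository id and column names are assumptions, not confirmed identifiers.

  from datasets import load_dataset

  # Sketch only: the repository id and the "prompt"/"result" column names are
  # assumptions; check the actual dataset card before use.
  dataset = load_dataset("UWV/Leesplank_NL_wikipedia_simplifications", split="train")

  for example in dataset.select(range(3)):
      print("PROMPT:", example["prompt"][:80])
      print("RESULT:", example["result"][:80])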

ChatGPT-generated dataset by Van de Velde

Created for a master's thesis by Charlotte Van de Velde. The dataset contains Dutch source sentences and aligned simplified sentences generated with ChatGPT. All splits combined, the dataset consists of 1,267 entries:

  1. Training: 1,013 sentences (262 KB)
  2. Validation: 126 sentences (32.6 KB)
  3. Test: 128 sentences (33 KB)


Manually simplified datasets

Dutch municipal data

The Dutch municipal corpus is a parallel monolingual corpus for evaluating sentence-level simplification in the Dutch municipal domain. It was created by Amsterdam Intelligence and contains 1,311 automatically aligned parallel sentence pairs, drawn from 50 documents of the Communications Department of the City of Amsterdam that were manually simplified to support the evaluation of Dutch simplification.
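
As a rough illustration of how such a parallel corpus can be used, the sketch below scores a system output against a manually simplified reference with the SARI metric from the Hugging Face evaluate package; the toy sentences stand in for the corpus's aligned pairs, and this is not the corpus's official evaluation setup.

  import evaluate

  # Sketch only: toy sentences stand in for the corpus's aligned pairs.
  sari = evaluate.load("sari")

  sources = ["De gemeente verstrekt subsidies aan culturele instellingen."]
  predictions = ["De gemeente geeft geld aan culturele organisaties."]
  references = [["De gemeente geeft geld aan culturele instellingen."]]

  score = sari.compute(sources=sources, predictions=predictions, references=references)
  print(score)  # e.g. {'sari': ...}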