Language modeling

Large Language Models

  • Hugging Face Dutch Models
  • BERTje: A Dutch pre-trained BERT model developed at the University of Groningen. Compared to the multilingual BERT model, which includes Dutch but is based only on Wikipedia text, BERTje was trained on a large and diverse Dutch dataset of 2.4 billion tokens. The vocabulary of BERTje was changed in 2021. Original paper: http://arxiv.org/abs/1912.09582 (a loading sketch follows this list)
  • RobBERT-2023: A Dutch RoBERTa-based language model and an update of RobBERT-2022, which was in turn an update of RobBERT, a state-of-the-art Dutch RoBERTa-based language model trained in 2019. Original papers: RobBERT: http://arxiv.org/abs/1907.11692 and RobBERT-2022: http://arxiv.org/abs/2211.08192
  • Fietje 2: An open and efficient LLM for Dutch
  • Tweety
  • GPT-NL: A Dutch language model currently being developed by the non-profit parties TNO, NFI and SURF, funded by the Dutch Ministry of Economic Affairs and Climate Policy. It is currently being trained, and the first version is expected to be available in Q1 2025.
  • GEITje: A Large Open Language Model. This model is no longer available; see "The end of GEITje 1" on GoingDutch.ai
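
As a brief illustration of how these models are typically used, the sketch below loads BERTje from the Hugging Face Hub with the transformers library and fills in a masked token in a Dutch sentence. This is a minimal sketch, not an official recipe; the checkpoint name GroNLP/bert-base-dutch-cased is an assumption and should be verified on the Hub.

    import torch
    from transformers import AutoTokenizer, AutoModelForMaskedLM

    # Assumed BERTje checkpoint on the Hugging Face Hub.
    model_name = "GroNLP/bert-base-dutch-cased"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForMaskedLM.from_pretrained(model_name)

    # Fill in a masked token in a Dutch sentence.
    text = f"Amsterdam is de hoofdstad van {tokenizer.mask_token}."
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits

    # Locate the mask position and print the most likely token.
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
    predicted_id = logits[0, mask_pos].argmax(dim=-1)
    print(tokenizer.decode(predicted_id))

The same loading pattern applies to the other encoder models listed above, given their respective Hub identifiers.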

Multilingual Language Models including Dutch

  • MBart: Multilingual Denoising Pre-training for Neural Machine Translation
  • mT5: A massively multilingual pre-trained text-to-text transformer
  • NLLB: No Language Left Behind (a translation sketch follows this list)
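
The sketch below shows one way to run Dutch-to-English translation with an NLLB checkpoint through the transformers pipeline. It is a sketch under assumptions: the checkpoint name facebook/nllb-200-distilled-600M and the FLORES-200 language codes should be verified on the Hugging Face Hub.

    from transformers import pipeline

    # Assumed distilled NLLB checkpoint; larger variants exist on the Hub.
    translator = pipeline(
        "translation",
        model="facebook/nllb-200-distilled-600M",
        src_lang="nld_Latn",   # Dutch (FLORES-200 code)
        tgt_lang="eng_Latn",   # English (FLORES-200 code)
    )

    result = translator("Dit is een zin in het Nederlands.")
    print(result[0]["translation_text"])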

spaCy

spaCy is a free, open-source library for Natural Language Processing in Python. It ships trained pipelines for Dutch, such as nl_core_news_sm (see the sketch below).
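
A minimal sketch of running spaCy's small Dutch pipeline, assuming it has been installed first with `python -m spacy download nl_core_news_sm`:

    import spacy

    # Load the small trained Dutch pipeline.
    nlp = spacy.load("nl_core_news_sm")
    doc = nlp("De Nederlandse taaltechnologie ontwikkelt zich snel.")

    # Part-of-speech tags and dependency labels per token.
    for token in doc:
        print(token.text, token.pos_, token.dep_)

    # Named entities found in the text.
    for ent in doc.ents:
        print(ent.text, ent.label_)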

Language Modeling Benchmarks

DUMB

DUMB is a benchmark for evaluating the quality of language models on Dutch NLP tasks. The set of tasks is designed to be diverse and challenging, to test the limits of current language models. The specific datasets and formats are particularly suitable for fine-tuning encoder models; applicability to large generative models is yet to be determined. Please read the paper for more details.

LLM Leaderboard

This leaderboard tracks the performance of large language models on Dutch benchmarks.

ScandEval Dutch NLG

n-gram modeling

Colibri Core is an NLP tool, as well as a C++ and Python library, for working with basic linguistic constructions such as n-grams and skipgrams (i.e. patterns with one or more gaps, of either fixed or dynamic size) in a fast and memory-efficient way. At its core is the tool colibri-patternmodeller, which allows you to build, view, manipulate and query pattern models.
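
As a conceptual illustration only (this does not use Colibri Core itself), the pure-Python sketch below extracts the kind of n-grams and fixed-gap skipgrams that colibri-patternmodeller builds pattern models over; the "{*}" string is just a placeholder for a gapped position.

    from itertools import combinations

    def ngrams(tokens, n):
        """All contiguous n-grams of length n."""
        return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

    def skipgrams(tokens, n, gaps=1):
        """n-grams in which `gaps` interior tokens are replaced by a placeholder."""
        results = []
        for gram in ngrams(tokens, n):
            interior = range(1, n - 1)  # first and last token must remain fixed
            for gap_positions in combinations(interior, gaps):
                results.append(tuple("{*}" if i in gap_positions else tok
                                     for i, tok in enumerate(gram)))
        return results

    tokens = "dit is een korte voorbeeldzin".split()
    print(ngrams(tokens, 3))      # contiguous trigrams
    print(skipgrams(tokens, 3))   # trigrams with the middle token gapped

Colibri Core performs this kind of extraction over encoded corpora at scale; consult its documentation for the actual command-line and Python API.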