Language modeling
Dutch Language Models
- Hugging Face Dutch Models
- BERTje: A Dutch pre-trained BERT model developed at the University of Groningen. Unlike the multilingual BERT model, which includes Dutch but is based only on Wikipedia text, BERTje was trained on a large and diverse Dutch dataset of 2.4 billion tokens. The vocabulary size of BERTje changed in 2021. Original paper: http://arxiv.org/abs/1912.09582
- RobBERT-2023: A Dutch RoBERTa-based language model, an update of RobBERT-2022, which in turn was an update of RobBERT, a state-of-the-art RoBERTa-based Dutch language model trained in 2019 (see the usage sketch after this list). Original papers: RobBERT: https://arxiv.org/abs/2001.06286 and RobBERT-2022: https://arxiv.org/abs/2211.08192v1
- RobBERTje: A collection of distilled versions of the state-of-the-art Dutch RobBERT model. There are multiple models with different sizes and different training settings. Original paper: https://arxiv.org/abs/2204.13511v1
- ChocoLlama: A set of six Llama-2/3 based open models adapted to Dutch. Original paper: https://arxiv.org/html/2412.07633v1
- Fietje 2: Fietje is a family of small open language models (SLMs) specifically designed for Dutch. The models are based on Phi 2, an English-centric model with 2.7 billion parameters. The fietje-2b-chat variant is the one best suited for use as an assistant. Original paper: https://arxiv.org/abs/2412.15450
- Tweety-7b-dutch: A foundation model focused on Dutch, incorporating a Dutch tokenizer for better understanding and generation of Dutch text. It is built on the Mistral architecture. Original paper: https://arxiv.org/abs/2408.04303
- Reynaerde 7B Chat: An open conversational model for Dutch, based on Mistral v0.3 Instruct. This model is a fine-tuned version of ReBatch/Reynaerde-7B-Instruct on ReBatch/ultrafeedback_nl.
- GPT-NL: A Dutch language model being developed by the non-profit organisations TNO, NFI, and SURF, funded by the Dutch Ministry of Economic Affairs and Climate Policy. It is currently being trained, and the first version is expected to be available in Q1 2026.
- GEITje: A large open Dutch language model. The model is no longer available: https://goingdutch.ai/en/posts/geitje-takedown
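
Most of the models above can be loaded directly from the Hugging Face Hub. As a minimal sketch, assuming the transformers and torch packages are installed, the following queries RobBERT-2023 (model id DTAI-KULeuven/robbert-2023-dutch-large, taken from the link above) as a masked language model; the example sentence is illustrative.

from transformers import pipeline

# RobBERT-2023 is RoBERTa-based, so it can be queried as a masked LM;
# RoBERTa-style models use <mask> as the mask token.
fill = pipeline("fill-mask", model="DTAI-KULeuven/robbert-2023-dutch-large")
for prediction in fill("Het weer is vandaag erg <mask>."):
    print(prediction["token_str"], round(prediction["score"], 3))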
Multilingual Language Models including Dutch
- MBart: Multilingual Denoising Pre-training for Neural Machine Translation
- mT5: A massively multilingual pre-trained text-to-text transformer
- NLLB: No Language Left Behind (see the translation sketch after this list)
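
As a rough sketch of using one of these models for Dutch, the following assumes the transformers and torch packages and the publicly released facebook/nllb-200-distilled-600M checkpoint (the checkpoint name and FLORES-200 language codes are assumptions, not taken from this page):

from transformers import pipeline

# NLLB uses FLORES-200 language codes: nld_Latn for Dutch, eng_Latn for English.
translator = pipeline(
    "translation",
    model="facebook/nllb-200-distilled-600M",
    src_lang="nld_Latn",
    tgt_lang="eng_Latn",
)
result = translator("Taalmodellen voor het Nederlands worden steeds beter.")
print(result[0]["translation_text"])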
SpaCy
spaCy is a free, open-source library for Natural Language Processing in Python. Among many other languages, it provides pretrained pipelines for Dutch.
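
A minimal sketch, assuming spaCy is installed and the small Dutch pipeline has been downloaded with "python -m spacy download nl_core_news_sm":

import spacy

# Load the small pretrained Dutch pipeline (tagger, parser, lemmatizer, NER).
nlp = spacy.load("nl_core_news_sm")
doc = nlp("Taalmodellen worden steeds beter in het Nederlands.")
for token in doc:
    print(token.text, token.pos_, token.lemma_)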
Language Modeling Benchmarks
DUMB
DUMB is a benchmark for evaluating the quality of language models on Dutch NLP tasks. The set of tasks is designed to be diverse and challenging, to test the limits of current language models. The specific datasets and formats are particularly suitable for fine-tuning encoder models; their applicability to large generative models is yet to be determined. Please read the paper for more details.
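
To make the encoder fine-tuning setting concrete, here is a minimal sketch of the general recipe (not DUMB's own code): fine-tuning BERTje (GroNLP/bert-base-dutch-cased on the Hugging Face Hub) on a two-example toy sentiment task with the transformers Trainer. The data, labels, and hyperparameters are purely illustrative.

import torch
from torch.utils.data import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "GroNLP/bert-base-dutch-cased"  # BERTje on the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Toy data, for illustration only; a real run would use a DUMB task dataset.
texts = ["Wat een prachtige film!", "Dit was een vreselijke ervaring."]
labels = [1, 0]  # 1 = positive, 0 = negative

class ToySentimentDataset(Dataset):
    def __len__(self):
        return len(texts)

    def __getitem__(self, i):
        enc = tokenizer(texts[i], truncation=True, padding="max_length",
                        max_length=32, return_tensors="pt")
        item = {k: v.squeeze(0) for k, v in enc.items()}
        item["labels"] = torch.tensor(labels[i])
        return item

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bertje-toy", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=ToySentimentDataset(),
)
trainer.train()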
LLM Leaderboard
A leaderboard tracking the performance of large language models on Dutch benchmarks.
ScandEval Dutch NLG
A leaderboard from the ScandEval benchmark suite, evaluating generative language models on Dutch natural language generation tasks.
n-gram modeling
Colibri Core is an NLP tool as well as a C++ and Python library for working with basic linguistic constructions such as n-grams and skipgrams (i.e. patterns with one or more gaps, of either fixed or dynamic size) in a quick and memory-efficient way. At its core is the tool colibri-patternmodeller, which allows you to build, view, manipulate, and query pattern models.
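
To illustrate the constructions Colibri Core handles, here is a plain-Python sketch of the concepts (not Colibri Core's own API) that extracts n-grams and single-gap skipgrams from a token list, using {*} as the gap placeholder:

from itertools import combinations

def ngrams(tokens, n):
    """All contiguous n-token sequences."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def skipgrams(tokens, n, gaps=1):
    """N-grams with `gaps` interior tokens replaced by a {*} placeholder."""
    result = []
    for gram in ngrams(tokens, n):
        for gap_positions in combinations(range(1, n - 1), gaps):
            result.append(tuple("{*}" if i in gap_positions else tok
                                for i, tok in enumerate(gram)))
    return result

tokens = "de kat zat op de mat".split()
print(ngrams(tokens, 2))      # bigrams, e.g. ('de', 'kat')
print(skipgrams(tokens, 3))   # trigrams with one gap, e.g. ('de', '{*}', 'zat')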