Language modeling

Large Language Models

  • Hugging Face Dutch Models
  • RobBERT-2023: A Dutch RoBERTa-based Language Model (a loading sketch follows this list)
  • BERTje: A Dutch BERT model
  • GEITje: A Large Open Language Model. This model is no longer available; see "The end of GEITje 1" on GoingDutch.ai.
  • Fietje 2: An open and efficient LLM for Dutch
  • Tweety
  • GPT-NL: A Dutch language model being developed by the non-profit organisations TNO, NFI and SURF and funded by the Dutch Ministry of Economic Affairs and Climate Policy. The model is currently being trained, and the first version is expected to be available in Q1 2025.
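
Most of the models above are distributed through the Hugging Face Hub and can be loaded with the transformers library. The snippet below is a minimal sketch that queries RobBERT-2023 as a masked language model; it assumes the transformers and torch packages are installed, and the example sentence is purely illustrative.

  from transformers import pipeline

  # RobBERT-2023 is a RoBERTa-style masked language model, so the
  # fill-mask pipeline applies; the model id comes from the list above.
  fill_mask = pipeline("fill-mask", model="DTAI-KULeuven/robbert-2023-dutch-large")

  # RoBERTa-style models use "<mask>" as the mask token.
  for prediction in fill_mask("Er staat een <mask> in mijn tuin."):
      print(prediction["token_str"], round(prediction["score"], 3))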

Multilingual Language Models including Dutch

  • MBart: Multilingual Denoising Pre-training for Neural Machine Translation
  • mT5: A massively multilingual pre-trained text-to-text transformer
  • NLLB: No Language Left Behind (a translation sketch follows this list)
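
These multilingual models can also be loaded through the Hugging Face transformers library. The snippet below is a minimal sketch of Dutch-to-English translation with NLLB; the specific checkpoint name (facebook/nllb-200-distilled-600M) and the example sentence are assumptions, not taken from this page.

  from transformers import pipeline

  # NLLB uses FLORES-200 language codes, e.g. "nld_Latn" for Dutch
  # and "eng_Latn" for English.
  translator = pipeline(
      "translation",
      model="facebook/nllb-200-distilled-600M",  # assumed checkpoint
      src_lang="nld_Latn",
      tgt_lang="eng_Latn",
  )

  result = translator("Dit is een voorbeeldzin in het Nederlands.")
  print(result[0]["translation_text"])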

SpaCy

spaCy is a free, open-source library for Natural Language Processing in Python. It ships trained pipelines for Dutch alongside many other languages.
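
As a minimal sketch of how the Dutch support is typically used (assuming the small Dutch pipeline nl_core_news_sm has been downloaded with "python -m spacy download nl_core_news_sm"; the example sentence is illustrative):

  import spacy

  # Load the small Dutch pipeline (tokenisation, tagging, lemmatisation, NER).
  nlp = spacy.load("nl_core_news_sm")
  doc = nlp("Amsterdam is de hoofdstad van Nederland.")

  # Part-of-speech tags and lemmas per token
  for token in doc:
      print(token.text, token.pos_, token.lemma_)

  # Named entities recognised by the Dutch pipeline
  for ent in doc.ents:
      print(ent.text, ent.label_)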

Language Modeling Benchmarks

DUMB

DUMB is a benchmark for evaluating the quality of language models on Dutch NLP tasks. The set of tasks is designed to be diverse and challenging, in order to test the limits of current language models. The specific datasets and formats are particularly suitable for fine-tuning encoder models; applicability to large generative models is yet to be determined. Please read the paper for more details.

LLM Leaderboard

A leaderboard that tracks the performance of large language models on Dutch benchmarks.

ScandEval Dutch NLG

n-gram modeling

Colibri Core is an NLP tool as well as a C++ and Python library for working with basic linguistic constructions such as n-grams and skipgrams (i.e. patterns with one or more gaps, either of fixed or dynamic size) in a quick and memory-efficient way. At its core is the tool colibri-patternmodeller, which allows you to build, view, manipulate and query pattern models.
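
A rough sketch of the Python binding (colibricore) is shown below: it class-encodes a plain-text corpus, trains an unindexed pattern model of n-grams, and prints the extracted patterns with their counts. The file names are placeholders, and the exact option names should be checked against the Colibri Core documentation.

  import colibricore

  # Placeholder file names; any tokenised plain-text corpus will do.
  corpus_txt = "corpus.txt"
  class_file = "corpus.colibri.cls"
  corpus_dat = "corpus.colibri.dat"

  # Build a class encoding of the corpus and encode it to Colibri Core's
  # binary format.
  encoder = colibricore.ClassEncoder()
  encoder.build(corpus_txt)
  encoder.save(class_file)
  encoder.encodefile(corpus_txt, corpus_dat)
  decoder = colibricore.ClassDecoder(class_file)

  # Train an unindexed pattern model of n-grams up to length 3,
  # keeping only patterns that occur at least twice.
  options = colibricore.PatternModelOptions(mintokens=2, maxlength=3)
  model = colibricore.UnindexedPatternModel()
  model.train(corpus_dat, options)

  # Decode and print each extracted pattern with its occurrence count.
  for pattern, count in model.items():
      print(pattern.tostring(decoder), count)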