Language modeling
Revision as of 09:28, 19 January 2024
n-gram modeling
Colibri Core is an NLP tool, as well as a C++ and Python library, for working with basic linguistic constructions such as n-grams and skipgrams (patterns with one or more gaps, of either fixed or dynamic size) in a fast and memory-efficient way. At its core is the tool colibri-patternmodeller, which allows you to build, view, manipulate and query pattern models.
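To illustrate the constructions Colibri Core works with, here is a minimal pure-Python sketch of n-gram and skipgram extraction. This is not Colibri Core's own API; the function names and the `{*}` gap marker are illustrative assumptions.

```python
from itertools import combinations

def ngrams(tokens, n):
    # All contiguous n-grams in a token sequence.
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def skipgrams(tokens, n):
    # Skipgrams of total span n: n-grams whose inner positions may be
    # replaced by a gap marker "{*}" (first and last tokens stay fixed).
    result = []
    for gram in ngrams(tokens, n):
        inner = range(1, n - 1)
        for k in range(1, len(inner) + 1):
            for gaps in combinations(inner, k):
                result.append(tuple("{*}" if i in gaps else tok
                                    for i, tok in enumerate(gram)))
    return result

tokens = "to be or not to be".split()
print(ngrams(tokens, 2))    # contiguous bigrams
print(skipgrams(tokens, 3)) # e.g. ('to', '{*}', 'or')
```

A real pattern modeller additionally counts these patterns over a large corpus in a compressed representation so they can be queried without re-scanning the text.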
Large Language Models
- Hugging Face Dutch Models
- RobBERT: A Dutch RoBERTa-based Language Model
- BERTje: A Dutch BERT model
- GEITje: A Large Open Language Model
Multilingual Language Models including Dutch
spaCy
spaCy is a free open-source library for Natural Language Processing in Python.
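A minimal sketch of using spaCy for Dutch, assuming the spacy package is installed. A blank Dutch pipeline provides language-specific tokenization without downloading a trained model; trained pipelines (e.g. nl_core_news_sm, installed separately) add tagging, parsing and named-entity recognition on top.

```python
import spacy

# Blank pipeline: Dutch tokenization rules only, no trained components.
nlp = spacy.blank("nl")
doc = nlp("Dit is een voorbeeldzin in het Nederlands.")
print([token.text for token in doc])
# → ['Dit', 'is', 'een', 'voorbeeldzin', 'in', 'het', 'Nederlands', '.']
```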
Language Modeling Benchmarks
DUMB
DUMB is a benchmark for evaluating the quality of language models on Dutch NLP tasks. The set of tasks is designed to be diverse and challenging, to test the limits of current language models. The specific datasets and formats are particularly suitable for fine-tuning encoder models; applicability to large generative models has yet to be determined. Please read the paper for more details.
- DuMB: https://dumbench.nl/
LLM Leaderboard
This leaderboard tracks how large language models perform on Dutch benchmarks.
- https://huggingface.co/spaces/BramVanroy/open_dutch_llm_leaderboard