Language modeling
Revision as of 13:52, 30 May 2024
n-gram modeling
Colibri Core is an NLP tool as well as a C++ and Python library for working with basic linguistic constructions such as n-grams and skipgrams (i.e. patterns with one or more gaps, of either fixed or dynamic size) in a fast and memory-efficient way. At its core is the tool colibri-patternmodeller, which lets you build, view, manipulate and query pattern models.
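To make the two pattern types concrete, here is a minimal pure-Python sketch of n-gram and fixed-size-gap skipgram extraction. Note that this only illustrates the concepts; it is not Colibri Core's actual API, and the gap marker '{*}' is an illustrative convention, not a symbol the library prescribes.

```python
from itertools import combinations

def ngrams(tokens, n):
    """Extract all contiguous n-grams from a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def skipgrams(tokens, n, max_gaps=1):
    """Extract skipgrams: n-grams in which up to `max_gaps` interior
    tokens are replaced by a gap marker '{*}' (fixed-size gaps).
    Gaps never occur at the edges, only at interior positions."""
    results = set()
    for gram in ngrams(tokens, n):
        interior = range(1, n - 1)
        for k in range(1, max_gaps + 1):
            for gap_positions in combinations(interior, k):
                results.add(tuple(
                    '{*}' if i in gap_positions else tok
                    for i, tok in enumerate(gram)
                ))
    return results

tokens = "to be or not to be".split()
print(ngrams(tokens, 2))    # bigrams, e.g. ('to', 'be'), ('be', 'or'), ...
print(skipgrams(tokens, 3)) # e.g. ('to', '{*}', 'or') from "to be or"
```

A pattern model, in this spirit, would count how often each such pattern occurs across a corpus; Colibri Core does this with compact integer encodings rather than Python tuples, which is where its speed and memory efficiency come from.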
Large Language Models
- Hugging Face Dutch Models
- RobBERT: A Dutch RoBERTa-based Language Model
- BERTje: A Dutch BERT model
- GEITje: A Large Open Dutch Language Model
Multilingual Language Models including Dutch
- GPT-3
- MBart
spaCy
spaCy is a free open-source library for Natural Language Processing in Python.
Language Modeling Benchmarks
DUMB
DUMB is a benchmark for evaluating the quality of language models on Dutch NLP tasks. The set of tasks is designed to be diverse and challenging, to test the limits of current language models. The specific datasets and formats are particularly suitable for fine-tuning encoder models; their applicability to large generative models is yet to be determined. Please read the paper for more details.
LLM Leaderboard
A leaderboard tracking the performance of large language models on Dutch benchmarks.