Language modeling
==Multilingual Language Models including Dutch==
* [https://openai.com/ GPT-3]
* [https://huggingface.co/docs/transformers/model_doc/mbart MBart]

==SpaCy==
* [https://spacy.io/models/nl Dutch models]
Revision as of 08:46, 20 January 2022
==n-gram modeling==
Colibri Core is an NLP tool, as well as a C++ and Python library, for working with basic linguistic constructions such as n-grams and skipgrams (i.e. patterns with one or more gaps, of either fixed or dynamic size) in a fast and memory-efficient way. At its core is the tool colibri-patternmodeller, which allows you to build, view, manipulate and query pattern models.
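To illustrate what n-grams and skipgrams are (not how Colibri Core itself implements them; its actual C++/Python API differs, and the function names below are illustrative only), here is a minimal plain-Python sketch that extracts contiguous n-grams and fixed-size skipgrams, where interior tokens are replaced by a gap marker:

```python
from collections import Counter
from itertools import combinations

def ngrams(tokens, n):
    """All contiguous n-grams of a token sequence."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def skipgrams(tokens, n):
    """Skipgrams of size n: n-grams in which one or more interior
    tokens (never the first or last) are replaced by a gap '_'."""
    result = []
    for gram in ngrams(tokens, n):
        for k in range(1, n - 1):  # number of gaps
            for gaps in combinations(range(1, n - 1), k):  # gap positions
                result.append(tuple('_' if i in gaps else tok
                                    for i, tok in enumerate(gram)))
    return result

tokens = "to be or not to be".split()
trigram_counts = Counter(ngrams(tokens, 3))
skipgram_counts = Counter(skipgrams(tokens, 3))
```

A pattern model, in this simplified view, is just such a frequency table over extracted patterns; Colibri Core additionally compresses the corpus into a binary class encoding so that counting stays memory-efficient on large corpora.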
==BERT-like models==
* [https://huggingface.co/models?search=dutch Hugging Face Dutch Models]
* [https://people.cs.kuleuven.be/~pieter.delobelle/robbert/ RobBERT]: A Dutch RoBERTa-based Language Model