Translations:Language modeling/16/en: Difference between revisions

From Clarin K-Centre
DUMB is a benchmark for evaluating the quality of language models on Dutch NLP tasks. The set of tasks is designed to be diverse and challenging, testing the limits of current language models. The specific datasets and formats are particularly suitable for fine-tuning encoder models; applicability to large generative models is yet to be determined. Original paper: https://arxiv.org/abs/2305.13026

Latest revision as of 14:59, 13 November 2025
