
Embeddings

From Clarin K-Centre
For Large Language Models, see [[Language_Modeling]].




==GeenStijl.nl embeddings ==
The GeenStijl.nl embeddings were trained on over 8M messages from the controversial Dutch websites GeenStijl and Dumpert, yielding a word embedding model that captures the toxic language use in the dataset. The trained word embeddings (±150 MB) are released for free and may be useful for further study of toxic online discourse.


*[https://www.textgain.com/portfolio/geenstijl-embeddings/ Project page]
*[https://www.textgain.com/resources/publications/#geenstijl Report]
*[https://www.textgain.com/wp-content/uploads/2021/06/TGTR4-geenstijl.pdf Report]
*[https://www.textgain.com/resources/publications/geenstijl-nl-embeddings-tgtr-4/ Available upon request]
*[https://www.textgain.com/projects/geenstijl/geenstijl_embeddings.zip Download page]


==NLPL Word Embeddings Repository==
Maintained by the University of Oslo, the repository provides models trained with clearly stated hyperparameters on clearly described and linguistically pre-processed corpora.


For Dutch, Word2Vec and ELMo embeddings are available.


*[http://vectors.nlpl.eu/repository/ Repository page]

Revision as of 14:32, 7 May 2025


==Word2Vec embeddings==
Repository for the word embeddings described in ''Evaluating Unsupervised Dutch Word Embeddings as a Linguistic Resource'', presented at LREC 2016.

==FastText embeddings==
Word vectors in 157 languages, trained on Common Crawl and Wikipedia corpora.

==Coosto embeddings==
This repository contains a Word2Vec model trained on a large Dutch corpus comprising social media messages and posts from Dutch news sites, blogs and forums.


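As a minimal illustration of how pre-trained word vectors such as those listed above are typically queried, cosine similarity between vectors is commonly used as a proxy for word relatedness. The sketch below uses made-up toy vectors and a hypothetical `embeddings` dictionary for illustration only, not data from any of the resources on this page:

```python
import math

# Toy 3-dimensional "embeddings" for illustration only; real models
# (e.g. Word2Vec or fastText) typically use 100-300 dimensions.
embeddings = {
    "kat":  [0.9, 0.1, 0.0],   # "cat"
    "hond": [0.8, 0.2, 0.1],   # "dog"
    "auto": [0.0, 0.1, 0.9],   # "car"
}

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Semantically related words should score higher than unrelated ones.
sim_kat_hond = cosine_similarity(embeddings["kat"], embeddings["hond"])
sim_kat_auto = cosine_similarity(embeddings["kat"], embeddings["auto"])
print(sim_kat_hond > sim_kat_auto)  # prints True: the related pair scores higher
```

In practice one would load the downloaded vector files with an embedding library rather than hand-coding vectors, but the similarity computation is the same.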