Basic language processing

By basic language processing we mean part-of-speech tagging, lemmatization, named entity recognition, chunking and similar tasks that label individual words.


==Contemporary Dutch==
 
=== TextLens ===
 
TextLens is a web-based corpus processing environment for processing texts in different languages. The user can choose between different taggers and different download formats. It requires a CLARIN login. It includes the Stanza, SpaCy and LeTs taggers.
 
* [https://portal.clarin.ivdnt.org/textlens/ Website]
 
=== Frog ===


Frog is an integration of memory-based natural language processing (NLP) modules developed for Dutch. Frog's current version will tokenize, tag, lemmatize, and morphologically segment word tokens in Dutch text files, will assign a dependency graph to each sentence, will identify the base phrase chunks in the sentence, and will attempt to find and label all named entities.


*[https://webservices.cls.ru.nl/frog Online version]
*[https://github.com/proycon/frog_webservice Code webservice]
*[https://languagemachines.github.io/frog/ Project website]
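
As a rough illustration of the per-token output Frog produces, here is a minimal sketch that assumes the python-frog binding is installed alongside Frog itself; the option and dictionary field names are indicative and may differ between versions.

<syntaxhighlight lang="python">
# Minimal sketch, assuming the python-frog binding is installed next to Frog.
# Option and dictionary keys are indicative; check the binding's documentation
# for the exact names in your version.
from frog import Frog, FrogOptions

frog = Frog(FrogOptions(parser=False))          # skip dependency parsing for speed
tokens = frog.process("De hond liep gisteren door het park.")
for token in tokens:
    # each token is a dict holding, among other fields, the word form, lemma and POS tag
    print(token["text"], token["lemma"], token["pos"])
</syntaxhighlight>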


=== Avobmat ===


AVOBMAT (Analysis and Visualization of Bibliographic Metadata and Text) is a multilingual text mining service created in close collaboration with researchers. It empowers scholars, educators, and students to explore large collections of textual and bibliographic data—without programming skills or costly hardware.
For Dutch it includes SpaCy.


*[https://avobmat.hu/ Website]


=== LeTs ===


LeTs is a preprocessor that can be used for Dutch, German, English and French. It is included in TextLens.


*[https://lt3.ugent.be/lets-demo/ Demo]


=== Spacy ===
 
The Dutch pipelines provide the following components: tok2vec, morphologizer, tagger, parser, lemmatizer (trainable_lemmatizer), senter and ner. SpaCy is included in TextLens.


*[https://spacy.io/models/nl Dutch models]
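
For local use outside TextLens, a minimal sketch with the small Dutch pipeline (nl_core_news_sm, one of the Dutch models listed above) could look like this:

<syntaxhighlight lang="python">
# Minimal sketch: tag, lemmatize and NER-label a Dutch sentence with spaCy's
# small Dutch pipeline. Install with:
#   pip install spacy
#   python -m spacy download nl_core_news_sm
import spacy

nlp = spacy.load("nl_core_news_sm")
doc = nlp("De hond liep gisteren door het Vondelpark.")
for token in doc:
    # word form, universal POS tag, lemma and named-entity IOB tag per token
    print(token.text, token.pos_, token.lemma_, token.ent_iob_)
for ent in doc.ents:
    # named entities with their labels
    print(ent.text, ent.label_)
</syntaxhighlight>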


=== Stanza - A Python NLP Package for Many Human Languages ===


Stanza is a collection of accurate and efficient tools for the linguistic analysis of many human languages. It is included in TextLens.


* [https://stanfordnlp.github.io/stanza/#stanza--a-python-nlp-package-for-many-human-languages Stanza github pages]
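
For local use, a minimal sketch of the Dutch Stanza pipeline (assuming the Dutch models have been downloaded) could look like this:

<syntaxhighlight lang="python">
# Minimal sketch: tokenize, POS-tag and lemmatize a Dutch sentence with Stanza.
# Install with:  pip install stanza
import stanza

stanza.download("nl")                                   # fetch the Dutch models (one-off)
nlp = stanza.Pipeline("nl", processors="tokenize,pos,lemma")
doc = nlp("De hond liep gisteren door het park.")
for sentence in doc.sentences:
    for word in sentence.words:
        # word form, universal POS tag and lemma
        print(word.text, word.upos, word.lemma)
</syntaxhighlight>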


===Trankit===


Trankit is a light-weight Transformer-based Python toolkit for multilingual Natural Language Processing (NLP). It provides a trainable pipeline for fundamental NLP tasks in over 100 languages, and 90 downloadable pretrained pipelines for 56 languages.


*[https://github.com/nlp-uoregon/trankit Trankit Github pages]
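
A minimal sketch of the pretrained Dutch pipeline could look as follows; the exact output keys are indicative and should be checked against the Trankit documentation.

<syntaxhighlight lang="python">
# Minimal sketch: run Trankit's pretrained Dutch pipeline for tokenization,
# POS tagging and dependency parsing. Install with:  pip install trankit
# Output keys shown here are indicative; consult the Trankit documentation.
from trankit import Pipeline

p = Pipeline("dutch")                       # downloads the pretrained Dutch models on first use
result = p.posdep("De hond liep gisteren door het park.")
for sentence in result["sentences"]:
    for token in sentence["tokens"]:
        # word form with universal POS tag and dependency relation
        print(token["text"], token.get("upos"), token.get("deprel"))
</syntaxhighlight>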


==Historical Dutch==
 
===GaLAHaD: Generating Linguistic Annotations for Historical Dutch===
 
GaLAHaD serves two purposes: one is to make annotation and tool evaluation easily accessible to researchers; the other is to make it easy for developers to contribute their tools and models to the platform and thus compare them against other tools on gold-standard material.
 
*[https://portal.clarin.ivdnt.org/galahad Webservice]
 
===Adelheid Tagger-Lemmatizer: A Distributed Lemmatizer for Historical Dutch===


With this web application an end user can have historical Dutch texts tokenized, lemmatized and part-of-speech tagged, using the most appropriate resources (such as lexica) for the text in question. For each specific text, the user can select the best resources from those available in CLARIN, wherever they might reside, supplemented where necessary by the user's own lexica.


Unfortunately, we are not aware of any functioning instance of this application.
 
