Basic language processing
Under basic language processing, we understand part-of-speech tagging, lemmatization, named entity recognition, chunking and similar tasks which label individual words.
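As an illustration of what "labelling individual words" means in practice, the output of such tasks can be pictured as one label set per token. The sentence and labels below are hypothetical (Universal Dependencies-style tags), chosen only to make the idea concrete:

```python
# Illustrative only: the kind of per-word labels that part-of-speech
# tagging and lemmatization produce (UD-style tags, Dutch example).
tokens = ["De", "kat", "zit", "op", "de", "mat", "."]
pos    = ["DET", "NOUN", "VERB", "ADP", "DET", "NOUN", "PUNCT"]
lemmas = ["de", "kat", "zitten", "op", "de", "mat", "."]

# One (token, POS, lemma) triple per word:
analysis = list(zip(tokens, pos, lemmas))
```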
==Contemporary Dutch==

=== TextLens ===
TextLens is a web-based corpus processing environment that allows processing texts in different languages. The user can choose between different taggers and different download formats. It requires a CLARIN login. It includes the Stanza, spaCy and LeTs taggers.
* [https://portal.clarin.ivdnt.org/textlens/ Website]
=== Frog ===
Frog is an integration of memory-based natural language processing (NLP) modules developed for Dutch. The current version tokenizes, tags, lemmatizes, and morphologically segments word tokens in Dutch text files, assigns a dependency graph to each sentence, identifies the base phrase chunks in the sentence, and attempts to find and label all named entities.
*[https://github.com/proycon/frog_webservice Code webservice]
*[https://languagemachines.github.io/frog/ Project website]
=== AVOBMAT ===
AVOBMAT (Analysis and Visualization of Bibliographic Metadata and Text) is a multilingual text mining service created in close collaboration with researchers. It enables scholars, educators, and students to explore large collections of textual and bibliographic data without programming skills or costly hardware. For Dutch it includes spaCy.
*[https://avobmat.hu/ Website]
=== LeTs ===
LeTs is a preprocessor that can be used for Dutch, German, English and French. It is included in TextLens.
*[https://lt3.ugent.be/lets-demo/ Demo]
=== spaCy ===
spaCy's Dutch pipelines provide the following components: tok2vec, morphologizer, tagger, parser, lemmatizer (trainable_lemmatizer), senter and ner. spaCy is included in TextLens.
*[https://spacy.io/models/nl Dutch models]
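A minimal sketch of using spaCy for Dutch. Tokenization works with a blank pipeline and no downloaded model; the trained components listed above (tagger, lemmatizer, parser, ner) require first downloading a pretrained pipeline such as nl_core_news_sm, which is assumed, not shown, here:

```python
import spacy

# A blank Dutch pipeline gives rule-based tokenization without any
# downloaded model:
nlp = spacy.blank("nl")
doc = nlp("De kat zit op de mat.")
tokens = [t.text for t in doc]

# For the full component stack (tagger, lemmatizer, parser, ner),
# a pretrained pipeline would be loaded instead, e.g.:
#   python -m spacy download nl_core_news_sm
#   nlp = spacy.load("nl_core_news_sm")
```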
=== Stanza ===
Stanza is a Python NLP package: a collection of accurate and efficient tools for the linguistic analysis of many human languages. It is included in TextLens.
* [https://stanfordnlp.github.io/stanza/#stanza--a-python-nlp-package-for-many-human-languages Stanza github pages]
=== Trankit ===
Trankit is a lightweight Transformer-based Python toolkit for multilingual natural language processing (NLP). It provides a trainable pipeline for fundamental NLP tasks in over 100 languages, and 90 downloadable pretrained pipelines for 56 languages.
*[https://github.com/nlp-uoregon/trankit Trankit GitHub pages]
==Historical Dutch==

=== GaLAHaD: Generating Linguistic Annotations for Historical Dutch ===
GaLAHaD serves two purposes: to make annotation and tool evaluation easily accessible to researchers, and to make it easy for developers to contribute their tools and models to the platform, and thus compare them to other tools against gold-standard material.
*[https://portal.clarin.ivdnt.org/galahad Webservice]
=== Adelheid Tagger-Lemmatizer: A Distributed Lemmatizer for Historical Dutch ===
With this web application an end user can have historical Dutch texts tokenized, lemmatized and part-of-speech tagged, using the most appropriate resources (such as lexica) for the text in question. For each specific text, the user can select the best resources from those available in CLARIN, wherever they might reside, supplemented where necessary by the user's own lexica.

Unfortunately, we are not aware of any functioning instance of this application.
Latest revision as of 09:46, 29 January 2026