Q&A

This page lists the questions we received.

How can I get access to CLARIN tools and resources without an academic account?

It is possible to ask for an account at the CLARIN Account Registration page.

Do you have any domain specific corpora?

On the main page you will find a listing of the different types of corpora we have. Domain-specific corpora are the Parliamentary corpora and the Corpora of academic texts. Under the Parallel corpora there are also domain-specific corpora.

Are there literary texts available?

From the Public Domain Page you can find a link to the downloadable public domain files in DBNL.

Is there a speech recognition engine available for Belgian Dutch?

Since April 2022, a new ASR engine has been available that is specifically suited to speech recognition for Belgian Dutch.

See also the page dedicated to Speech_recognition systems.

Which corpora are available for Automatic Simplification for Dutch?

There are currently no parallel corpora available in which regular Dutch has been simplified, which makes it impossible to treat simplification straightforwardly as a machine translation problem.

If you would consider developing a form of unsupervised simplification, there are, however, a number of corpora available that can be considered to be in a form of easy Dutch. These corpora are the Wablieft corpus (easy Belgian Dutch), the Basilex corpus (texts for children in Dutch primary schools), and WAI-NOT (very easy Belgian Dutch).

Are there any corpora that contain dialogues between two or more people?

There are a number of dialog components in CGN (Spoken Dutch Corpus).

  • a. Spontane conversaties ('face-to-face'): spontaneous face-to-face conversations
  • c. Telefoondialogen opgenomen m.b.v. platform: telephone dialogues recorded via a telephone platform
  • d. Telefoondialogen opgenomen m.b.v. minidiskrecorder: telephone dialogues recorded with a minidisc recorder
  • e. Zakelijke onderhandelingen: business negotiations

There is also the IFA Dialog Video corpus.

"A collection of annotated video recordings of friendly Face-to-Face dialogs. It is modelled on the Face-to-Face dialogs in the Spoken Dutch Corpus (CGN). The procedures and design of the corpus were adapted to make this corpus useful for other researchers of Dutch speech. For this corpus 20 dialog conversations of 15 minutes were recorded and annotated, in total 5 hours of speech."

Advice on finding financial support for compiling a comparable medical corpus English-Dutch

We are looking into whether this is fundable by CLARIN Resource Families Project Funding. The site indicates that it is best to first submit the idea informally to the CLARIN office, so they can advise us ("In view of the flexible nature of this call, applicants are encouraged to send in a project idea beforehand, in order to allow CLARIN Office to give additional guidelines and assess the eligibility of plans.")

We would need to be clear, though, about whether this is a parallel corpus, which is one of the categories in the Resource Families, or a comparable corpus, which is not. We might ask the CLARIN office whether they think it would be useful to add such a category.

We would have to identify a number of potential data sources, and make sure we can make the collected data publicly available for research, without GDPR or IP issues.

We should also be aware of the EMEA corpus in OPUS.

Is it possible to automate the finding of word conversions for specific corpora?

Do you think it is possible to draw up a list of conversion pairs for Dutch, i.e. words that can be used in more than one part of speech, on the basis of corpora (or possibly treebanks)? I am particularly interested in the parts of speech noun, adjective, and verb. So, for example, the search algorithm should be able to identify word pairs such as douche/douche and geloven/geloof in the following examples as conversion pairs:

  • ik douche / ik neem een douche
  • wij geloven in iets / zijn geloof in iets
  • de crimineel zweert zijn criminele gedrag af
  • wij onderhielden contacten / het onderhoud van het huis
  • wij droogden het droge laken
  • we trommelden op de trommel

Answer: The e-Lex lexicon allows you to search for word forms with multiple POS tags, as you ask. This lexicon is based on CGN. But your question goes a little further, I think: the verb geloven has the lemma geloven, while the noun conversion has the lemma geloof, so we should also check whether the noun's lemma occurs as a verb form, and likewise for adjectives. A Perl script was written that extracts the requested sets from the lexicon file -- the results were sent to the requester.
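
The original answer used a Perl script; below is a minimal sketch of the same idea in Python. It assumes e-Lex is a tab-separated file and that the lemma and POS tag sit in the columns indicated; the column numbers and the file name are assumptions to adapt to your copy of the lexicon.

# Sketch: find lemmas that are used as more than one part of speech.
# Column positions and file name are assumptions; adapt to your copy of e-Lex.
from collections import defaultdict

LEMMA_COL = 1    # assumed position of the lemma
POS_COL = 3      # assumed position of the CGN-style POS tag, e.g. N(...), WW(...)
TARGET = ("N", "ADJ", "WW")   # noun, adjective, verb in the CGN tagset

pos_per_lemma = defaultdict(set)

with open("e-lex.txt", encoding="utf-8") as lexicon:
    for line in lexicon:
        fields = line.rstrip("\n").split("\t")
        if len(fields) <= max(LEMMA_COL, POS_COL):
            continue
        lemma = fields[LEMMA_COL]
        main_pos = fields[POS_COL].split("(")[0]   # strip the CGN feature part
        if main_pos in TARGET:
            pos_per_lemma[lemma].add(main_pos)

# Keep only lemmas attested with more than one of the target parts of speech.
for lemma, tags in sorted(pos_per_lemma.items()):
    if len(tags) > 1:
        print(lemma, "/".join(sorted(tags)))

As noted above, pairs like geloven/geloof have different lemmas, so a second pass that checks whether a noun's lemma also occurs among the verb forms (and the other way round) is needed to catch those.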

I am looking for spoken and written corpora for a contrastive study German/Dutch in which I can find actual word forms

We refer you to OpenSonar, which is the only search engine for both the Spoken Dutch Corpus (CGN) and the SoNaR reference corpus and is available with a CLARIN login. An alternative is the Corpus Hedendaags Nederlands website (http://chn.ivdnt.org/), the online search engine for the Corpus of Contemporary Dutch (CHN). If you need more recent data: at INT we have a monitor corpus with weekly newspaper dumps at our disposal in which we can launch searches for you -- unfortunately we cannot make this monitor corpus available due to IP restrictions.

I want to find all Dutch lemmas in which there is double derivation. Can you help me?

The e-Lex lexicon contains the morphology of lemmas as its third data field. We extracted all rows in which the derivation sign (|) occurs twice and provided our user with a detailed list of the resulting patterns and how often they occur in e-Lex:

(ing)[N|V.](s)[N|N.N]	9344
(heid)[N|A.](s)[N|N.N]	1470
(er)[N|V.](s)[N|N.N]	1230
(ig)[A|N.](heid)[N|A.]	937
(atie)[N|V.](ief)[A|N.]	769
(iseer)[V|N.](atie)[N|V.]	603
...
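
As a rough illustration, the extraction could be scripted along the following lines. This is a sketch, assuming a tab-separated lexicon with the morphology in the third field (as described above); the file name and the exact shape of the morphology strings are assumptions.

# Sketch: count morphological patterns containing two derivation steps.
# Assumes a tab-separated lexicon with the lemma's morphology in the third
# field and '|' marking derivation, e.g. (ing)[N|V.](s)[N|N.N].
from collections import Counter
import re

patterns = Counter()

with open("e-lex.txt", encoding="utf-8") as lexicon:
    for line in lexicon:
        fields = line.rstrip("\n").split("\t")
        if len(fields) < 3:
            continue
        morphology = fields[2]
        if morphology.count("|") == 2:   # rows with exactly two derivation signs
            # keep only the affix groups that carry a derivation sign
            affixes = re.findall(r"\([^()]*\)\[[^\]]*\|[^\]]*\]", morphology)
            patterns["".join(affixes)] += 1

for pattern, freq in patterns.most_common(25):
    print(f"{pattern}\t{freq}")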

Can I get a distribution of the suffixes on Dutch adjectives?

The e-Lex lexicon contains the morphology of lemmas as its third data field. For each lemma ID that is an adjective, we counted the frequency of the last suffix; lemmas without morphology were assigned category '0':

0	11781
(ig)	1781
(achtig)	507
(baar)	473
(isch)	431
(elijk)	392
(end)	367
(en)	292
(s)	278
(lijk)	237
(erig)	229
(ief)	168
(aal)	155
(loos)	138
(d)	116
...
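
A comparable sketch for this count, again assuming a tab-separated lexicon with the morphology in the third field; the positions of the lemma ID and POS columns and the file name are assumptions to adapt.

# Sketch: distribution of the final suffix on adjective lemmas.
from collections import Counter
import re

ID_COL, MORPH_COL, POS_COL = 0, 2, 3   # assumed column positions

seen = set()
suffixes = Counter()

with open("e-lex.txt", encoding="utf-8") as lexicon:
    for line in lexicon:
        fields = line.rstrip("\n").split("\t")
        if len(fields) <= max(ID_COL, MORPH_COL, POS_COL):
            continue
        lemma_id, morphology, pos = fields[ID_COL], fields[MORPH_COL], fields[POS_COL]
        if not pos.startswith("ADJ") or lemma_id in seen:
            continue
        seen.add(lemma_id)
        groups = re.findall(r"\([^()]*\)", morphology)
        suffixes[groups[-1] if groups else "0"] += 1   # '0' = no morphology

for suffix, freq in suffixes.most_common():
    print(f"{suffix}\t{freq}")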

I want to use the CGN wave files, but found a dead link on the original website

There is a permanent link for the CGN wave file download page: http://hdl.handle.net/10032/tm-a2-k6

Which treebanks are available for Dutch?

We have added the Treebanks page to this wiki to answer this question.

Is there a corpus with imperative sentences?

There is no such corpus explicitly available. If we provide GrETEL with an example imperative sentence, we can extract similar sentences, which should be usable as an imperative corpus.

We will run some topic modeling analyses on some Flemish/Belgian Dutch data we have. Because our data set is relatively small for this kind of task, the idea is to train the topic model on a much larger corpus (e.g., social media posts). Do you know of any such corpus that might be available?

Take a look at [1]

How can I calculate the readability of Dutch text

There is a tool called T-scan (https://tscan.hum.uu.nl/tscan/) that may be helpful there.

How can I calculate Flesch-Douma for Dutch

The Flesch-Douma formula requires two things to be counted: the number of words per sentence and the number of syllables per word. While the number of words in a sentence is easily counted with any scripting language, the number of syllables may seem more difficult. The e-Lex lexicon (http://hdl.handle.net/10032/tm-a2-h2) contains hyphenation patterns and hence the number of syllables per word. An alternative is to count the number of vowel clusters in each word using regular expressions, which should also give you the number of syllables.
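
As a sketch of the vowel-cluster approach: the function below counts words per sentence and syllables per word with regular expressions and plugs them into the Douma adaptation of the Flesch formula. The coefficients used here (206.84, 0.93 and 77) are the commonly cited ones, but verify them against your own reference before relying on the scores.

# Sketch: Flesch-Douma readability score for a Dutch text, counting syllables
# as vowel clusters with a regular expression, as suggested above.
# The coefficients are assumed from the commonly cited Douma adaptation of the
# Flesch Reading Ease formula; verify them against your own reference.
import re

VOWEL_CLUSTER = re.compile(r"[aeiouyàáäèéëìíïòóöùúü]+", re.IGNORECASE)

def flesch_douma(text: str) -> float:
    # Very rough sentence and word splitting; a real tokeniser will do better.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"\w+", text)
    syllables = sum(len(VOWEL_CLUSTER.findall(word)) for word in words)
    words_per_sentence = len(words) / len(sentences)
    syllables_per_word = syllables / len(words)
    return 206.84 - 0.93 * words_per_sentence - 77 * syllables_per_word

print(flesch_douma("Dit is een korte zin. Dit is nog een eenvoudige zin."))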

I am looking for a parallel corpus of Dutch-Turkish texts.

We are comparing the Dutch and Turkish translations of the Linguistic Inquiry and Word Count [LIWC] dictionaries. Do you know of any corpora that would be suitable? I found several candidates on OPUS (https://opus.nlpl.eu/), and downloaded the TED2020 talks. However, these are .xml files with paragraph/line IDs and I need .txt files. Would you have a script or a way to automatically recode them and remove the unnecessary tags?

We would also refer you to OPUS: you can find parallel txt files if you download the 'moses' format. You then get a zip which contains a .nl and a .tr file, and these are sentence-aligned, i.e. the same line number in the two files should be translations of each other.
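
Reading such a pair of files side by side is straightforward; a minimal sketch follows. The file names are examples and depend on the corpus you download, but the principle (line N of one file matches line N of the other) is as described above.

# Sketch: pairing up the sentence-aligned files from an OPUS 'moses' download.
# The file names are examples and depend on the corpus; line N of the .nl file
# is the translation of line N of the .tr file.
with open("TED2020.nl-tr.nl", encoding="utf-8") as nl_file, \
     open("TED2020.nl-tr.tr", encoding="utf-8") as tr_file:
    for nl_line, tr_line in zip(nl_file, tr_file):
        nl_sentence, tr_sentence = nl_line.strip(), tr_line.strip()
        # do something with the aligned pair, e.g. write it to a combined file
        print(f"{nl_sentence}\t{tr_sentence}")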

Do you know, are there any reasonable sentiment analysis algorithms/approaches for Dutch?

We have now added a page on sentiment analysis to this wiki.

Are there any spoken corpora available of spontaneous speech with time stamped transcriptions, which are freely available?

On [2] we've collected what is available for Dutch.

The Corpus Gesproken Nederlands (CGN) has a section of spontaneous speech, with time-stamped transcriptions, freely available.

How can I combine searches for the "green" and "red" word order in OpenSonar?

In principle it is possible to query both orders at the same time; see the example for more info.

Can you give advice on setting up a transcription process for a spoken language corpus?

A meeting was held in which we discussed the use of speech recognition, segmentation, speaker diarisation, and post-editing of speech recognition output. We advised including K-Dutch in the project proposal so that K-Dutch can take care of converting ASR output to ELAN tiers, merging ELAN tiers, etc.

What are the character n-gram frequencies for Dutch?

We have counted the n-gram frequencies up to trigrams and made them available at Character_N-grams.
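
For other texts you can compute the same kind of counts yourself; a minimal sketch (with a hypothetical input file name) is:

# Sketch: character n-gram frequencies (up to trigrams) for a plain-text file.
from collections import Counter

counts = {n: Counter() for n in (1, 2, 3)}

with open("corpus.txt", encoding="utf-8") as corpus:
    for line in corpus:
        text = line.rstrip("\n")
        for n in (1, 2, 3):
            counts[n].update(text[i:i + n] for i in range(len(text) - n + 1))

for n in (1, 2, 3):
    print(f"--- {n}-grams ---")
    for gram, freq in counts[n].most_common(10):
        print(f"{gram!r}\t{freq}")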

Is there a user-friendly interface for working with the EMEA part of the Lassy Large treebank?

Lassy Large is extremely large and therefore not entirely available through online query tools such as GrETEL and PaQu, although the latter provides access to the newspaper part. A suggestion is to download the data and then import it into an XML database engine such as BaseX, which will allow you to query it with XPath and XQuery.
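
The same kind of XPath you would use in BaseX can also be run directly from a script. Below is a minimal sketch with lxml, assuming the Lassy/alpino_ds XML format in which every syntactic node is a <node> element with attributes such as rel, cat, word and begin; the directory name and the example query are placeholders.

# Sketch: running an XPath query over Lassy-style treebank files with lxml.
from pathlib import Path
from lxml import etree

QUERY = "//node[@rel='obj1' and @cat='np']"   # example: direct-object NPs

for xml_file in Path("lassy-emea").glob("**/*.xml"):
    tree = etree.parse(str(xml_file))
    for node in tree.xpath(QUERY):
        # collect the words under the matching node in surface order
        leaves = sorted(node.xpath(".//node[@word]"),
                        key=lambda n: int(n.get("begin", "0")))
        print(xml_file.name, " ".join(leaf.get("word") for leaf in leaves))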

What are good POS taggers for non-standard Dutch?

See the Basic_language_processing page.

Are there any lexical profiling tools for Dutch?

The request is whether there are any user-friendly and freely available tools with which teachers can assess the lexical profile of a text (to which frequency levels the words belong, how many of the most frequent words a reader should know to understand 95% of the text, etc.).

While we are not aware of any tools that do exactly this, there are a number of tools that go part of the way. We suggest taking a look at:

  • LINT, which assesses the readability of a text
  • T-scan, which is what LINT is based on and which is a bit less user-friendly

There are also a number of tools that can be found at the Instituut voor Levende Talen.
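
The core computation behind such a lexical profile is simple once you have a frequency-ranked word list; below is a minimal sketch, assuming a plain-text list with one word per line ordered from most to least frequent (the file names are hypothetical).

# Sketch: lexical profile of a text against a frequency-ranked word list.
# Assumes 'wordlist.txt' holds one word per line, ordered from most frequent
# to least frequent; file names are hypothetical.
import re

BANDS = (1000, 2000, 5000, 10000)   # frequency bands to report on

with open("wordlist.txt", encoding="utf-8") as f:
    rank = {word.strip().lower(): i + 1 for i, word in enumerate(f)}

with open("text.txt", encoding="utf-8") as f:
    tokens = re.findall(r"\w+", f.read().lower())

total = len(tokens)
for band in BANDS:
    covered = sum(1 for t in tokens if rank.get(t, float("inf")) <= band)
    print(f"most frequent {band:>6} words cover {covered / total:.1%} of the text")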