The LIDES Coding Manual (2000)
DOI: 10.1177/13670069000040020101

Cited by 33 publications (11 citation statements)
References 0 publications
“…Here, we showcase the different perspectives that the treatment of lone items provides on CS. Barnett et al. (2000) developed the Multilingual Index (M-Index) as a measure of the multilinguality of different corpora, i.e., the distribution of languages in a corpus. Guzman et al. (2017) also created the Integration Index (I-Index), which is meant to measure the probability of CS in different multilingual corpora.…”
Section: Related Methods
confidence: 99%
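The M-Index and I-Index cited above have standard formulations in the code-switching literature: the M-Index normalizes the concentration of language proportions so that a monolingual corpus scores 0 and a perfectly balanced one scores 1, while the I-Index is the fraction of adjacent token pairs where the language switches. A minimal sketch in Python, assuming token-level language tags (the function names and the input format are illustrative, not from the original manual):

```python
from collections import Counter

def m_index(tags):
    """M-Index (Barnett et al. 2000): (1 - sum(p_j^2)) / ((k-1) * sum(p_j^2)),
    where p_j is the proportion of tokens in language j and k the number of
    languages present. 0 = monolingual, 1 = perfectly balanced."""
    counts = Counter(tags)
    k = len(counts)
    if k < 2:
        return 0.0  # monolingual corpus by convention
    total = len(tags)
    p_squared = sum((c / total) ** 2 for c in counts.values())
    return (1 - p_squared) / ((k - 1) * p_squared)

def i_index(tags):
    """I-Index (Guzman et al. 2017): number of switch points divided by
    the number of adjacent token pairs (n - 1)."""
    if len(tags) < 2:
        return 0.0
    switches = sum(1 for a, b in zip(tags, tags[1:]) if a != b)
    return switches / (len(tags) - 1)
```

For example, a fully alternating sequence like `["en", "es", "en", "es"]` yields an M-Index of 1.0 (balanced languages) and an I-Index of 1.0 (every pair switches), while `["en", "en", "en"]` yields 0.0 on both.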
“…Data were gathered at different times for each family: 2003-2008 (the family in France) and 2007-2010 (the families in Norway and Finland). Transcription of the collected data followed the LIDES coding manual (Barnett et al. 2000). This transcription guideline seems appropriate for members of Indian communities who have three to four languages in their verbal repertoire.…”
Section: Methodology and Hypotheses
confidence: 99%
“…Therefore, traditional supervised evaluation metrics such as BLEU [32] and ROUGE [33] cannot be used directly to evaluate the personification aspects of code-mixed generation models. Similarly, other extrinsic evaluation measures such as the Multilingual Index (M-Index) [34] and Burstiness and Span Entropy [35] cannot be used, as these metrics are predominantly employed to evaluate how well generative models capture corpus-level switching patterns. To overcome the limitations of the existing evaluation metrics, we propose four metrics for benchmarking generated code-mixed texts against the historical utterances of different users.…”
Section: B Evaluation Metrics
confidence: 99%
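Burstiness and span entropy, mentioned above as corpus-level switching metrics, are typically computed over the lengths of maximal monolingual runs ("spans") in a tagged utterance: burstiness follows the Goh-Barabási form (σ − μ)/(σ + μ), and span entropy is the Shannon entropy of the span-length distribution. A hedged sketch, assuming token-level language tags (details may differ from the formulations in [35]):

```python
import math
import statistics
from collections import Counter

def span_lengths(tags):
    """Lengths of maximal monolingual runs, e.g. en en es en -> [2, 1, 1]."""
    lengths, run = [], 1
    for a, b in zip(tags, tags[1:]):
        if a == b:
            run += 1
        else:
            lengths.append(run)
            run = 1
    lengths.append(run)
    return lengths

def burstiness(tags):
    """(sigma - mu) / (sigma + mu) over span lengths:
    -1 for perfectly periodic switching, values near 0 for random,
    approaching +1 for highly bursty switching."""
    lengths = span_lengths(tags)
    if len(lengths) < 2:
        return 0.0  # undefined for a single span; convention assumed here
    mu = statistics.mean(lengths)
    sigma = statistics.pstdev(lengths)
    return (sigma - mu) / (sigma + mu)

def span_entropy(tags):
    """Shannon entropy (bits) of the span-length distribution."""
    lengths = span_lengths(tags)
    n = len(lengths)
    return -sum((c / n) * math.log2(c / n) for c in Counter(lengths).values())
```

For a strictly alternating sequence such as `["en", "es", "en"]` every span has length 1, so the burstiness is exactly -1 and the span entropy is 0; corpora with more varied span lengths score higher on both.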