2017
DOI: 10.1002/acp.3376

‘Lyin' Ted’, ‘Crooked Hillary’, and ‘Deceptive Donald’: Language of Lies in the 2016 US Presidential Debates

Abstract: Language in the high-stakes 2016 US presidential primary campaign was contentious, filled with name-calling, personal attacks, and insults. Language in debates served at least three political functions: for image making, to imagine potential realities currently not in practice, and to disavow facts. In past research, the reality monitoring (RM) framework has discriminated accurately between truthful and deceptive accounts (~70% classification). Truthful accounts show greater sensory, time and space, and affect…

Cited by 33 publications (33 citation statements)
References 61 publications
“…Numerous approaches aim to discriminate between truthful and deceptive texts, for example by automatically scoring texts on their richness of detail (Bond et al., 2017; Bond & Lee, 2005) or the proportion of named entities (Kleinberg, Mozes, Arntz, & Verschuere, 2017). Little is known about whether these approaches can be integrated through a common, underlying feature that may explain why a text is classified as truthful or deceptive.…”
Section: Concreteness in Deception Detection
confidence: 99%
“…Named entities (Kleinberg, Mozes, et al., 2017): classifying words or phrases into, e.g., specific names, organizations, or times. Reality Monitoring (Bond et al., 2017; Bond & Lee, 2005): increased perceptual, spatial, and temporal details; decreased cognitive operations. Linguistic specificity (Zhou et al., 2004): increased perceptual details; first-person, second-person, and third-person pronouns. Person index: personal pronouns, proper nouns, and person names (ranging from abstract to concrete). Motion verbs (Newman et al., 2003): concrete, simple motion verbs (i.e. concrete) versus complex evaluations and judgments (i.e.…”
Section: Operationalization
confidence: 99%
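The named-entity feature in the list above can be approximated with an off-the-shelf NER pipeline: count how many tokens of a statement fall inside entity spans and divide by the statement's token count. Below is a minimal sketch assuming spaCy and its small English model; the function name and example sentence are illustrative and not taken from the cited papers.

# Illustrative sketch: proportion of named-entity tokens in a statement,
# in the spirit of the named-entities feature (Kleinberg, Mozes, et al., 2017).
# spaCy and "en_core_web_sm" are assumptions for this sketch, not necessarily
# the tooling used in the cited work.
import spacy

nlp = spacy.load("en_core_web_sm")

def named_entity_proportion(text: str) -> float:
    """Share of tokens that fall inside any named-entity span."""
    doc = nlp(text)
    if len(doc) == 0:
        return 0.0
    entity_tokens = sum(len(ent) for ent in doc.ents)  # tokens covered by entity spans
    return entity_tokens / len(doc)

print(named_entity_proportion("I met Ted Cruz in Cleveland on Tuesday."))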
“…The LIWC counts how many words per input text belong to predefined psycholinguistic lexicon categories and has been used for verbal deception research before (e.g., Bond et al., 2017). For the current experiment, we used the categories “time” (e.g., “once” and “since”) and “space” (e.g., “above” and “outside”), each of which is standardized by the word count per statement and question type.…”
Section: Methods
confidence: 99%
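To make the LIWC-style standardization concrete, a minimal sketch follows: count how many tokens of a statement belong to a category word list and divide by the statement's word count. The small "time" and "space" word lists here are illustrative placeholders, not the proprietary LIWC dictionaries used in the cited work.

# Illustrative sketch of LIWC-style category scoring: count category words and
# standardize by the statement's word count. The tiny word lists below are
# placeholders; the real LIWC "time" and "space" dictionaries are proprietary
# and far larger.
import re

CATEGORIES = {
    "time": {"once", "since", "before", "after", "now", "later"},
    "space": {"above", "outside", "under", "near", "behind", "inside"},
}

def category_proportions(statement: str) -> dict:
    """Return each category's word count divided by the statement's word count."""
    tokens = re.findall(r"[a-z']+", statement.lower())
    total = len(tokens) or 1  # avoid division by zero for empty input
    return {
        name: sum(tok in words for tok in tokens) / total
        for name, words in CATEGORIES.items()
    }

print(category_proportions("Once we stepped outside, the crowd above was louder than before."))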