2020
DOI: 10.1177/0963947020949439

Depictions of deception: A corpus-based analysis of five Shakespearean characters

Abstract: Drawing on the Enhanced Shakespearean Corpus: First Folio Plus and using corpus-based methods, this article explores, quantitatively and qualitatively, Shakespeare’s depictions of five deceptive characters (Aaron, Tamora, Iago, Lady Macbeth and Falstaff). Our analysis adopts three strands: firstly, statistical keywords relating to each character are examined to determine what these tell us about their natures more generally. Secondly, the wordlists produced for each of the five characters are drawn upon to det…

Cited by 9 publications (4 citation statements)
References 29 publications
“…For instance, for requests, Culpeper and Archer (2008) identify a tendency to use multiple requests in the same turn in their data from Early Modern English trials and, to a lesser extent, in drama. Likewise, Archer and Gillings (2020), in their study of lying and deception in Shakespeare's plays, note that deceptive features tend to occur in clusters. Vaughan et al. (2017) find that certain items of vague language tend to co-occur.…”
Section: Case Study: Automatic Extraction of High-Density Passages of…
Citation type: mentioning
confidence: 99%
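The “high-density passages” in the section title above rest on the observation quoted in the excerpt: features of interest (here, potential markers of deception) tend to cluster rather than appear in isolation. The citing paper’s own extraction method is not reproduced on this page; the following is only a minimal Python sketch of the general idea, using a sliding window over pre-tagged feature positions. The function name, window size, and threshold are hypothetical choices, not values from the cited work.

# A minimal sketch of sliding-window density detection, in the spirit of the
# "high-density passages" idea referenced above. Tagging of deception-related
# features is assumed to have happened already; each token is simply paired
# with a 0/1 flag. All names and cut-offs here are hypothetical.

def high_density_windows(flags, window=50, threshold=3):
    """Return (start, count) for every window of `window` tokens that
    contains at least `threshold` flagged features."""
    hits = []
    count = sum(flags[:window])
    if count >= threshold:
        hits.append((0, count))
    for start in range(1, len(flags) - window + 1):
        # Slide the window one token: drop the leftmost flag, add the new one.
        count += flags[start + window - 1] - flags[start - 1]
        if count >= threshold:
            hits.append((start, count))
    return hits

if __name__ == "__main__":
    # Toy example: 200 tokens with a cluster of flagged features near token 60.
    flags = [0] * 200
    for i in (58, 61, 64, 70, 72):
        flags[i] = 1
    for start, count in high_density_windows(flags, window=30, threshold=3):
        print(f"tokens {start}-{start + 29}: {count} features")

Overlapping windows that pass the threshold could then be merged into a single candidate passage for manual inspection; that merging step is omitted here for brevity.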
“…Working from the assumption that the most salient words and semantic domains are meaningful in some way, this initial analysis allows us to get a broad overview of the contents of the dataset before focusing on our specific areas of interest. Keywords and key semantic domains are identified using a three-part filtering procedure, as in Archer and Gillings (2020).…”
Section: Corpus-Assisted Discourse Analysis
Citation type: mentioning
confidence: 99%
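The three-part filtering procedure is only named, not spelled out, in the excerpt above, so the sketch below should not be read as the authors’ actual settings. As a rough illustration of what such a keyword filter can look like, this is one common keyness pipeline: a minimum-frequency floor, a log-likelihood (G2) significance test against a reference corpus, and a restriction to positively key (overused) items. The cut-off values are conventional placeholders.

import math
from collections import Counter

def log_likelihood(a, b, c, d):
    """Dunning's log-likelihood (G2) keyness statistic: a and b are a word's
    frequencies in the target and reference corpora, c and d the total token
    counts of those corpora."""
    e1 = c * (a + b) / (c + d)
    e2 = d * (a + b) / (c + d)
    ll = 0.0
    if a:
        ll += a * math.log(a / e1)
    if b:
        ll += b * math.log(b / e2)
    return 2 * ll

def keywords(target_tokens, reference_tokens, min_freq=5, min_ll=15.13):
    """Sketch of a three-step keyword filter (placeholders, not the cited
    procedure): (1) minimum frequency, (2) G2 significance (15.13 ~ p < 0.0001
    at 1 d.f.), (3) keep only items overused in the target corpus."""
    tgt, ref = Counter(target_tokens), Counter(reference_tokens)
    c, d = sum(tgt.values()), sum(ref.values())  # assumes a non-empty reference
    results = []
    for word, a in tgt.items():
        if a < min_freq:
            continue                              # step 1: frequency floor
        b = ref.get(word, 0)
        ll = log_likelihood(a, b, c, d)
        if ll < min_ll:
            continue                              # step 2: significance
        if a / c <= b / d:
            continue                              # step 3: positive keys only
        results.append((word, a, ll))
    return sorted(results, key=lambda t: t[2], reverse=True)

The same filtering logic extends to key semantic domains by counting domain tags (e.g., from a semantic tagger) instead of word forms.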
“…• Type 1: a structured top-down process whereby the researcher applies an a priori framework or set of categories to a number of concordance lines (e.g., Culpeper and Gillings (2018) coding for politeness in the BNC1994/2014, and Lutzky (2021b) exploring the pragmatic functions of sorry in customer service interactions on Twitter);
• Type 2: a structured bottom-up process whereby the researcher assigns categories to the concordance lines, but these come organically from the corpus rather than being imposed on it (e.g., Kopf (2019) exploring the ways that content policies are enforced on Wikipedia, and Zottola et al. (2021) identifying coping strategies that patients use in autobiographical narratives while waiting for assessment at a transgender health clinic);
• Type 3: an unstructured bottom-up process whereby the researcher eyeballs the concordance lines and lets that qualitative holistic judgement form the basis of analysis (e.g., McEnery, Baker and Dayrell (2019) identifying previously unrecorded droughts in nineteenth-century Britain, and Levon (2016) exploring the extent to which users on a question-and-answer forum use their replies as an opportunity for stance-taking);
• Type 4: an unstructured top-down process whereby the researcher identifies concordance lines which match categories proven to be relevant in other datasets (e.g., Archer and Gillings (2020) identifying potential indicators of deception in Shakespeare's plays, and Appleton (2021) exploring how the unification of Germany is discussed in Hansard).
Types 1 and 2 both call for the researcher to sift through each and every concordance line within a sample, but they differ with regard to whether categorisation is something that is applied to, or extracted from, the data.…”
Section: Concordance Analysis
Citation type: mentioning
confidence: 99%
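All four types above start from the same object: a set of concordance (keyword-in-context, KWIC) lines for a chosen search term. For readers unfamiliar with the format, here is a minimal sketch of a KWIC concordancer; the alignment width and context size are arbitrary illustrative choices, not drawn from any of the cited studies.

def kwic(tokens, node, context=5):
    """Minimal keyword-in-context concordancer: returns one aligned line per
    occurrence of `node`, with `context` tokens on either side."""
    lines = []
    for i, tok in enumerate(tokens):
        if tok.lower() == node.lower():
            left = " ".join(tokens[max(0, i - context):i])
            right = " ".join(tokens[i + 1:i + 1 + context])
            lines.append(f"{left:>40}  [{tok}]  {right}")
    return lines

if __name__ == "__main__":
    text = ("I am not what I am and what I seem I am not "
            "for I will wear my heart upon my sleeve").split()
    for line in kwic(text, "am"):
        print(line)

Each of Types 1 to 4 then differs only in what the analyst does with lines like these: apply a pre-existing coding scheme, derive categories from the lines themselves, read holistically, or match against categories established elsewhere.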