The SAGE Handbook of Visual Research Methods 2020
DOI: 10.4135/9781526417015.n26
Ethnomethodology and the Visual Practices of Looking, Visualization, and Embodied Action

Cited by 7 publications (7 citation statements)
References 35 publications
“…There is now a well-established body of work that draws heavily on EMCA to explore empirically the ways that visuality is achieved in social action as a set of working practices (Broth et al., 2014; Heath and Luff, 2000; Heinemann, 2016; Hindmarsh and Heath, 2000). Methodologically, in this tradition the empirical exploration of vision as action relies substantially on video as a means of analysing the ways that people make the social world accountable through vision, and the complexity of resources, such as gesture, gaze, physical objects, and talk, through which 'seeing' is performed, made, and made possible (Ball & Smith, 2012). For instance, Heath and Luff's (1992) […] Similar studies have been carried out in very diverse settings, including air traffic control rooms (R. Harper & Hughes, 1993), recreational cycling (McIlvenny, 2013), emergency care (Bjørn & Rødje, 2008), medical surgery (Bezemer et al., 2011), archaeology (Goodwin, 1994) and brain scanning (Alač, 2008).…”
Section: Vision As An Interactional Accomplishment
confidence: 99%
“…Margolis and Pauwels, 2011; Pink, 2012; Sidnell and Stivers, 2013; Stanczak, 2007; Van Leeuwen and Jewitt, 2001). 1 Most visual-method handbooks published during the last 5 to 10 years include a chapter on visual transcription conventions within EMCA (Ball and Smith, 2011; Forrester, 2011; Hindmarsh and Tutt, 2012). A common denominator for all these publications is the how of making visual transcriptions; only a few of the texts reflect on the video camera as a cultural tool (see Forrester, 2011).…”
Section: Visual Research In the Social Sciences
confidence: 99%
“…Some of the most productive contributions toward developing a sociology of seeing are associated with ethnomethodology (Ball 2003; Ball and Smith 2011). In 1994, Charles Goodwin published a groundbreaking paper titled “Professional Vision.” Goodwin seeks to characterize the kinds of practices that constitute professional vision as a distinct way of ordering visual experience.…”
Section: Toward a Sociology Of Seeing
confidence: 99%
“…5. On this distinction, see Ball and Smith 2011 and Goodwin 2004. It should be noted that talk can be a feature of activity systems in ways that are not strictly speaking “conversational.” Classroom lessons, lectures, and public address announcements, for example, are not conversational, but are important occasions for uses of talk.…”
confidence: 99%