2011
DOI: 10.1145/1889681.1889688

Inferring colocation and conversation networks from privacy-sensitive audio with implications for computational social science

Abstract: New technologies have made it possible to collect information about social networks as they are acted and observed in the wild, instead of as they are reported in retrospective surveys. These technologies offer opportunities to address many new research questions: How can meaningful information about social interaction be extracted from automatically recorded raw data on human behavior? What can we learn about social networks from such fine-grained behavioral data? And how can all of this be done while protect…

Cited by 62 publications (47 citation statements)
References 44 publications (43 reference statements)

“…Call and SMS interactions, on the contrary, require explicit actions by the interacting sides. These results corroborate the findings of [94] that networks derived from co-location and face-to-face conversations may be quite different. Table 3 summarizes the results of the 16 experiments together with the results of the four ZeroR experiments.…”
Section: Experimental Setup and Results (supporting)
confidence: 90%
“…The first layer of the model infers voice existence and the second layer infers speech occurrence. This technique was adopted by [Choudhury and Basu 2004], Vibefones [Madan and Pentland 2006], StressSense, MeetingMediator [Kim et al. 2008], and [Wyatt et al. 2011]. Another technique, widely used by systems such as SpeakerSense and Auditeur [Nirjon et al. 2013], is to calculate the ZCR of an audio frame and then apply a classification method to infer whether the segment contains speech [Saunders 1996].…”
Section: Auditory (mentioning)
confidence: 99%
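To make the ZCR-based approach described in the excerpt above concrete, here is a minimal sketch in Python. It is not the implementation used by SpeakerSense, Auditeur, or [Saunders 1996]; the function names, the 32 ms frame length, and the fixed threshold are illustrative assumptions, and the cited systems apply a trained classifier to ZCR-derived features rather than a single cutoff.

```python
import numpy as np

def zero_crossing_rate(frame: np.ndarray) -> float:
    """Fraction of adjacent sample pairs in a frame whose signs differ."""
    signs = np.sign(frame)
    signs[signs == 0] = 1  # treat exact zeros as positive to avoid spurious crossings
    return float(np.mean(signs[1:] != signs[:-1]))

def speech_like_frames(audio: np.ndarray, sample_rate: int,
                       frame_ms: int = 32, zcr_threshold: float = 0.1) -> list[bool]:
    """Split audio into fixed-length frames and flag each one as speech-like.

    Voiced speech tends to produce a low ZCR, while silence and broadband
    noise tend to produce a higher one; the threshold here is illustrative,
    whereas the cited systems train a classifier over ZCR-based features.
    """
    frame_len = int(sample_rate * frame_ms / 1000)
    decisions = []
    for start in range(0, len(audio) - frame_len + 1, frame_len):
        zcr = zero_crossing_rate(audio[start:start + frame_len])
        decisions.append(zcr < zcr_threshold)
    return decisions

# Example: flag frames in one second of synthetic noise (illustrative only).
fake_audio = np.random.randn(16000).astype(np.float32)
print(speech_like_frames(fake_audio, sample_rate=16000)[:5])
```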
“…Some recent examples include automatically inferring co-location and conversational networks [21], linking social diversity and economic progress [10], automatic activity and event classification for mass market phones [17], identifying transportation modes [19], as well as feedback tools for improving health and fitness [8] and for modeling human mobility patterns [13].…”
Section: Introduction (mentioning)
confidence: 99%