2018
DOI: 10.1016/j.diin.2018.05.004
Laying foundations for effective machine learning in law enforcement. Majura – A labelling schema for child exploitation materials

Cited by 21 publications (8 citation statements)
References 16 publications
“…Facebook and major content providing and hosting services have taken steps to strengthen their algorithms; for example, in August 2019, Facebook open-sourced their algorithms for video and image detection. 119 There have been no significant developments or improvements in hashing or AI detection since the Christchurch Massacre. 120 The current AI technology and human-based frameworks are unable to remove all offending content after notification, even over several weeks.…”
Section: Impossible Burden
confidence: 99%
“…Studies
Online detection: [15], [16], [17], [18], [19], [20], [21], [22], [23], [24], [25], [26], [27], [28], [29], [30], [31], [32], [33], [34], [35], [36], [37], [38], [9], [39], [40], [41], [42], [43], [8], [44], [45], [46], [47], [48], [49]
Offline detection: [50], [51], [52], [53], [54], [7], [55], [56]
Safety: [57], [13], [58], [59], [60], [61],…”
Section: Category
confidence: 99%
“…These studies built ML models that received images or videos [8], written text [9], or a combination of them [28] as input, to automatically classify the content as abusive or not abusive towards women or children. ML used in this context made it possible to quickly separate large amounts of abusive Internet content from its non-abusive counterpart, a task that is often time-consuming [42] or emotionally taxing [46] when done manually. In order to detect and eliminate VAW and VAC, it is necessary to address explicit cases of violence, but also the risk factors that lead to their occurrence.…”
Section: Online Detection
confidence: 99%
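The binary "abusive / not abusive" classification described in the statement above can be illustrated with a minimal sketch. This is not the method of any cited study; it is a self-contained multinomial Naive Bayes text classifier over a toy, invented training set, included only to show the general shape of the task (content in, binary label out).

```python
import math
from collections import Counter

# Toy training data; texts and labels are invented placeholders for illustration.
train = [
    ("you are worthless and deserve to be hurt", 1),  # 1 = abusive
    ("i will find you and hurt you", 1),
    ("great photo thanks for sharing", 0),             # 0 = not abusive
    ("looking forward to the meetup", 0),
]

def tokenize(text):
    return text.lower().split()

class NaiveBayes:
    """Multinomial Naive Bayes with Laplace smoothing."""

    def fit(self, data):
        self.word_counts = {0: Counter(), 1: Counter()}
        self.label_counts = Counter()
        for text, label in data:
            self.label_counts[label] += 1
            self.word_counts[label].update(tokenize(text))
        self.vocab = {w for c in self.word_counts.values() for w in c}
        return self

    def predict(self, text):
        total = sum(self.label_counts.values())
        scores = {}
        for label in (0, 1):
            # log P(label) + sum of log P(word | label), with add-one smoothing
            score = math.log(self.label_counts[label] / total)
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for w in tokenize(text):
                score += math.log((self.word_counts[label][w] + 1) / denom)
            scores[label] = score
        return max(scores, key=scores.get)

clf = NaiveBayes().fit(train)
print(clf.predict("thanks for the great photo"))  # prints 0 (not abusive)
```

Real systems in the cited studies operate on far larger corpora and richer models (including image and video inputs), but the pipeline shape, supervised training followed by automatic labelling at scale, is the same point the statement makes about replacing time-consuming manual review.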
“…These matters may remain subject to further judicial proceedings; therefore, no further details of their provenance will be provided. For tests benefitting from separate corpora, we utilised about 322,490 images of lawful pornography first used by [6], plus a partial (approx. 596,000 images) download of the Google Open Images Dataset [7].…”
Section: Test Corpus
confidence: 99%