Proceedings of the Fourteenth EuroSys Conference, 2019
DOI: 10.1145/3302424.3303971

VStore

Abstract: We present VStore, a data store for supporting fast, resource-efficient analytics over large archival videos. VStore manages video ingestion, storage, retrieval, and consumption. It controls video formats along the video data path. It is challenged by i) the huge combinatorial space of video format knobs; ii) the complex impacts of these knobs and their high profiling cost; iii) optimizing for multiple resource types. It explores an idea called backward derivation of configuration: in the opposite direction alo…
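The abstract's central idea, backward derivation of configuration, can be illustrated with a small sketch: starting from the consumption (analytics) requirement, each upstream stage of the video data path picks the cheapest format that still covers the stage downstream of it. The knob names, format table, and cost values below are invented for illustration only; they are not VStore's actual API or configuration space.

```python
# Hypothetical sketch of backward derivation of configuration.
# Candidate video formats, ordered from cheapest to richest.
# (Invented knobs/costs for illustration; not VStore's real format space.)
FORMATS = [
    {"res": 180, "fps": 5,  "cost": 1},
    {"res": 360, "fps": 15, "cost": 4},
    {"res": 720, "fps": 30, "cost": 16},
]

def satisfies(fmt, need):
    """A format satisfies a requirement if it is at least as rich."""
    return fmt["res"] >= need["res"] and fmt["fps"] >= need["fps"]

def derive_backward(consumption_need):
    """Walk the data path backward: consumption -> retrieval -> storage
    -> ingestion. Each stage picks the cheapest format that can still
    feed the format chosen one stage later (downstream)."""
    plan = {}
    need = consumption_need
    for stage in ["consumption", "retrieval", "storage", "ingestion"]:
        fmt = min((f for f in FORMATS if satisfies(f, need)),
                  key=lambda f: f["cost"])
        plan[stage] = fmt
        need = fmt  # the next (upstream) stage must produce this format
    return plan

plan = derive_backward({"res": 360, "fps": 10})
# Every stage settles on the cheapest format covering 360p @ 10 fps.
```

The point of deriving backward rather than forward is that the analytics consumer's accuracy/speed demand is the binding constraint; upstream knobs only need to be rich enough to preserve what the downstream stage requires, which prunes the combinatorial knob space.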


Cited by 45 publications (3 citation statements)
References 41 publications
“…Modern systems for querying images and video content (e.g., AWStream [25]) rely on extraction techniques based on deep convolutional neural networks (CNNs) due to their accuracy in common computer vision tasks such as classification and object detection (see Figure 3). In such systems, visual data is collected and persisted in stable storage, then analysed either by domain-specific learning models or accelerated by lightweight filters following a store-and-query processing model (e.g., BlazeIt [26], DeepLens [27], NoScope [28], Optasia [29], Sprocket [30], Tahoma [31], VideoChef [32], and VStore [33]). The most recent and competitive object detection models (Faster R-CNN [34], SSD, YOLOv3 [35], and RetinaNet) have proven suitable for image recognition, achieving high-performance results.…”
Section: B. Visual Analytics
confidence: 99%
“…To reduce the cost of running heavy neural networks on videos, model-level optimizations have been developed to make predictions faster while preserving accuracy [13,15]. Other work [8,34] focuses on the storage and decoding of video data, which can also be a bottleneck for video analytics. [18] envisions a new query system to address the challenges posed by autonomous vehicle (AV) data.…”
Section: Related Work
confidence: 99%
“…Correlating abrupt changes in the influence pattern to lighting (e.g., as opposed to object occlusion) can be very challenging, if not impossible. There are also scenarios where an engineer might only have access to prediction/explanation logs but not the actual source video, e.g., for privacy reasons [33]. In these cases, an engineer might understand which pixels contribute to a prediction but not what those pixels represent.…”
Section: Introduction
confidence: 99%