This paper presents a dataset of body-sensor traces and corresponding videos from several professional soccer games captured in late 2013 at the Alfheim Stadium in Tromsø, Norway. Player data, including field position, heading, and speed, are sampled at 20 Hz using the highly accurate ZXY Sport Tracking system. Additional per-player statistics, such as total distance covered and distance covered in different speed classes, are also included at a 1 Hz sampling rate. The provided videos are in high definition and captured using two stationary camera arrays mounted at an elevated position above the tribune area close to the center of the field. The camera arrays are configured to cover the entire soccer field, and the cameras can be used individually or combined into a stitched panorama video. This combination of body-sensor data and videos enables computer-vision algorithms for feature extraction, object tracking, background subtraction, and the like to be tested against the ground truth contained in the sensor traces.
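The exact trace format is not reproduced here, but as an illustration of how the per-player statistics relate to the 20 Hz position samples, the sketch below computes total distance covered from a sequence of (x, y) field coordinates. The metre units and the tuple layout are assumptions for this example, not the dataset's documented schema:

```python
import math

def total_distance(samples):
    """Sum of straight-line distances between consecutive position
    samples. `samples` is an ordered list of (x, y) tuples, assumed
    to be in metres at a fixed sampling rate (e.g., 20 Hz)."""
    return sum(
        math.hypot(x2 - x1, y2 - y1)
        for (x1, y1), (x2, y2) in zip(samples, samples[1:])
    )
```

At 20 Hz the inter-sample distances are small, so this piecewise-linear sum is a close approximation to the true path length.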
The importance of winning has increased the role of performance analysis in the sports industry, and this underscores how statistics and technology keep changing the way sports are played. Thus, this is a growing area of interest, both from a computer-systems view in managing the technical challenges and from a sport-performance view in aiding the development of athletes. In this respect, Bagadus is a real-time prototype of a sports analytics application using soccer as a case study. Bagadus integrates a sensor system, a soccer analytics annotation system, and a video processing system using a video camera array. A prototype is currently installed at Alfheim Stadium in Norway, and in this article, we describe how the system can be used in real time to play back events. The system supports both stitched panorama video and camera switching modes and creates video summaries based on queries to the sensor system. Moreover, we evaluate the system from a systems point of view, benchmarking different approaches, algorithms, and trade-offs, and show how the system runs in real time.
Bagadus is a prototype of a soccer analysis application that integrates a sensor system, a video camera array, and soccer analytics annotations. The current prototype is installed at Alfheim Stadium in Norway and provides a large set of new functions compared to existing solutions. One important feature is the ability to automatically extract video events and summaries from the games, an operation that traditionally consumes a large amount of time. In this demo, we demonstrate how our integration of subsystems enables several types of summaries to be generated automatically, and we show that the video summaries are displayed with a response time of around one second.
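The abstracts describe generating video summaries from queries against the sensor system. As a hedged sketch of that idea (the threshold, sampling rate, and minimum duration below are illustrative assumptions, not Bagadus parameters), one can scan a player's 20 Hz speed trace for sustained sprints and return the time intervals that a video subsystem would then cut into clips:

```python
def sprint_events(speeds, hz=20, threshold=7.0, min_dur=1.0):
    """Return (start_s, end_s) intervals, in seconds, where the
    sampled speed (m/s) stays above `threshold` for at least
    `min_dur` seconds. `speeds` is a list sampled at `hz` Hz."""
    events, start = [], None
    for i, s in enumerate(list(speeds) + [0.0]):  # sentinel closes a trailing run
        if s > threshold and start is None:
            start = i
        elif s <= threshold and start is not None:
            if (i - start) / hz >= min_dur:
                events.append((start / hz, i / hz))
            start = None
    return events
```

Each returned interval maps directly to a video segment, which is how a sensor query can drive automatic summary extraction without manual scrubbing.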
Over the last few years, video streaming has become one of the most dominant Internet services. Due to the increased availability of high-speed Internet access, multimedia services are becoming more interactive. Examples of such applications are cloud gaming (OnLive, 2014) and systems where users can interact with high-resolution content (Gaddam et al., 2014). Recently, programmable hardware video encoders have been built into commodity hardware such as CPUs and GPUs. One of these encoders is evaluated in a scenario where individual streams are delivered to the end users. The results show that the visual video quality and the frame size of the hardware-based encoder are comparable to a software-based approach. To evaluate a complete system, a proposed streaming pipeline has been implemented into Quake III. It was found that running the game on a remote server and streaming the video output to a client web browser located in a typical home environment is possible and enjoyable. The interaction latency is measured to be less than 90 ms, which is below what is reported for OnLive in a similar environment.
Over the last few years, video streaming has become one of the most dominant Internet services. A trend now is that, due to the increased availability of high-speed Internet access, multimedia services are becoming more interactive and immersive. Examples of such applications are cloud gaming [4] and systems where users can interact with high-resolution content [1]. Over the last few years, hardware video encoders have been built into commodity hardware. We evaluate one of these encoders in a scenario where individual streams are delivered to the end users. Our results show that we can reduce the CPU time spent on video processing by almost half, while also greatly reducing the power consumption of the system. We also compare the visual video quality and the frame size of the hardware-based encoder, and we find no significant difference compared to a software-based approach.
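The encoder comparisons above report visual quality differences; a common objective metric for such comparisons is PSNR between a reference frame and a decoded frame. The abstracts do not name the metric or tooling used, so the sketch below is only a generic illustration of how per-frame quality can be scored:

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio (dB) between two frames given as
    equally shaped uint8 arrays. Higher is better; identical frames
    yield infinity."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)
```

Scoring every decoded frame from the hardware and software encoders with such a metric, alongside the per-frame byte sizes, is one way to substantiate a "no significant difference" claim.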