Ambient Intelligence (AmI) is a new paradigm in which environments are sensitive and responsive to the presence of people. It is of increasing importance in multimedia applications, which frequently rely on sensors to provide useful information to the user. In this context, multimedia applications must adapt and personalize both content and interfaces in order to reach acceptable levels of context-specific quality of service for the user and to make content available anywhere and at any time. The next step is to make content available to everybody, overcoming the existing access barriers for users with specific needs and adapting to different platforms, so that content becomes fully usable and accessible. Appropriate access to video content, for instance, is not always possible due to the technical limitations of traditional video packaging, transmission and presentation, which restrict the flexibility with which subtitles and audio descriptions can be adapted to different devices, contexts and users. New Web standards built around HTML5 enable richer applications with better adaptation and personalization facilities, and thus seem more suitable for accessible AmI environments. This work presents a video subtitling system that enables the customization, adaptation and synchronization of subtitles across different devices and multiple screens. The benefits of HTML5 applications for building the solution are analyzed along with their current platform support. Moreover, examples of the use of the application in three different cases are presented. Finally, the user experience of the solution is evaluated.
Abstract. Following EU directives on Media Access and Ambient Assisted Living, the broadcasting industry needs to introduce new services in order to guarantee access to all citizens. The article and its conclusions are part of the EU project DTV4ALL, which focuses on possible broadcasting scenarios for achieving barrier-free television for those with visual impairments. Five enhanced Audio Description (AD) scenarios were proposed and evaluated: 1) live streaming Internet TV with AD, 2) AD reception in a group situation, 3) video on demand over a set-top box, 4) video on PC and 5) podcasts. User evaluation concerning the usefulness, quality and usability of the services was carried out using questionnaires. The results show that these emerging AD services are not only technically viable but also positively rated by users. Implementation of these services will provide improved access to content, making TV accessible for all.
The diversity of Interactive Digital Television (iTV) platforms has generated a diversity of languages, middlewares, technologies and authoring tools that hinder both the maintenance of services across existing iTV devices and business-to-business (B2B) interchange. This paper presents a new architecture to unify the design of interactive TV (iTV) services across the multiple existing formats and devices. The proposed architecture is based on the DVB-PCF (Digital Video Broadcasting Portable Content Format) standard. It enables platform-independent iTV service description through the specification of the viewer's experience rather than its detailed implementation. This feature makes DVB-PCF a suitable format to be transcoded into any specific target platform format. However, the standard does not specify the scaling techniques to be used when mapping coordinates from the reference screen to the device display in cases where the resolutions differ. In this work we manage different service layouts through the automatic scaling of PCF documents and a subsequent layout revision with the PCF Authoring tool. Furthermore, we present a PCF viewer tool to visualize and validate generated and imported PCF services. Finally, as an example, we show the use of a transcoder of PCF descriptions into a Web-based format, which allows the generation of iTV applications optimized for a specific target platform.