Personalized delivery of multimedia content over the Internet opens new business perspectives for future multimedia applications and thus plays an important role in the ongoing MPEG-7 and MPEG-21 multimedia standardization efforts. Based on these standards, next-generation multimedia services will be able to automatically prepare digital content before delivery according to the client's device capabilities, the network conditions, or even the user's content preferences. However, these services will have to deal with a variety of end-user devices and media formats, as well as with additional metadata, when adapting the original media resources. In parallel, an increasing number of commercial and open-source media transformation tools will become available that can exploit such descriptive metadata or handle new media formats; it is therefore unrealistic to expect a single tool to support all possible transformations. In this paper, we present a novel, fully knowledge-based approach for building such multimedia adaptation services that addresses the above-mentioned issues of openness, extensibility, and conformance with existing and upcoming standards. In our approach, the original media is transformed in multiple adaptation steps performed by an extensible set of external tools, where the construction of adequate adaptation sequences is solved by an Artificial Intelligence planning process. The interoperability issue is addressed by exploiting standardized Semantic Web Services technology, which allows us to express tool capabilities and execution semantics in a declarative, well-defined form. In this context, existing multimedia standards serve as a shared domain ontology.
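The planning idea can be illustrated with a minimal sketch. The tool names, media properties, and state representation below are hypothetical stand-ins for the declarative capability descriptions the abstract attributes to Semantic Web Services; the search itself is a plain breadth-first planner that chains tools whose preconditions are met until the target media description is reached.

```python
from collections import deque

# Hypothetical adaptation tools: each maps precondition properties ("pre")
# of the current media description to new properties ("eff"). In the
# paper's setting these would come from declarative tool descriptions.
TOOLS = {
    "transcode_to_h263": {"pre": {"format": "mpeg2"}, "eff": {"format": "h263"}},
    "downscale":         {"pre": {"size": "cif"},     "eff": {"size": "qcif"}},
    "grayscale":         {"pre": {"color": "yes"},    "eff": {"color": "no"}},
}

def plan(start, goal):
    """Breadth-first search for a sequence of adaptation steps that
    transforms the start media description into one satisfying the goal."""
    frontier = deque([(start, [])])
    seen = {tuple(sorted(start.items()))}
    while frontier:
        state, steps = frontier.popleft()
        if all(state.get(k) == v for k, v in goal.items()):
            return steps
        for name, tool in TOOLS.items():
            if all(state.get(k) == v for k, v in tool["pre"].items()):
                nxt = dict(state, **tool["eff"])
                key = tuple(sorted(nxt.items()))
                if key not in seen:
                    seen.add(key)
                    frontier.append((nxt, steps + [name]))
    return None
```

For example, asking for an H.263 QCIF version of an MPEG-2 CIF source yields the two-step sequence `["transcode_to_h263", "downscale"]` — the kind of multi-step adaptation chain the abstract describes.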
Multimedia streaming is becoming increasingly popular. Seamless video streaming in heterogeneous networks such as the Internet is almost impossible due to varying network conditions: streams must be adapted to the current network QoS. Temporal scalability is one of the most practical adaptation techniques because it is fast and easy to perform. Today's approaches simply drop frames from a video without spending much effort on finding an intelligent dropping behavior. This usually yields good adaptation results in terms of bandwidth consumption but suboptimal video quality within the given bounds. Our approach, QCTVA, analyzes video streams to achieve the qualitatively best temporal scalability. To this end, we introduce a data structure called the modification lattice, which represents all frame-dropping combinations within a sequence of frames. On the basis of the modification lattice, quality estimates for frame sequences can be computed. Moreover, a heuristic for fast and efficient quality computation in a modification lattice is presented. Experimental results illustrate that temporal video adaptation based on QCTVA information leads to better video quality than "usual" frame-dropping approaches. Furthermore, QCTVA provides frame priority lists for videos; based on these priorities, numerous adaptation techniques can increase their overall performance.
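A modification-lattice search can be sketched as follows. The quality function below is a toy stand-in (the paper's actual quality estimation is not given in the abstract and is content-dependent): it penalizes long runs of consecutively dropped frames, so evenly spread drops score better. The search enumerates one level of the lattice — all keep-sets that drop a fixed number of frames — and returns the best one.

```python
from itertools import combinations

def quality(kept, total):
    """Toy quality estimate: penalize each gap of dropped frames
    quadratically, so spread-out drops beat consecutive drops.
    (Assumption; the real QCTVA metric is content-dependent.)"""
    penalty, gap = 0, 0
    for i in range(total):
        if i in kept:
            penalty += gap * gap
            gap = 0
        else:
            gap += 1
    penalty += gap * gap  # account for a trailing gap
    return -penalty

def best_drop(total_frames, drops):
    """Search the lattice level that drops `drops` of `total_frames`
    frames and return the highest-quality set of kept frame indices."""
    frames = range(total_frames)
    return max(
        (set(kept) for kept in combinations(frames, total_frames - drops)),
        key=lambda kept: quality(kept, total_frames),
    )
```

Exhaustive enumeration is exponential in the sequence length, which is exactly why the abstract's heuristic for fast quality computation matters; this sketch only shows what the lattice ranges over.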