Images of chemical molecules can be produced, manipulated, simulated, and analyzed using sophisticated chemical software. However, when such images are published in the scientific literature, all of their chemical significance is lost. Although a human expert can easily analyze an image of a chemical molecule, the image cannot be fed back into chemical software and thus loses much of its potential use. We have developed a system that automatically reconstructs the chemical information associated with images of chemical molecules, rendering them computer readable. We have benchmarked our system against a commercially available product and have also tested it on chemical databases of several thousand images, with very encouraging results.
Work-in-Progress Track: Services Coordination. This paper presents an approach for observing and reacting to the execution of service coordinations in order to enforce the NFP (non-functional property) policies specified by the coordination designer. By associating policies with a service-based application running in a dynamic Web environment, it is possible to attach personalized behaviour to it: for example, atomic integration of information retrieved from different social network services, or automatic generation of an integrated view of the operations executed across different social networks.
The emergence of new hardware architectures and the continuous production of data open new challenges for data management. It is no longer pertinent to reason with respect to a predefined set of resources (i.e., computing, storage, and main memory). Instead, it is necessary to design data processing algorithms and processes assuming unlimited resources via the "pay-as-you-go" model. Under this model, resource provisioning must weigh the economic cost of the processes against the use and parallel exploitation of available computing resources. Consequently, new methodologies, algorithms, and tools for querying, deploying, and programming data management functions must be provided in scalable and elastic architectures that can cope with the characteristics of Big Data aware systems (intelligent systems, decision making, virtual environments, smart cities, drug personalization). These functions must respect QoS properties (e.g., security, reliability, fault tolerance, dynamic evolution, and adaptability) and behavioral properties (e.g., transactional execution) according to application requirements. Mature and novel system architectures propose models and mechanisms for adding these properties to new, efficient data management and processing functions delivered as services. This paper gives an overview of the different architectures in which efficient data management functions can be delivered to address Big Data processing challenges.