SUMMARY: In the last 10-15 years the ILAE Commission on Classification and Terminology has presented proposals to modernize the current ILAE Classification of Epileptic Seizures and Epilepsies. These proposals were discussed extensively in a series of articles published recently in Epilepsia and Epilepsy Currents. There is almost universal consensus that the availability of new diagnostic techniques, as well as a modern understanding of epilepsy, calls for a complete revision of the Classification of Epileptic Seizures and Epilepsies. Unfortunately, however, the Commission is still not prepared to take a bold step forward and completely revisit our approach to classifying epileptic seizures and epilepsies. In this manuscript we critically analyze the Commission's current proposals and make suggestions for a classification system that reflects modern diagnostic techniques and our current understanding of epilepsy.
This work presents a system that uses the Microsoft Kinect to enable Point&Click interaction for the control of appliances in smart environments. A backend server determines through collision detection which device the user is pointing at and sends the corresponding control interface to the user's smartphone. Any commands the user issues are then sent back to the server, which in turn controls the appliance. New devices can be registered either manually or using markers such as QR codes, which identify them and provide their position at the same time. The video demonstrates the interaction concept and our technical implementation.
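The pointing interaction described above can be sketched as a ray-sphere intersection test: the user's pointing ray (e.g., derived from two tracked skeleton joints) is tested against bounding spheres around registered devices. This is an illustrative sketch, not the system's actual implementation; the joint positions, device coordinates, and the `radius` parameter are invented for the example.

```python
import numpy as np

def pointed_device(hand, elbow, devices, radius=0.3):
    """Return the name of the nearest device whose bounding sphere the
    pointing ray (elbow -> hand, extended forward) intersects, or None."""
    direction = hand - elbow
    direction = direction / np.linalg.norm(direction)
    best, best_t = None, float("inf")
    for name, center in devices.items():
        to_center = center - hand
        t = np.dot(to_center, direction)  # distance along the ray
        if t < 0:
            continue                      # device is behind the user
        closest = hand + t * direction    # point on ray nearest the device
        if np.linalg.norm(center - closest) <= radius and t < best_t:
            best, best_t = name, t
    return best

# Invented example positions; in practice these would come from the
# Kinect skeleton stream and the device registration database.
devices = {"lamp": np.array([2.0, 1.0, 0.0]),
           "tv":   np.array([0.0, 1.0, 3.0])}
# User points straight ahead (along +z) toward the TV.
hit = pointed_device(hand=np.array([0.0, 1.0, 0.3]),
                     elbow=np.array([0.0, 1.0, 0.0]),
                     devices=devices)  # → "tv"
```

Taking the nearest intersected sphere resolves cases where the ray passes through several devices; the server would then push the interface for `hit` to the smartphone.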
Braille documents play an important role in collaboration with blind people. Since learning Braille is difficult for sighted people, a technical solution for reading Braille would be beneficial. Such a system should be mobile and easy to use in everyday situations. Because a mobile system cannot rely on a controlled environment, modern computer vision algorithms are required. We therefore present a mobile Optical Braille Recognition system based on state-of-the-art deep learning, implemented as an app and a server application.
We present an architecture for natural language processing that parses an input sentence incrementally and merges information about its structure with a representation of visual input, thereby changing the results of parsing. At each step of incremental processing, the elements of the context representation are evaluated for how well they match the content of the sentence fragment processed so far. The information contained in the best-matching subset then influences the result of parsing the partial sentence. As processing progresses and the sentence is extended by new words, the context is searched for new information consistent with the expanded language input. This incremental approach to information fusion is highly adaptable with regard to the integration of dynamic knowledge extracted from a constantly changing environment.

I. MOTIVATION

Information gained from sensory perception of the surroundings of any agent, be it natural or artificial, requires the fusion of modality-specific information. This is especially relevant whenever we address such an agent through a natural language interface and refer to things perceived by visual sensors such as cameras. The system introduced in this paper merges information represented by the analyses of an incremental parser for German natural language sentences with knowledge from a representation of the visual context. Information integration of this kind is realized as a fusion of data in an abstract, non-metrical space based on the structural properties of the input from both modalities. This integration of external information can lead to interpretations of a sentence fragment that differ from an analysis based solely on a language model. Any system processing natural language instructions that refer to processes in a real-life environment has to be able to cope with highly dynamic, evolving, and ambiguous input from several modalities.
To do this, several requirements have to be fulfilled:

Firstly, the NLP component should produce a structural representation of its interpretation. This result is necessary to link the content of an utterance with non-linguistic context information. As purely syntactic properties of language are difficult to link to the content of a visual scene, any analysis of this kind needs to include a semantic interpretation of the given linguistic information.

Secondly, a system adequate for human-like interaction needs to parse its input in a human-like fashion: processing of a sentence does not start after the whole sentence has been received but proceeds incrementally, starting right after the first word becomes available. Each incremental step (i.e., whenever additional language input is received) should produce a partial structural output that can immediately be used to link linguistic and visual information.

Thirdly, in order to integrate visual input, the system has to provide interfaces to external information sources that contribute cues to be fused with its language interpretations. An interface of this kin...
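The incremental matching described above can be illustrated with a toy sketch: as each word is processed, the partial interpretation accumulates semantic features, which are scored against elements of the visual context by feature overlap. This is a simplified illustration under invented data structures, not the paper's actual representation; the feature sets and object identifiers are made up.

```python
# Illustrative sketch: at each incremental step, the partial interpretation
# of the sentence fragment is matched against context elements, and the
# best-matching element can then re-rank the parse hypotheses.

def best_context_match(fragment_features, context):
    """Return the context element sharing the most features with the
    current fragment (ties broken by list order)."""
    return max(context, key=lambda e: len(fragment_features & e["features"]))

# Invented visual-context elements with semantic feature sets.
context = [
    {"id": "cup1",  "features": {"graspable", "red", "on_table"}},
    {"id": "ball1", "features": {"graspable", "blue", "rolling"}},
]

# Features accumulate as each word of the fragment is processed,
# e.g. "take ..." -> "take the red ...".
steps = [
    {"graspable"},
    {"graspable", "red"},
]
matches = [best_context_match(f, context)["id"] for f in steps]
```

Note how the first step is ambiguous (both objects are graspable), while the second word narrows the match to `cup1`; in the full system, this disambiguation feeds back into the parser's analysis of the subsentence.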
We present a system for integrating knowledge about complex visual scenes into the process of natural language comprehension. The implemented system is able to choose a scene of reference for a natural language sentence from a large set of scene descriptions. This scene is then used to influence the analysis of the sentence produced by a broad-coverage language parser. In addition, the objects and actions referred to by the sentence are visualized by a saliency map, derived from the bi-directional influence of top-down and bottom-up information on a model of visual attention, which highlights the regions most likely to attract the attention of an observer.
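The combination of top-down and bottom-up cues in a saliency map can be sketched as a weighted sum of two maps followed by normalization. This is a minimal sketch in the spirit of the model described above, not its actual attention mechanism; the maps, the mixing weight `alpha`, and the grid values are invented.

```python
import numpy as np

def combined_saliency(bottom_up, top_down, alpha=0.5):
    """Weighted combination of a bottom-up feature map and a top-down
    relevance map, normalized to [0, 1]."""
    s = alpha * bottom_up + (1 - alpha) * top_down
    return (s - s.min()) / (s.max() - s.min() + 1e-9)

# Invented 2x2 example maps.
bottom_up = np.array([[0.1, 0.9],
                      [0.2, 0.3]])   # e.g. contrast/colour conspicuity
top_down  = np.array([[0.0, 1.0],
                      [0.0, 0.0]])   # region referred to by the sentence
saliency = combined_saliency(bottom_up, top_down)
peak = np.unravel_index(saliency.argmax(), saliency.shape)  # → (0, 1)
```

Here both cues agree on the upper-right cell, so the normalized map peaks there; in the full model, the top-down map would be derived from the parsed sentence and the bottom-up map from image features.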