Abstract: The standard procedure to determine the brain response from a multitrial evoked magnetoencephalography (MEG) or electroencephalography (EEG) data set is to average the individual trials of these data, time locked to the stimulus onset. When the brain responses vary from trial to trial, this approach is invalid. In this paper, a maximum-likelihood estimator is derived for the case that the recorded data contain amplitude variations. The estimator accounts for spatially and temporally correlated background noise that is superimposed on the brain response. The model is applied to a series of 17 MEG data sets of normal subjects, obtained during median nerve stimulation. It appears that the amplitude of the late components (30-120 ms) shows a systematic negative trend, indicating a weakening response over stimulation time. For the early components (20-35 ms), no such systematic effect was found. The model is furthermore applied to an MEG data set consisting of epileptic spikes of constant spatial distribution but varying polarity. For these data, the advantage of applying the model is that positive and negative spikes can be processed with a single model, thereby reducing the number of degrees of freedom and increasing the signal-to-noise ratio.
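The amplitude-variation model described in the abstract can be sketched as follows; the notation here is illustrative and not taken from the paper itself:

```latex
% Trial-by-trial model with a common response waveform and
% trial-specific amplitudes (illustrative notation):
%   x_k(t) = a_k \, s(t) + n_k(t), \qquad k = 1, \dots, K
% where
%   x_k(t)  : recorded signal in trial k,
%   a_k     : amplitude of the response in trial k,
%   s(t)    : common response waveform shared by all trials,
%   n_k(t)  : spatially and temporally correlated background noise.
\[
  x_k(t) = a_k\, s(t) + n_k(t), \qquad k = 1, \dots, K .
\]
```

Conventional averaging corresponds to the special case $a_k \equiv 1$; when the amplitudes $a_k$ differ across trials (for instance, a spike train with varying polarity, $a_k \in \{+1, -1\}$), the maximum-likelihood estimator fits $a_k$ and $s(t)$ jointly under the assumed correlated-noise model, which is what allows positive and negative spikes to be handled by a single model.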
At the Faculty of Informatics, Masaryk University, Brno, we have recently developed the AUDIS system. A description of the system can be found in [1], [2] and [3]. AUDIS is developed primarily as a multimodal support tool to help visually impaired students study various materials. For proper functionality of the system's inputs and outputs, we also need high-quality speech synthesis. Unfortunately, none is available for the Czech language. Therefore, we are developing a speech engine that allows us to produce high-quality Czech speech for certain limited domains, together with average-quality general Czech speech synthesis (where average means well comprehensible). Limited-domain speech synthesis will be used for frequently used speech outputs (e.g. navigation in a document, control of the system), while general speech synthesis will be available for arbitrary text. For these purposes we have developed an automatic recording system that allows us to collect and process large amounts of speech data. The basic principles of our speech synthesis, the recording system, and the selection and processing of speech segments are described in the first part of the paper. The second part of the paper deals with methods for choosing the best set of speech data to be recorded into the corpus and with speech data segmentation.
MULTIMODAL CONTROL AND OUTPUT OF THE AUDIS SYSTEM
The AUDIS system produces a combination of speech output, earcons, text and contrast graphical output, and can be controlled using voice commands, keyboard, braille display or mouse. The output is also used to provide the user with complex technical information (e.g. mathematical expressions, tables or the logical document structure) and with navigation information. Misrepresentation caused by low-quality speech output confuses the user and slows down communication with the system. Our new high-quality speech synthesis should help to resolve this situation.
BASIC SYNTHESIZER PRINCIPLES
The basic feature of our synthesizer, Popokatepetl, is the ability to concatenate segments of variable length. This allows us to optimize the corpus data for a particular topic and to use whole words or phrases for speech synthesis. This approach enhances the intelligibility and naturalness of the produced speech. It also allows us to use the synthesizer for reading
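The variable-length concatenation idea above can be sketched with a simple greedy longest-match strategy: cover the input text with the longest phrases available in the recorded corpus, and fall back to the general synthesizer for everything else. This is a minimal illustration only; the names (`corpus`, `select_segments`) and the greedy strategy are assumptions for the sketch, not the paper's actual algorithm.

```python
# Minimal sketch of variable-length unit selection, assuming a toy corpus
# that maps known phrases to pre-recorded segment identifiers. All names
# here are hypothetical and for illustration only.

def select_segments(text, corpus):
    """Greedily cover the input with the longest phrases found in the
    corpus; unmatched words are handed to the general synthesizer."""
    words = text.split()
    segments = []
    i = 0
    while i < len(words):
        # Try the longest candidate phrase starting at position i first.
        for j in range(len(words), i, -1):
            phrase = " ".join(words[i:j])
            if phrase in corpus:
                segments.append(("corpus", phrase))
                i = j
                break
        else:
            # No corpus match: fall back to general synthesis per word.
            segments.append(("general", words[i]))
            i += 1
    return segments

corpus = {"next page": "seg_017", "end of document": "seg_042"}
print(select_segments("go to next page", corpus))
# → [('general', 'go'), ('general', 'to'), ('corpus', 'next page')]
```

Preferring the longest available unit is what lets frequent control phrases (e.g. document-navigation commands) be played back as whole natural recordings, while rare text degrades gracefully to the general synthesizer.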