The prototype of a system to assist physicians in the differential diagnosis of lymphoproliferative disorders of blood cells from digitized specimens is presented. The user selects a region of interest (ROI) in the image, which is then analyzed with a fast, robust color segmenter. Queries in a database of validated cases can be formulated in terms of shape (similarity-invariant Fourier descriptors), texture (multiresolution simultaneous autoregressive model), color (L*u*v* space), and area, all derived from the delineated ROI. The uncertainty of the segmentation process (obtained through a numerical method) determines the accuracy of the shape description (number of Fourier harmonics). Tenfold cross-validated classification over a database of 261 color 640 × 480 images was implemented to assess the system performance. The ground truth was obtained through immunophenotyping by flow cytometry. To provide a natural man-machine interface, most input commands are bimodal: either by mouse or by voice. A speech synthesizer provides feedback to the user. All the employed computational modules are context independent, and thus the same system can be used in a large variety of application domains.
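The similarity-invariant Fourier descriptors mentioned above can be sketched as follows. This is only an illustrative implementation under common conventions (complex-valued contour, magnitude normalization), not the authors' exact formulation; the function name and parameters are hypothetical.

```python
import numpy as np

def fourier_descriptors(boundary, n_harmonics=10):
    """Similarity-invariant Fourier descriptors of a closed contour.

    boundary: (N, 2) array of (x, y) points along the ROI outline.
    Returns n_harmonics descriptor magnitudes that are invariant to
    translation, scale, rotation, and choice of starting point.
    """
    z = boundary[:, 0] + 1j * boundary[:, 1]  # contour as complex samples
    coeffs = np.fft.fft(z)
    coeffs[0] = 0.0                  # drop DC term -> translation invariance
    mags = np.abs(coeffs)            # magnitudes -> rotation/start-point invariance
    mags = mags / mags[1]            # normalize by first harmonic -> scale invariance
    return mags[1:1 + n_harmonics]
```

Because the descriptor keeps only normalized coefficient magnitudes, two ROIs related by a similarity transform yield the same feature vector; the number of harmonics retained would be driven by the segmentation uncertainty, as the abstract states.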
Recent developments in networking and computing have enabled collaborative biomedical engineering research by geographically separated participants. One of the most promising goals is to use these technologies to extend human intellectual capabilities in medical decision making. These emerging technologies are poised to drastically reduce healthcare costs by providing service at remote locations. They also increase diagnostic capacity, since information is made available to experts at any location. In this paper, we propose a novel application of a recently developed interactive and distributed system to medical consultation and education. Our approach builds on the notion that the interactive and distributive capabilities of the system are crucial for medical consultation and education. The presented application uses a multiuser, collaborative environment with multimodal human/machine communication in the dimensions of sight, sound, and touch. The experimental setup, consisting of two user stations, and the multimodal interfaces, including sight (eye-tracking), sound (automatic speech), and touch (microbeam pen), were tested and evaluated. The system uses a collaborative workspace as a common visualization space. Users communicate with the application through a fusion agent by eye-tracking, speech, and microbeam pen. Audio/video teleconferencing is also included so that the radiologists can communicate with each other while they are working on the mammograms. The system used in this study has three software agents: a fusion agent, a conversational agent, and an analytic agent. The fusion agent interprets multimodal commands by integrating the multimodal inputs. The conversational agent answers the user's questions, detects human-related or semantic errors, and notifies the user about the results of the image analysis. The analytic agent enhances the digitized images using a wavelet denoising algorithm when requested by the user.
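The abstract does not specify which wavelet denoising algorithm the analytic agent uses; a minimal illustrative version is single-level Haar decomposition with soft thresholding of the detail coefficients, shown here in 1-D. The function name, the threshold parameter, and the choice of Haar basis are all assumptions for illustration only.

```python
import numpy as np

def haar_denoise(signal, threshold):
    """One-level Haar wavelet soft-threshold denoising (1-D sketch).

    An illustrative stand-in for the wavelet denoising step, not the
    authors' algorithm: decompose, shrink the detail coefficients
    toward zero, and reconstruct.
    """
    x = np.asarray(signal, dtype=float)
    assert x.size % 2 == 0, "length must be even for one Haar level"
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    # soft thresholding shrinks small (noise-dominated) details to zero
    d = np.sign(d) * np.maximum(np.abs(d) - threshold, 0.0)
    y = np.empty_like(x)                   # inverse Haar transform
    y[0::2] = (a + d) / np.sqrt(2)
    y[1::2] = (a - d) / np.sqrt(2)
    return y
```

In practice an image would be decomposed over several levels in 2-D, and the threshold would be chosen from a noise estimate; this sketch only shows the shrink-and-reconstruct principle.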
To show how well the system performs in practice, we used it for medical consultation on mammograms. Results also show that the relevant information about the region of interest (ROI) of the mammograms chosen by the users is extracted automatically and used to enhance the mammograms.
We demonstrate the prototype of an image-understanding-based system [2] to support decision making in clinical pathology. The system employs all four major low-level vision cues (shape, texture, color, metric measures) in content-based retrieval of visual information. The reliability of the central module of the system, the fast color segmenter, makes on-line analysis of the query image possible. The user interface is bimodal (speech and mouse input), allowing natural communication with the system.