Precision Medicine implies a deep understanding of inter-individual differences in health and disease that are due to genetic and environmental factors. Acquiring such understanding requires the implementation of different types of artificial intelligence (AI) technologies that enable the identification of biomedically relevant patterns, facilitating progress towards individually tailored preventative and therapeutic interventions. Despite the significant scientific advances achieved so far, most of the currently used biomedical AI technologies do not account for bias detection. Furthermore, the design of the majority of algorithms ignores the sex and gender dimension and its contribution to health and disease differences among individuals. Failure to account for these differences will generate sub-optimal results and produce mistakes as well as discriminatory outcomes. In this review we examine the current sex and gender gaps in a subset of biomedical technologies used in relation to Precision Medicine. In addition, we provide recommendations to optimize their utilization to improve the global health and disease landscape and decrease inequalities.
In this paper, an overview of human-robot interactive communication is presented, covering verbal as well as non-verbal aspects. Following a historical introduction and a motivation towards fluid human-robot communication, ten desiderata are proposed, which provide an organizational axis for both recent and future research on human-robot communication. Then, the ten desiderata are examined in detail, culminating in a unifying discussion and a forward-looking conclusion.
Abstract—To build robots that engage in fluid face-to-face spoken conversations with people, robots must have ways to connect what they say to what they see. A critical aspect of how language connects to vision is that language encodes points of view. The meaning of my left and your left differs due to an implied shift of visual perspective. The connection of language to vision also relies on object permanence. We can talk about things that are not in view. For a robot to participate in situated spoken dialog, it must have the capacity to imagine shifts of perspective, and it must maintain object permanence. We present a set of representations and procedures that enable a robotic manipulator to maintain a "mental model" of its physical environment by coupling active vision to physical simulation. Within this model, "imagined" views can be generated from arbitrary perspectives, providing the basis for situated language comprehension and production. An initial application of mental imagery for spatial language understanding for an interactive robot is described.

Index Terms—Active vision, grounding, language, mental imagery, mental models, mental simulation, robots.

I. SITUATED LANGUAGE USE

In using language to convey meaning to listeners, speakers leverage situational context [1], [2]. Context may include many levels of knowledge ranging from the details of shared physical environments to cultural norms. As the degree of shared context decreases between communication partners, the efficiency of language also decreases, since the speaker is forced to explicate increasing quantities of information that could otherwise be left unsaid. A sufficient lack of common ground can lead to communication failures.

If machines are to engage in meaningful, fluent, situated spoken dialog, they must be aware of their situational context. As a starting point, we focus our attention on physical context.
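The perspective shift behind "my left" versus "your left" amounts to expressing the same world-frame point in two different egocentric frames. The following is a minimal illustrative sketch (not the authors' implementation; the 2-D poses, function names, and the face-to-face example are assumptions for illustration):

```python
import math

def to_observer_frame(obj_xy, observer_xy, observer_heading):
    """Express a world-frame 2-D point in an observer's egocentric frame.

    The egocentric frame has +x along the observer's heading and +y to
    the observer's left, so a positive y-coordinate means "on my left".
    """
    dx = obj_xy[0] - observer_xy[0]
    dy = obj_xy[1] - observer_xy[1]
    cos_h, sin_h = math.cos(observer_heading), math.sin(observer_heading)
    # Rotate the displacement by -heading to enter the egocentric frame.
    return (cos_h * dx + sin_h * dy, -sin_h * dx + cos_h * dy)

def is_on_left(obj_xy, observer_xy, observer_heading):
    """True if the object falls on the observer's left."""
    return to_observer_frame(obj_xy, observer_xy, observer_heading)[1] > 0

# A robot at the origin facing +x, and a person facing it from (2, 0):
# the same cup at (1, 1) is on the robot's left but on the person's right.
robot_view = is_on_left((1, 1), (0, 0), 0.0)        # True
person_view = is_on_left((1, 1), (2, 0), math.pi)   # False
```

The single rotation here stands in for the "imagined views from arbitrary perspectives" that the paper generates from its physical simulation.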
A machine that is aware of where it is, what it is doing, the presence and activities of other objects and people in its vicinity, and salient aspects of recent history, can use these contextual factors to interpret natural language.

In numerous applications of spoken language technologies, such as talking car navigation systems and speech-based control of portable devices, we envision machines that connect word meanings to the machine's immediate environment. For example, if a car navigation system could see landmarks in its vicinity based on computer vision, and anchor descriptive language to this visual perception, then the system would have a basis for generating contextually appropriate directions such as "Take a left turn immediately after the large red building." Consider also an assistive service robot that can lend a helping hand based on spoken requests from a human user. For the robot to properly interpret requests such as "Hand me the red cup and put it to the right of my plate," the robot must connect the meaning of verbs, nouns, adjectives, and spatial language to the robot's perceptual and action systems in a situationa...
Abstract—Our long-term objective is to develop robots that engage in natural language-mediated cooperative tasks with humans. To support this goal, we are developing an amodal representation and associated processes, which we call a grounded situation model (GSM). We are also developing a modular architecture in which the GSM resides in a centrally located module, around which there are language, perception, and action-related modules. The GSM acts as a sensor-updated "structured blackboard" that serves as a workspace with contents similar to a "theatrical stage" in the robot's "mind", which might be filled in with present, past or imagined situations. Two main desiderata drive the design of the GSM: first, "parsing" situations into ontological types and relations that reflect human language semantics, and second, allowing bidirectional translation between sensory-derived data/expectations and linguistic descriptions. We present an implemented system that allows a range of conversational and assistive behavior by a manipulator robot. The robot updates beliefs (held in the GSM) about its physical environment, the human user, and itself, based on a mixture of linguistic, visual and proprioceptive evidence. It can answer basic questions about the present or past and also perform actions through verbal interaction. Most importantly, a novel contribution of our approach is the robot's ability for seamless integration of both language- and sensor-derived information about the situation: for example, the system can acquire parts of situations either by seeing them or by "imagining" them through descriptions given by the user: "There is a red ball at the left". These situations can later be used to create mental imagery and sensory expectations, thus enabling the aforementioned bidirectionality.

I. ROBOTS, LANGUAGE AND MODULARITY

As robots grow in ability and complexity, natural language is likely to assume an increasingly central role in human-robot interaction.
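The "structured blackboard" idea — one store of typed entities that either vision or language can write into, and that language can later read out of — can be sketched in a few lines. This is a hypothetical toy, not the paper's GSM: the class names, the coarse symbolic positions, and the deliberately naive parser for sentences shaped like "There is a red ball at the left" are all assumptions for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    """One object in the situation model, tagged with its evidence source."""
    name: str
    color: str
    position: str   # coarse symbolic location, e.g. "left"
    source: str     # "vision" or "language"

@dataclass
class SituationModel:
    """Minimal structured blackboard: entities keyed by name,
    updatable from either sensory or linguistic evidence."""
    entities: dict = field(default_factory=dict)

    def update_from_vision(self, name, color, position):
        self.entities[name] = Entity(name, color, position, "vision")

    def update_from_language(self, utterance):
        # Toy parser for the fixed pattern "There is a <color> <name> at the <position>".
        words = utterance.lower().rstrip(".").split()
        color, name, position = words[3], words[4], words[-1]
        self.entities[name] = Entity(name, color, position, "language")

    def describe(self, name):
        e = self.entities[name]
        return f"a {e.color} {e.name} at the {e.position} (known via {e.source})"

gsm = SituationModel()
gsm.update_from_language("There is a red ball at the left")
gsm.update_from_vision("cup", "blue", "right")
```

Because "imagined" entities from language and perceived entities from vision land in the same store, a query like `gsm.describe("ball")` works identically for both — a crude stand-in for the bidirectionality the abstract describes.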
Our current work is part of a larger effort to develop conversational interfaces for interactive robots ([3], [6], [11], [8]). Robots that understand and use natural language may find application in entertainment, assistive, and educational domains. Such interactive robots are prime examples of systems where integration of numerous technologies in complex ways is required, and thus well-designed modularity is necessary. One of the main challenges that one faces when designing such a system is interfacing perceptual/motor modules with speech modules: existing natural language processing (NLP) systems cannot simply "plug and play".

One historical reason behind this incompatibility is that NLP and robotics have developed with relatively little interaction. NLP deals with the discrete, symbolic world of words and sentences, whereas robotics deals with the continuous and stochastic: one must confront the noisy, uncertain nature of physically embodied systems with sensory-motor grounded interaction. Current computational models of semantics used in NLP are variants o...