Today's robots are eager to leave constrained industrial environments and embrace unexplored, unstructured areas for widespread real-world applications as service and social robots. Beyond these new physical frontiers, they must also face human ones. This implies considering human-robot interaction from the very beginning of the design: the ability of a robot to recognize users' emotions and, to some degree, react and "behave" appropriately could play a fundamental role in its integration into society. However, this capability is still far from being achieved. Over the past decade, several attempts have been made to deploy automata in applications outside industry, but very few have considered the emotional state of users in the robot's behavioural model, since doing so raises questions such as: How should human emotions be modelled to correctly represent a user's state of mind? Which sensing modalities and classification methods are most feasible for obtaining this knowledge? And which applications are most suitable for a robot with such sensitivity? In this context, this paper provides a general overview of recent attempts to enable robots to recognize human emotions and interact appropriately.
Airborne LiDAR has produced large amounts of data for archaeological research over the past decade, and labeling this type of archaeological data is a tedious process. We used a data set from the Pacunam LiDAR Initiative survey of the lowland Maya region in Guatemala. The data set contains ancient Maya structures that were manually labeled and, to a large extent, ground verified. We built and compared two deep-learning models for semantic segmentation, U-Net and Mask R-CNN, on two tasks: identifying areas of ancient construction activity, and identifying the remnants of ancient Maya buildings. The U-Net-based model performed better in both tasks, correctly identifying 60–66% of all objects and 74–81% of medium-sized objects. The quality of the resulting predictions was evaluated using a variety of quantifiers. Furthermore, we discuss the problems of re-purposing archaeological-style labeling to produce valid machine-learning training sets. Finally, we outline the value of these models for archaeological research and present a road map toward a useful decision-support system for recognizing ancient objects in LiDAR data.
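As a rough illustration of how an object-level detection rate (like the 60–66% figure above) can be derived from a semantic-segmentation output, the sketch below counts ground-truth objects sufficiently covered by the predicted mask. The label map, coverage criterion, and 0.5 threshold are illustrative assumptions, not the paper's exact evaluation protocol.

```python
import numpy as np

def object_recall(gt_labels: np.ndarray, pred_mask: np.ndarray,
                  min_overlap: float = 0.5) -> float:
    """Fraction of ground-truth objects whose pixels are covered by the
    predicted binary mask to at least `min_overlap` (hypothetical criterion).

    gt_labels : integer array, 0 = background, k > 0 = object k
    pred_mask : boolean array of the same shape, True = predicted structure
    """
    ids = np.unique(gt_labels)
    ids = ids[ids != 0]                    # drop background
    if ids.size == 0:
        return 0.0
    detected = 0
    for obj_id in ids:
        obj = gt_labels == obj_id
        coverage = pred_mask[obj].mean()   # fraction of object pixels predicted positive
        if coverage >= min_overlap:
            detected += 1
    return detected / ids.size

# Toy example: two ground-truth objects, prediction covers only the first.
gt = np.zeros((4, 6), dtype=int)
gt[0:2, 0:2] = 1                           # object 1
gt[2:4, 4:6] = 2                           # object 2
pred = np.zeros((4, 6), dtype=bool)
pred[0:2, 0:2] = True                      # fully covers object 1 only
print(object_recall(gt, pred))             # → 0.5
```

A pixel-wise metric such as IoU would weight large objects more heavily; counting per object, as here, treats each ancient structure equally regardless of size.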
This paper models multihop data routing in Vehicular Ad-hoc Networks (VANETs) as a Multiple Criteria Decision Making (MCDM) problem in four steps. First, the criteria that affect network-layer performance are captured and transformed into fuzzy sets. Second, the fuzzy sets are characterized by Fuzzy Membership Functions (FMFs), which are interpolated from data collected in extensive experimental simulations. Third, the Analytical Hierarchy Process (AHP) is used to identify the relationships among the criteria. Fourth, multiple fuzzy rules are determined, and a Takagi-Sugeno-Kang (TSK) inference system is employed to infer and aggregate the final forwarding decision. By integrating MCDM, FMFs, AHP, and TSK, we designed a distributed, opportunistic data-routing protocol, VEFR (Vehicular Environment Fuzzy Router), which targets vehicle-to-vehicle (V2V) communication and runs in two main processes: Road Segment Selection (RSS) and Relay Vehicle Selection (RVS). RSS selects the successive junctions through which packets should travel from source to destination, while RVS selects relay vehicles within the chosen road segment. Experimental results show that the protocol performs and scales well with both network size and density, considering the combined objectives of end-to-end packet delivery ratio and end-to-end latency.