This paper describes the CMS trigger system and its performance during Run 1 of the LHC. The trigger system consists of two levels designed to select events of potential physics interest from a GHz (MHz) interaction rate of proton-proton (heavy ion) collisions. The first level of the trigger is implemented in hardware, and selects events containing detector signals consistent with an electron, photon, muon, τ lepton, jet, or missing transverse energy. A programmable menu of up to 128 object-based algorithms is used to select events for subsequent processing. The trigger thresholds are adjusted to the LHC instantaneous luminosity during data taking in order to restrict the output rate to 100 kHz, the upper limit imposed by the CMS readout electronics. The second level, implemented in software, further refines the purity of the output stream, selecting an average rate of 400 Hz for offline event storage. The objectives, strategy and performance of the trigger system during the LHC Run 1 are described.
<p><strong>Abstract.</strong> Vehicle localization is an essential component of stable autonomous car operation. Although many localization algorithms exist, their accuracy and cost still need considerable improvement. In this paper, a sensor-fusion-based localization algorithm is used to address this problem. Our sensor system is composed of in-vehicle sensors, a GPS, and a vision sensor. The localization algorithm is based on an extended Kalman filter and consists of a time update step and a measurement update step. In the time update step, in-vehicle sensors such as the yaw-rate and speed sensors are used; in the measurement update step, GPS and vision sensor information are used to update the vehicle position. We use a visual odometry library to process the vision sensor data and generate the moving distance and direction of the car. In particular, when performing visual odometry we use a georeferenced image database to reduce error accumulation. The proposed localization algorithm is verified and evaluated through experiments. The RMS error of the estimates from the proposed algorithm is about 4.3<span class="thinspace"></span>m, an improvement in accuracy of about 40<span class="thinspace"></span>% compared with the GPS-only method. This shows the feasibility of the proposed localization algorithm. However, the accuracy still needs to be improved before the algorithm can be applied to an autonomous car. We therefore plan to use multiple cameras (rear or AVM cameras) and additional information such as a high-definition map or V2X communication, and to revise the filter and error modelling for better results.</p>
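The two-step EKF cycle described in this abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the state vector [x, y, heading], the motion model driven by the speed and yaw-rate sensors, and all noise covariances are simplified placeholders, and the position measurement stands in for either a GPS fix or a visual odometry result.

```python
import numpy as np

# Minimal EKF sketch: state = [x, y, heading].
# Time update uses in-vehicle speed and yaw-rate sensors;
# measurement update uses a position fix (GPS or visual odometry).
# All models and noise values below are illustrative placeholders.

def time_update(x, P, speed, yaw_rate, dt, Q):
    """Predict the state with a simple planar motion model."""
    theta = x[2]
    x_pred = x + np.array([speed * np.cos(theta) * dt,
                           speed * np.sin(theta) * dt,
                           yaw_rate * dt])
    # Jacobian of the motion model with respect to the state
    F = np.array([[1.0, 0.0, -speed * np.sin(theta) * dt],
                  [0.0, 1.0,  speed * np.cos(theta) * dt],
                  [0.0, 0.0,  1.0]])
    return x_pred, F @ P @ F.T + Q

def measurement_update(x, P, z, R):
    """Correct the state with a position measurement z = [x, y]."""
    H = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])
    y = z - H @ x                    # innovation
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    return x + K @ y, (np.eye(3) - K @ H) @ P
```

One cycle alternates a prediction driven by the in-vehicle sensors with a correction from whichever absolute position source is available, which is why the fusion degrades gracefully when one source drops out.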
<p><strong>Abstract.</strong> Marine incidents have caused serious casualties and damage to property, and situational awareness and prompt action are needed to prevent further extensive damage. The importance of maritime monitoring using UAVs has been raised, and a platform should be prepared to respond immediately to urgent situations. In this research, a real-time drone image mapping platform for marine surveillance is proposed that receives marine images acquired and transmitted by drones and processes them in real time. The proposed platform is divided into 1) a UAV system, 2) real-time image processing, and 3) visualization. The UAV system transfers data from a drone to the ground stations. The real-time image processing module generates individual orthophotos through direct georeferencing in real time and detects ships on the orthophotos. The visualization module displays the orthophotos. An overall mapping time of 3.26 seconds on average was verified for image mapping, and the ship detection time for a single image was estimated to be within about 1 second, which is suitable for an environment in which emergencies must be handled. In conclusion, the real-time drone mapping platform introduced in this study can be considered suitable for maritime monitoring that requires swift responses.</p>
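The direct georeferencing step used by such real-time mapping pipelines can be illustrated with a minimal sketch: projecting a single image pixel to ground coordinates using only the camera's exterior orientation (position and rotation from onboard GNSS/IMU), assuming a pinhole camera and flat terrain at height zero. The camera parameters and the nadir-looking rotation below are hypothetical, not values from this platform.

```python
import numpy as np

# Sketch of direct georeferencing: map one image pixel to ground
# coordinates using only the camera's exterior orientation,
# assuming a flat ground plane at Z = 0 (no terrain model).
# Camera parameters here are illustrative placeholders.

def pixel_to_ground(u, v, f, cx, cy, C, R):
    """u, v: pixel; f: focal length (px); cx, cy: principal point;
    C: camera position (X, Y, Z); R: camera-to-world rotation matrix."""
    d_cam = np.array([u - cx, v - cy, f])  # viewing ray in camera frame
    d = R @ d_cam                          # viewing ray in world frame
    t = -C[2] / d[2]                       # intersect the plane Z = 0
    return C + t * d
```

With a full bundle of pixels this same intersection produces the individual orthophoto; accuracy is then bounded by the GNSS/IMU quality and the flat-terrain assumption, which is why direct georeferencing is fast but less accurate than triangulation-based methods.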
A key factor contributing to the variability in the microbial kinetic parameters reported from batch assays is parameter identifiability, i.e., the ability of the mathematical routine used for parameter estimation to provide unique estimates of the individual parameter values. This work encompassed a three-part evaluation of the parameter identifiability of intrinsic kinetic parameters describing the Andrews growth model that are obtained from batch assays. First, a parameter identifiability analysis was conducted by visually inspecting the sensitivity equations for the Andrews growth model. Second, the practical retrievability of the parameters in the presence of experimental error was evaluated for the parameter estimation routine used. Third, the results of these analyses were tested using an example data set from the literature for a self-inhibitory substrate. The general trends from these analyses were consistent and indicated that it is very difficult, if not impossible, to simultaneously obtain a unique set of estimates of intrinsic kinetic parameters for the Andrews growth model using data from a single batch experiment.
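The Andrews growth model referred to above takes the Haldane substrate-inhibition form μ(S) = μmax·S / (Ks + S + S²/Ki). A short sketch of the model, with illustrative parameter values rather than ones from the cited data set:

```python
import math

# Andrews (Haldane) substrate-inhibition growth model:
#   mu(S) = mu_max * S / (Ks + S + S**2 / Ki)
# Parameter values used in the test below are illustrative only.

def mu(S, mu_max, Ks, Ki):
    """Specific growth rate at substrate concentration S."""
    return mu_max * S / (Ks + S + S**2 / Ki)

def peak_substrate(Ks, Ki):
    """Substrate level where mu(S) is maximal: d(mu)/dS = 0 at
    S* = sqrt(Ks * Ki)."""
    return math.sqrt(Ks * Ki)
```

Because the curve peaks at S* = √(Ks·Ki) with μ(S*) = μmax / (1 + 2√(Ks/Ki)), rather different (μmax, Ks, Ki) combinations can produce nearly the same response over the substrate range of a single batch experiment, which is one way to see the identifiability problem the analysis above describes.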
ABSTRACT: Real-time acquisition of accurate positions is essential for the proper operation of driver assistance systems and autonomous vehicles. Since current systems mostly depend on a GPS and map-matching technique, they show poor and unreliable performance in areas where GPS signals are blocked or weak. In this study, we propose a vision-oriented car navigation method based on the fusion of a GPS and in-vehicle sensors. We employ a single photo resection process to derive the position and attitude of the camera, and thus those of the car. These image georeferencing results are combined with the other sensory data in a sensor fusion framework for more accurate position estimation using an extended Kalman filter. The proposed system estimated the positions with an accuracy of 15 m even though GPS signals were unavailable during the entire 15-minute test drive. The proposed vision-based system can be effectively utilized for the low-cost yet highly accurate and reliable navigation systems required for intelligent or autonomous vehicles.
Abstract. Facilities such as roads and parking lots play an important role in our daily lives. Damage to such vehicle facilities can cause human injury as well as inconvenience and cost. To prevent this, facilities are monitored periodically, but current monitoring methods are inefficient because they require blocking the facility or working late at night. To increase the efficiency of monitoring, research using images, especially drone images, has been conducted. However, when drone images are used, there is a trade-off between accuracy and processing time. In this study, we propose real-time drone mapping based on reference images for efficient vehicle facility monitoring. It consists of reference image construction, aerial triangulation (AT) based on reference images (refAT), and orthophoto generation. refAT refers to a method of performing AT using reference images as reference data. We built 154 drone reference images in the target area and compared the processing time and accuracy of direct georeferencing and refAT. refAT showed a processing time of about 8.95 seconds and an accuracy of 3.4 cm, whereas direct georeferencing showed a processing time of about 1.49 seconds and an accuracy of 22.5 m. If the method of this study is used for facility monitoring, the efficiency of monitoring is expected to improve in both speed and accuracy.
Commission I, ICWG I/Vb
KEY WORDS: Mapping, Aerial, On-line, Automatic, Orthoimage, UAV, Sensor, Damage, Disaster
ABSTRACT: Damage assessment is an important step toward the restoration of areas severely affected by natural disasters or accidents. For more accurate and rapid assessment, one should utilize geospatial data such as ortho-images acquired over the damaged areas. Change detection based on geospatial data from before and after the damage enables fast and automatic assessment with reasonable accuracy. Accordingly, there has been significant demand for a rapid mapping system that can provide ortho-images of the damaged areas to specialists and decision makers in disaster management agencies. In this study, we are developing a UAV-based rapid mapping system that can acquire multi-sensory data in the air and generate ortho-images from the data on the ground rapidly and automatically. The proposed system consists of two main segments: an aerial segment and a ground segment. The aerial segment acquires sensory data through autonomous flight over the specified target area; it consists of a micro UAV platform, a mirror-less camera, a GPS, a MEMS IMU, and a sensor integration and synchronization module. The ground segment receives and processes the multi-sensory data to produce ortho-images rapidly and automatically; it consists of a computer with software for flight planning, data reception, georeferencing, and orthoimage generation. As this project is ongoing, we introduce its overview, describe the main components of each segment, and provide intermediate results from preliminary test flights.