The infrared (IR) spectrum of protonated azulene (AzuH+, C10H9+) has been measured in the fingerprint range (600–1800 cm−1) by means of IR multiple-photon dissociation (IRMPD) spectroscopy in a Fourier transform ion cyclotron resonance mass spectrometer equipped with an electrospray ionization source, using a free electron laser. The potential energy surface of AzuH+ has been characterized at the B3LYP/6-311G** level in order to determine the global and local minima and the corresponding transition states for interconversion. The energies of the local and global minima, the dissociation energies for the lowest-energy fragmentation pathways, and the proton affinity have been evaluated at the CBS-QB3 level. Comparison with calculated linear IR absorption spectra supports the assignment of the IRMPD spectrum to C4-protonated AzuH+, the most stable of the six distinguishable C-protonated AzuH+ isomers. Comparison between Azu and C4-AzuH+ reveals the effects of protonation on the geometry, vibrational properties, and charge distribution of these fundamental aromatic molecules. Calculations at the MP2 level indicate that this technique is not suitable for predicting reliable IR spectra for this type of carbocation, even with relatively large basis sets. The IRMPD spectrum of protonated azulene is compared to that of isomeric protonated naphthalene and to an astronomical spectrum of the unidentified IR emission bands.
Multi-object tracking is a crucial problem for autonomous vehicles. Most state-of-the-art approaches adopt the tracking-by-detection strategy, a two-step procedure consisting of a detection module and a tracking module. In this paper, we improve both steps. We improve the detection module by incorporating temporal information, which is beneficial for detecting small objects. For the tracking module, we propose a novel correlation filter tracker based on compressed deep convolutional neural network (CNN) features. By carefully integrating these two modules, the proposed multi-object tracking approach is able to re-identify (ReID) a tracked object once it is lost. Extensive experiments were performed on the KITTI and MOT2015 tracking benchmarks. The results indicate that our approach outperforms most state-of-the-art tracking approaches.
Road detection is a key task for autonomous land vehicles. Monocular vision-based road-detection algorithms are mostly based on machine learning approaches and are usually cast as classification problems. However, pixel-wise classifiers face ambiguity caused by changes in road appearance, illumination, and weather. An effective way to reduce this ambiguity is to model the contextual information with structured learning and prediction. Currently, the widely used structured prediction models in road detection are the Markov random field and the conditional random field. However, random field-based methods require additional complex optimization after pixel-wise classification, making them unsuitable for real-time applications. In this paper, we present a structured random forest-based road-detection algorithm that models the contextual information efficiently. By mapping the structured label space to a discrete label space, the test function of each split node can be trained in a similar way to that of classical random forests. Structured random forests exploit the contextual information of image patches as well as the structural information of the labels to obtain more consistent results. In addition, by predicting a batch of pixels in a single classification, structured random forest-based road detection can be much more efficient than the conventional pixel-wise random forest. Experimental results on the KITTI-ROAD dataset and on data collected in typical unstructured environments show that structured random forest-based road detection outperforms the classical pixel-wise random forest in both accuracy and efficiency.
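The structured-to-discrete label mapping mentioned in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes a simplified setup in which each training sample carries a small binary road/non-road label patch, and patches with identical layouts are assigned the same discrete class id, so an ordinary classification split criterion can then be trained on those ids.

```python
# Hypothetical illustration: map structured (patch) labels to discrete
# class ids so a classical random-forest split test can be trained.
def patch_to_discrete_label(patch, codebook):
    """Map a 2-D binary label patch to a discrete class id.

    `codebook` accumulates distinct patch layouts; the index at which
    a layout was first seen serves as its discrete label.
    """
    key = tuple(tuple(row) for row in patch)  # hashable patch layout
    if key not in codebook:
        codebook[key] = len(codebook)
    return codebook[key]

codebook = {}
patches = [
    [[0, 0], [1, 1]],   # road in the lower half
    [[1, 1], [1, 1]],   # fully road
    [[0, 0], [1, 1]],   # same layout as the first patch
]
labels = [patch_to_discrete_label(p, codebook) for p in patches]
print(labels)  # identical layouts share a discrete label -> [0, 1, 0]
```

In practice the paper's approach operates on a much larger structured label space, where an exact layout lookup would not generalize; the sketch only conveys the idea that structured outputs can be reduced to discrete classes for node-splitting purposes.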
Deep learning has recently demonstrated promising performance for vision-based parking-slot detection. However, very few existing methods explicitly learn the link information between marking-points, resulting in complex post-processing and erroneous detections. In this paper, we propose an attentional graph neural network-based parking-slot detection method, which treats the marking-points in an around-view image as graph-structured data and uses a graph neural network to aggregate the neighboring information between marking-points. Without any manually designed post-processing, the proposed method is end-to-end trainable. Extensive experiments have been conducted on a public benchmark dataset, where the proposed method achieves state-of-the-art accuracy. Code is publicly available at https://github.com/Jiaolong/gcn-parking-slot.
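The aggregation step described in the abstract can be sketched in miniature. The following is a hypothetical illustration, not the released code: it assumes each marking-point carries a short feature vector, connects all points in a fully connected graph, and performs one round of attention-weighted neighbor aggregation with softmax over dot-product scores, which is the core of attentional message passing.

```python
import math

# Hypothetical sketch: one round of attentional aggregation over
# marking-point features treated as nodes of a fully connected graph.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def aggregate(features):
    """Update each node with an attention-weighted sum of all nodes."""
    updated = []
    for q in features:
        scores = [dot(q, k) for k in features]       # dot-product attention
        m = max(scores)                              # stabilize the softmax
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights = [e / z for e in exps]
        updated.append([
            sum(w * f[d] for w, f in zip(weights, features))
            for d in range(len(q))
        ])
    return updated

points = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]        # toy marking-point features
out = aggregate(points)
print([[round(v, 3) for v in row] for row in out])
```

A real detector would learn query/key/value projections and stack several such rounds before predicting which marking-point pairs form a slot entrance; this sketch only shows the neighborhood-aggregation mechanism itself.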