Cellular structures are lightweight engineered materials that have gained much attention with the development of additive manufacturing technologies. This article introduces a precise approach to predicting the mechanical properties of additively manufactured lattice structures using deep learning. Diamond-shaped nodal lattice structures are designed by varying strut length, strut diameter, and strut orientation angle. The samples are manufactured by laser powder bed fusion (LPBF) of Ti-64 alloy and subjected to compression testing to measure the ultimate strength, elastic modulus, and specific strength. Machine learning models, namely a shallow neural network (SNN), a deep neural network (DNN), and a deep learning neural network (DLNN), are developed and compared with the statistical design of experiments (DoE) approach. The trained DLNN model shows the highest performance among the DNN, DoE, and SNN approaches, with mean percentage errors of 5.26%, 14.60%, and 9.39% for the ultimate strength, elastic modulus, and specific strength, respectively. The DLNN model is used to create process maps and is further validated. The results show that although deep learning is typically preferred for big data, the optimized DLNN model outperforms the statistical DoE approach and can be a favorable tool for predicting lattice structure properties with limited data.
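As an illustration of how such a network maps lattice design parameters to predicted mechanical properties, the sketch below runs a forward pass through a small fully connected network in NumPy. The layer sizes, random weights, and the example design vector are hypothetical placeholders for exposition, not the trained DLNN from the article:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def mlp_forward(x, weights, biases):
    """Forward pass of a small fully connected network.
    x: (3,) design vector [strut_length, strut_diameter, strut_angle].
    Returns a (3,) vector: [ultimate_strength, elastic_modulus, specific_strength]."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = relu(W @ h + b)              # hidden layers with ReLU activation
    W, b = weights[-1], biases[-1]
    return W @ h + b                     # linear output layer for regression

rng = np.random.default_rng(0)
sizes = [3, 16, 16, 3]                   # input -> two hidden layers -> output
weights = [rng.standard_normal((o, i)) * 0.1
           for i, o in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(o) for o in sizes[1:]]

# Hypothetical design: length (mm), diameter (mm), orientation angle (deg)
x = np.array([4.0, 0.8, 30.0])
y = mlp_forward(x, weights, biases)
print(y.shape)  # (3,)
```

In practice the weights would be fitted to the measured compression-test data rather than drawn at random, and inputs would be normalized before training.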
Vision-based localization systems, namely visual odometry (VO) and visual-inertial odometry (VIO), have attracted great attention recently and are regarded as critical modules for building fully autonomous systems. The simplicity of visual and inertial state estimators, along with their applicability to resource-constrained platforms, has motivated the robotics community to research and develop novel approaches that maximize their robustness and reliability. In this paper, we survey state-of-the-art VO and VIO approaches; studies related to localization in visually degraded environments are also reviewed. The reviewed VO techniques and related studies are analyzed in terms of key design aspects, grouped into appearance-based, feature-based, and learning-based approaches. Research studies related to VIO are categorized by the degree and type of fusion into loosely coupled, semi-tightly coupled, and tightly coupled approaches, and into filtering-based and optimization-based paradigms. This paper provides an overview of the main components of visual localization, highlights the pros and cons of each design approach, and compares the latest research works in this field. Finally, a detailed discussion of the challenges associated with the reviewed approaches and future research considerations is formulated.
Neuromorphic vision is a bio-inspired technology that has triggered a paradigm shift in the computer vision community and is serving as a key enabler for a wide range of applications. This technology offers significant advantages, including reduced power consumption, reduced processing needs, and communication speedups. However, neuromorphic cameras suffer from significant amounts of measurement noise, which deteriorates the performance of neuromorphic event-based perception and navigation algorithms. In this article, we propose a novel noise filtration algorithm to eliminate events that do not represent real log-intensity variations in the observed scene. We employ a graph neural network (GNN)-driven transformer algorithm, called GNN-Transformer, to classify every active event pixel in the raw stream as a real log-intensity variation or noise. Within the GNN, a message-passing framework, referred to as EventConv, is carried out to reflect the spatiotemporal correlation among the events while preserving their asynchronous nature. We also introduce the known-object ground-truth labeling (KoGTL) approach for generating approximate ground-truth labels of event streams under various illumination conditions. KoGTL is used to generate labeled datasets from experiments recorded in challenging lighting conditions, including moonlight. These datasets are used to train and extensively test the proposed algorithm. When tested on unseen datasets, the proposed algorithm outperforms state-of-the-art methods by at least 8.8% in terms of filtration accuracy. Additional tests are also conducted on publicly available datasets (ETH Zürich Color-DAVIS346 datasets) to demonstrate the generalization capabilities of the proposed algorithm in the presence of illumination variations and different motion dynamics. Compared to state-of-the-art solutions, qualitative results verify the superior capability of the proposed algorithm to eliminate noise while preserving meaningful events in the scene.
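The core message-passing idea behind EventConv, aggregating information from spatiotemporally neighbouring events while keeping each event's own state, can be sketched as below. The window sizes, the use of polarity as the input feature, and the mean aggregation are illustrative assumptions; the actual learned update inside the GNN-Transformer is not specified in the abstract:

```python
import numpy as np

def event_neighbors(events, i, r_px=3, dt_s=0.01):
    """Indices of events inside a spatiotemporal window around event i.
    events: (N, 4) array with columns [x, y, t, polarity]."""
    d_xy = np.max(np.abs(events[:, :2] - events[i, :2]), axis=1)
    d_t = np.abs(events[:, 2] - events[i, 2])
    mask = (d_xy <= r_px) & (d_t <= dt_s)
    mask[i] = False  # exclude the event itself
    return np.nonzero(mask)[0]

def message_pass(events, feats, i, r_px=3, dt_s=0.01):
    """One aggregation step: the node keeps its own feature and appends the
    mean of its neighbours' features (a stand-in for a learned update)."""
    nbrs = event_neighbors(events, i, r_px, dt_s)
    agg = feats[nbrs].mean(axis=0) if len(nbrs) else np.zeros_like(feats[i])
    return np.concatenate([feats[i], agg])

# Tiny synthetic stream: a cluster of correlated events plus one isolated event.
events = np.array([
    [10, 10, 0.000, 1],
    [11, 10, 0.002, 1],
    [10, 11, 0.004, 1],
    [50, 50, 0.005, 1],  # spatially isolated -> likely noise
], dtype=float)
feats = events[:, 3:4]  # use polarity as a 1-D input feature

msg0 = message_pass(events, feats, 0)  # correlated event: non-zero neighbour mean
msg3 = message_pass(events, feats, 3)  # isolated event: zero neighbour aggregate
print(msg0, msg3)
```

A downstream classifier would then score each event's aggregated feature vector as signal or noise; isolated events with empty neighbourhoods, like the last one above, are the natural noise candidates.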
Achieving high estimation accuracy is significant for semantic simultaneous localization and mapping (SLAM) tasks. Yet, the estimation process is vulnerable to several sources of error, including limitations of the instruments used to perceive the environment, shortcomings of the employed algorithm, environmental conditions, and other unpredictable noise. In this paper, a novel stacked long short-term memory (LSTM) based error reduction approach is developed to enhance the accuracy of semantic SLAM in the presence of such error sources. Training and testing datasets were constructed through simulated and real-time experiments. The effectiveness of the proposed approach was demonstrated by its ability to capture and reduce semantic SLAM estimation errors in the training and testing datasets. Quantitative performance was measured using the absolute trajectory error (ATE) metric. The proposed approach was compared to vanilla and bidirectional LSTM networks, shallow and deep neural networks, and support vector machines; it outperforms all of these structures and significantly improves the accuracy of semantic SLAM. To further verify its applicability, the approach was tested on real-time sequences from the TUM RGB-D dataset, where it improved the estimated trajectories.
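A stacked LSTM of the kind described can be sketched in NumPy as below. The layer sizes, the 6-D pose-error input features, and the random (untrained) weights are assumptions for illustration, not the network from the paper:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMCell:
    """Minimal LSTM cell with random weights (illustrative, untrained)."""
    def __init__(self, n_in, n_hidden, rng):
        self.nh = n_hidden
        # All four gates computed from one stacked weight matrix
        self.W = rng.standard_normal((4 * n_hidden, n_in + n_hidden)) * 0.1
        self.b = np.zeros(4 * n_hidden)

    def step(self, x, h, c):
        z = self.W @ np.concatenate([x, h]) + self.b
        i, f, g, o = np.split(z, 4)              # input, forget, cell, output gates
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
        return h, c

def run_stack(seq, cells):
    """Feed a sequence through stacked cells; return the top layer's final hidden state."""
    states = [(np.zeros(c.nh), np.zeros(c.nh)) for c in cells]
    for x in seq:
        inp = x
        for k, cell in enumerate(cells):
            states[k] = cell.step(inp, *states[k])
            inp = states[k][0]                   # hidden state feeds the next layer
    return states[-1][0]

rng = np.random.default_rng(1)
# Hypothetical input: per-step 6-D pose-error features (translation + rotation)
seq = rng.standard_normal((20, 6))
cells = [LSTMCell(6, 32, rng), LSTMCell(32, 32, rng)]  # two stacked layers
correction = run_stack(seq, cells)
print(correction.shape)  # (32,)
```

In a trained version, a final linear layer would map the top hidden state to a pose-error correction, and the weights would be fitted to minimize ATE on the training trajectories.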
In year-round hot climatic conditions, conventional air conditioning systems consume significant amounts of electricity, primarily generated by conventional power plants. A compression-assisted, multi-ejector space cooling system driven by low-grade solar thermal energy is investigated in terms of energy and exergy performance, using a real-gas-property-based ejector model for a 36 kW-scale air conditioning application exposed to annually high outdoor temperatures (up to 42 °C), for four working fluids (R11, R141b, R245fa, R600a). Using R245fa, the multi-ejector system effectively triples the operating condenser temperature range of a single-ejector system to cover the full range of annual outdoor conditions, while compression boosting reduces the generator heat input requirement and improves the overall refrigeration coefficient of performance (COP) by factors of ~3–8 at medium- to high-bound condenser temperatures, relative to simple ejector cycles. The system solar fraction varies from ~0.2 in summer to ~0.9 in winter, with annual average mechanical and overall COPs of 24.5 and 0.21, respectively. Exergy destruction primarily takes place in the ejector assembly, but ejector exergy efficiency improves with compression boosting. The system could reduce annual electric cooling loads by over 40% compared with a conventional local split air conditioner, with corresponding savings in electricity expenditure and greenhouse gas (GHG) emissions.
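A back-of-envelope check of the reported figures, assuming the standard definitions COP_mech = Q_cool / W_comp and COP_overall = Q_cool / (Q_gen + W_comp) (the abstract does not state these definitions explicitly):

```python
# Figures from the abstract, with assumed standard COP definitions:
#   COP_mech    = Q_cool / W_comp
#   COP_overall = Q_cool / (Q_gen + W_comp)
Q_cool = 36.0        # kW, system cooling capacity
COP_mech = 24.5      # annual average mechanical COP
COP_overall = 0.21   # annual average overall COP

W_comp = Q_cool / COP_mech             # compressor (electrical) power, kW
Q_gen = Q_cool / COP_overall - W_comp  # generator (solar thermal) heat input, kW

print(round(W_comp, 2))  # -> 1.47 kW
print(round(Q_gen, 1))   # -> 170.0 kW
```

Under these assumed definitions, the large thermal input relative to the small compressor power is consistent with the system's premise: most of the cooling work is driven by low-grade solar heat rather than electricity.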