High-frame-rate ultrasonography based on coherent compounding of unfocused beams can potentially transform the assessment of cardiac function. Because successive waves must be combined coherently, this approach is sensitive to high-velocity tissue motion. We investigated coherent compounding of tilted diverging waves emitted from a 2.5 MHz clinical phased-array transducer. To cope with high myocardial velocities, a triangle transmit sequence of diverging waves is proposed, combined with tissue Doppler imaging to perform motion compensation (MoCo). The compound sequence with integrated MoCo was tuned in simulations and tested in vitro and in vivo. Realistic myocardial velocities were analyzed in an in vitro spinning disk containing anechoic cysts. Whereas an 8 dB decrease (no motion versus high motion) was observed without MoCo, the contrast-to-noise ratio of the cysts was preserved with the MoCo approach. With this method, we could provide high-quality in vivo B-mode cardiac images with tissue Doppler at 250 frames per second. Although the septum and the anterior mitral leaflet were poorly apparent without MoCo, they became well perceptible and well contrasted with MoCo. The septal and lateral mitral annulus velocities determined by tissue Doppler were concordant with those measured by pulsed-wave Doppler with a clinical scanner (r² = 0.7, y = 0.9x + 0.5, N = 60). To conclude, high-contrast echocardiographic B-mode and tissue Doppler images can be obtained with diverging beams when motion compensation is integrated in the coherent compounding process.
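The compensation step can be sketched in one dimension: a tissue Doppler velocity estimate supplies a phase rotation that re-aligns the successive transmits before they are coherently summed. Below is a minimal single-scatterer sketch of this idea; all parameters and the echo model are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Assumed parameters (illustrative, not from the paper)
f0 = 2.5e6          # transmit center frequency [Hz]
c = 1540.0          # speed of sound [m/s]
prf = 4500.0        # pulse repetition frequency [Hz]
n_tx = 9            # tilted diverging-wave transmits per compound frame
v = 0.12            # axial tissue velocity [m/s] (high myocardial motion)

# Complex baseband echo from one scatterer: axial motion shifts the
# round-trip phase by 4*pi*f0*v*t/c between successive transmits.
t = np.arange(n_tx) / prf
phase_drift = 4 * np.pi * f0 * v * t / c
echoes = np.exp(1j * phase_drift)          # unit-amplitude samples

naive = np.abs(echoes.sum()) / n_tx        # coherent sum without MoCo

# MoCo: estimate velocity from the lag-1 autocorrelation phase (the
# standard Doppler estimator), then counter-rotate each transmit
# before compounding.
v_est = np.angle(np.sum(echoes[1:] * np.conj(echoes[:-1]))) * c * prf / (4 * np.pi * f0)
moco = np.abs((echoes * np.exp(-1j * 4 * np.pi * f0 * v_est * t / c)).sum()) / n_tx
```

Without compensation the motion-induced phase drift destroys the coherent gain (`naive` falls well below 1), while the counter-rotated sum recovers it (`moco` is approximately 1).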
Color Doppler imaging is an established pulsed ultrasound technique to visualize blood flow non-invasively. High-frame-rate (ultrafast) color Doppler, by emission of plane or circular wavefronts, allows a severalfold increase in frame rates. Conventional and ultrafast color Doppler are both limited by the range-velocity dilemma, which may result in velocity folding (aliasing) at large depths and/or large velocities. We investigated multiple pulse-repetition-frequency (PRF) emissions arranged in a series of staggered intervals to remove aliasing in ultrafast color Doppler. Staggered PRF is an emission process in which the time delays between successive pulse transmissions alternate. We tested staggered dual- and triple-PRF ultrafast color Doppler, 1) in vitro in a spinning disk and a free jet flow, and 2) in vivo in a human left ventricle. The in vitro results showed that the Nyquist velocity could be extended to up to 6 times the conventional limit. We found coefficients of determination r² ≥ 0.98 between the de-aliased and ground-truth velocities. Consistent de-aliased Doppler images were also obtained in the human left heart. Our results demonstrate that staggered multiple-PRF ultrafast color Doppler is efficient for high-velocity, high-frame-rate blood flow imaging. This is particularly relevant for new developments in ultrasound imaging relying on accurate velocity measurements.
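The de-aliasing principle behind staggered emissions can be illustrated with a dual-PRF pair: each PRF wraps the true velocity into a different Nyquist interval, and the unwrapped candidates that agree across both PRFs recover the true velocity. A minimal sketch, with an assumed 4:5 PRF ratio and illustrative parameters (not the authors' sequence design):

```python
import numpy as np

# Assumed setup (illustrative): two interleaved PRFs, two Nyquist limits.
c, f0 = 1540.0, 2.5e6
prf1, prf2 = 4000.0, 5000.0               # staggered PRF pair (ratio 4:5)
vn1 = c * prf1 / (4 * f0)                 # Nyquist velocity for PRF1 [m/s]
vn2 = c * prf2 / (4 * f0)                 # Nyquist velocity for PRF2 [m/s]

def alias(v, vn):
    """Wrap a true velocity into the [-vn, vn) Nyquist interval."""
    return (v + vn) % (2 * vn) - vn

def dealias(v1, v2, vn1, vn2, kmax=2):
    """Pick the unwrapped candidate pair that agrees best across both PRFs."""
    c1 = v1 + 2 * vn1 * np.arange(-kmax, kmax + 1)
    c2 = v2 + 2 * vn2 * np.arange(-kmax, kmax + 1)
    i, j = np.unravel_index(np.argmin(np.abs(c1[:, None] - c2[None, :])),
                            (c1.size, c2.size))
    return 0.5 * (c1[i] + c2[j])

v_true = 2.1                               # m/s, beyond either Nyquist limit
v_hat = dealias(alias(v_true, vn1), alias(v_true, vn2), vn1, vn2)
```

Each PRF alone folds a 2.1 m/s velocity (Nyquist limits of roughly 0.62 and 0.77 m/s here), yet the pair of wrapped measurements determines it unambiguously over the extended range.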
A method of near real-time detection and tracking of resident space objects (RSOs) using a convolutional neural network (CNN) and linear quadratic estimator (LQE) is proposed. Advances in machine learning architecture allow the use of low-power, low-cost embedded devices to perform complex classification tasks. To reduce the cost of tracking systems, a low-cost embedded device will be used to run a CNN detection model for RSOs in unresolved images captured by a grayscale camera and small telescope. Detection results computed in near real-time are then passed to an LQE to compute tracking updates for the telescope mount, resulting in a fully autonomous method of optical RSO detection and tracking.
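The LQE stage can be sketched as a constant-velocity Kalman filter that fuses each CNN detection into a pointing estimate for the mount. A minimal one-axis sketch with illustrative noise settings and timing (not the authors' tuning):

```python
import numpy as np

# Constant-velocity Kalman filter (a linear quadratic estimator) for one
# pointing axis; state = [angle, angular rate]. All values illustrative.
dt = 0.1                                   # detection update period [s]
F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition
H = np.array([[1.0, 0.0]])                 # only the angle is measured
Q = 1e-6 * np.eye(2)                       # process noise covariance
R = np.array([[1e-4]])                     # measurement noise covariance

x = np.array([[0.0], [0.0]])               # initial state estimate
P = np.eye(2)                              # initial covariance

def step(x, P, z):
    """One predict/update cycle with a new CNN detection z (angle, rad)."""
    x, P = F @ x, F @ P @ F.T + Q                     # predict
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)      # Kalman gain
    x = x + K @ (np.array([[z]]) - H @ x)             # measurement update
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Feed detections of a target drifting at 0.05 rad/s; the rate estimate
# converges toward the true drift and can drive the mount slew command.
for k in range(50):
    x, P = step(x, P, 0.05 * k * dt)
```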
In this paper, an autonomous method of satellite detection and tracking in images is implemented using optical flow. Optical flow is used to estimate the image velocities of detected objects in a series of space images. Given that most objects in an image will be stars, the overall image velocity from star motion is used to estimate the frame-to-frame motion of the image. Objects moving with velocity profiles distinct from the overall image velocity are then classified as potential resident space objects. The detection algorithm is exercised using both simulated star images and ground-based imagery of satellites. Finally, the algorithm is tested and compared using both a commercial and an open-source software approach, providing the reader two options based on their needs.
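The detection rule can be sketched directly: the median frame-to-frame displacement approximates the global star-field motion, and any object whose displacement deviates from it is flagged as a candidate RSO. A minimal sketch on synthetic displacements (threshold and data are illustrative assumptions):

```python
import numpy as np

def flag_movers(tracks, thresh=2.0):
    """tracks: (n_objects, 2) frame-to-frame pixel displacements.
    The median displacement approximates the global star-field motion;
    objects deviating from it by more than `thresh` pixels are flagged
    as candidate resident space objects."""
    global_motion = np.median(tracks, axis=0)
    residual = np.linalg.norm(tracks - global_motion, axis=1)
    return residual > thresh

# Nine stars share the common drift; one object moves differently.
tracks = np.vstack([np.tile([3.0, 1.0], (9, 1)), [[-5.0, 4.0]]])
candidates = flag_movers(tracks)
```

The median (rather than the mean) keeps the global-motion estimate robust to the movers themselves, so a single satellite streak does not bias the star-field velocity.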
This work utilizes a MobileNetV2 Convolutional Neural Network (CNN) for fast, mobile detection of satellites, and rejection of stars, in cluttered unresolved space imagery. First, a custom database is created using imagery from a synthetic satellite image program and labeled with bounding boxes over satellites for "satellite-positive" images. The CNN is then trained on this database, and inference is validated by checking the accuracy of the model on an external dataset constructed from real telescope imagery. In doing so, the trained CNN provides a method of rapid satellite identification for subsequent use in ground-based orbit estimation.
Robotic and human lunar landings are a focus of future NASA missions. Precision landing capabilities are vital to guarantee the success of the mission and the safety of the lander and crew. During the approach to the surface, there are multiple challenges associated with Hazard Relative Navigation to ensure safe landings. This paper focuses on a passive autonomous hazard detection and avoidance sub-system that generates an initial assessment of possible landing regions for the guidance system. The system uses a single camera and the MobileNetV2 neural network architecture to detect and discern between safe landing sites and hazards such as rocks, shadows, and craters. Monocular structure-from-motion then reconstructs the surface to provide slope and roughness analysis.
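The slope and roughness analysis on the reconstructed surface can be sketched as a plane fit: the fitted plane's tilt gives the local slope, and the residuals about the plane give the roughness. A minimal sketch on a synthetic point cloud (all values and the fitting choice are illustrative, not the paper's pipeline):

```python
import numpy as np

def slope_roughness(pts):
    """Fit a plane z = a*x + b*y + c to reconstructed surface points (n, 3);
    return the slope angle [deg] and the RMS roughness about the plane."""
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    coef, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    a, b, _ = coef
    slope = np.degrees(np.arctan(np.hypot(a, b)))          # tilt of the fit
    roughness = np.sqrt(np.mean((A @ coef - pts[:, 2]) ** 2))
    return slope, roughness

# Synthetic patch: tilted at ~11 deg with ~1 cm-scale roughness (illustrative)
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, (200, 2))
z = 0.2 * xy[:, 0] + rng.normal(0, 0.01, 200)
slope, rough = slope_roughness(np.column_stack([xy, z]))
```

A landing-site screen would then threshold both outputs, e.g. rejecting patches steeper or rougher than mission limits.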
The interest in returning to the Moon for research and exploration has increased as new tipping-point technologies make it possible. One of these initiatives is NASA's Artemis program, which plans to return humans to the lunar surface by 2024 and study water deposits on the surface. This program will also serve as a practice run for planning the logistics of sending humans to explore Mars. To return humans safely to the Moon, multiple technological advances and diverse knowledge about the nature of the lunar surface are needed. This paper discusses the design and implementation of the flight software of EagleCam, a CubeSat camera system based on the free, open-source core Flight System (cFS) architecture developed by NASA's Goddard Space Flight Center. EagleCam is a payload transported to the Moon under the Commercial Lunar Payload Services program by the Nova-C lander developed by Intuitive Machines. The camera system will capture the first third-person view of a spacecraft performing a Moon landing and collect other scientific data such as plume interaction with the surface. The complete system is composed of the CubeSat and the deployer that will eject it. This will also be the first time the WiFi protocol is used on the Moon to establish a local communication network.