Totally Automatic Robust Quantitation in NMR (TARQUIN), a new method for the fully automatic analysis of short echo time in vivo 1H magnetic resonance spectroscopy, is presented. Analysis is performed in the time domain using non-negative least squares, and a new method for applying soft constraints to signal amplitudes is used to improve fitting stability. Initial point truncation and Hankel singular value decomposition (HSVD) water removal are used to reduce baseline interference. Three methods were used to test performance. First, metabolite concentrations from six healthy volunteers at 3 T were compared with LCModel™. Second, a Monte Carlo simulation was performed and results were compared with LCModel™ to test the accuracy of the new method. Finally, the new algorithm was applied to 1956 spectra, acquired clinically at 1.5 T, to test robustness to noisy, abnormal, artifactual, and poorly shimmed spectra.

Black-box methods are computationally efficient (HLSVD; Ref. 4) for in vivo data and are effective at extracting peak parameters from simple spectra. One drawback of black-box methods is that additional knowledge of spectral features cannot be incorporated into the algorithm, so infeasible results are possible for more complex data; for example, an incorrect ratio between peaks originating from the same molecule. The AMARES (5) algorithm was developed to address this issue by extending the VARPRO (6) peak-fitting method to allow a greater level of prior knowledge to be incorporated into the fitting model.

Black-box and peak-fitting methods have been shown to be highly effective for sparse spectra such as long echo time (TE) 1H or 31P MRS; however, the complex patterns of some metabolites seen in short TE 1H MRS data are cumbersome to model as a series of single peaks. Although long TE 1H MRS is still popular, there is a growing trend towards shorter TE (7) because of the increase in metabolic information, so analysis methods suited to this data type are becoming increasingly important. For complex data, methods that incorporate a metabolite basis set have been shown to be more effective than peak-fitting methods (8).

LCModel™ (9) was one of the first algorithms to incorporate a metabolite basis set into the fitting model and is widely used for the analysis of short TE 1H MRS data. The algorithm models data in the frequency domain using a linear combination of metabolite, lipid, and macromolecule signals combined with smoothing splines to account for baseline signals. More recently, the Quantitation Based on Quantum Estimation (QUEST) (10) algorithm has been developed, which uses a combination of time-domain fitting and HSVD to model background signals. An alternative approach is taken by Automated Quantitation of Short Echo time MRS Spectra (AQSES) (11), which uses a combination of time-domain fitting and penalized splines to model the baseline. AQSES also differs from LCModel™ and QUEST as it uses the variable projection method to estimate the amplitudes of the metabolite basis set, resulting in a reduc...
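The core fitting step lends itself to a compact illustration. The following is a minimal sketch, assuming a precomputed complex-valued metabolite basis, of time-domain non-negative least-squares amplitude estimation with initial point truncation; the function names, toy basis, and truncation length are illustrative and not TARQUIN's actual implementation (soft constraints and HSVD water removal are omitted).

```python
# Hedged sketch: time-domain NNLS fit of a metabolite basis to a FID,
# with initial point truncation to suppress broad baseline signals.
import numpy as np
from scipy.optimize import nnls

def fit_amplitudes(fid, basis_fids, n_truncate=20):
    """Estimate non-negative metabolite amplitudes in the time domain.

    fid        : complex array, shape (N,), acquired signal
    basis_fids : complex array, shape (N, M), one column per metabolite
    n_truncate : number of initial points to discard; rapidly decaying
                 baseline components dominate these points
    """
    y = fid[n_truncate:]
    B = basis_fids[n_truncate:, :]
    # nnls works on real data, so stack real and imaginary parts
    A = np.vstack([B.real, B.imag])
    b = np.concatenate([y.real, y.imag])
    amplitudes, _residual = nnls(A, b)
    return amplitudes

# Toy example with a two-metabolite basis of damped complex exponentials
t = np.arange(2048) / 2000.0
basis = np.stack([np.exp((2j * np.pi * 50 - 10) * t),
                  np.exp((2j * np.pi * 120 - 10) * t)], axis=1)
fid = 3.0 * basis[:, 0] + 1.5 * basis[:, 1]
print(fit_amplitudes(fid, basis))  # approx. [3.0, 1.5]
```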
The quantitation of metabolite concentrations from in vitro NMR spectra is hampered by the sensitivity of peak positions to experimental conditions. The quantitation methods currently available are generally labor intensive and cannot readily be automated. Here, an algorithm is presented for the automatic time-domain analysis of high-resolution NMR spectra. The TARQUIN algorithm uses a set of basis functions obtained by quantum mechanical simulation using predetermined parameters. Each basis function is optimized by subdividing it into a set of signals from magnetically equivalent spins and varying the simulated chemical shifts of each of these groups to match the signal undergoing analysis. A novel approach to the standard multidimensional minimization problem is introduced, based on evaluating the fit resulting from different permutations of possible chemical shifts obtained from one-dimensional searches. Results are presented from the analysis of 1H magic angle spinning spectra of cell lines, illustrating the robustness of the method in a typical application. Simulation was used to investigate the largest peak shifts that can be tolerated.
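To make the search strategy concrete, here is a hedged sketch of replacing one multidimensional minimization with per-group one-dimensional searches followed by exhaustive evaluation of candidate combinations; all names and the toy error function are invented for illustration and are not the paper's code.

```python
# Hedged sketch: 1-D scans per spin group yield candidate chemical
# shifts, and every combination of candidates is evaluated to find the
# best overall fit.
import itertools
import numpy as np

def candidate_shifts(fit_error_1d, shifts, n_keep=3):
    """Keep the n_keep shifts with the lowest 1-D fit error."""
    order = np.argsort(fit_error_1d)
    return shifts[order[:n_keep]]

def best_combination(groups, fit_error):
    """groups   : list of candidate-shift arrays, one per spin group
       fit_error: callable mapping a tuple of shifts to a fit residual
    Evaluates every combination of candidates and keeps the best."""
    best, best_err = None, np.inf
    for combo in itertools.product(*groups):
        err = fit_error(combo)
        if err < best_err:
            best, best_err = combo, err
    return best, best_err

# Toy example: two spin groups with true shifts 1.2 and 3.4 ppm
err = lambda s: (s[0] - 1.2) ** 2 + (s[1] - 3.4) ** 2
g = [np.array([1.0, 1.2, 1.4]), np.array([3.2, 3.4, 3.6])]
print(best_combination(g, err))  # ((1.2, 3.4), 0.0)
```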
Background: Numerous methods for classifying brain tumours based on magnetic resonance spectra and imaging have been presented in the last 15 years. Generally, these methods use supervised machine learning to develop a classifier from a database of cases for which the diagnosis is already known. However, little has been published on developing classifiers based on mixed modalities, e.g. combining imaging information with spectroscopy. In this work a method of generating probabilities of tumour class from anatomical location is presented.
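As a rough illustration of the idea (not the paper's actual method), tumour-class probabilities conditioned on anatomical location can be estimated from a case database as smoothed empirical frequencies; the locations, classes, and counts below are hypothetical.

```python
# Hedged sketch: P(class | location) from a labelled case database,
# with Laplace smoothing so unseen (location, class) pairs get
# non-zero probability.
from collections import Counter

def location_class_probs(cases, alpha=1.0):
    """cases: iterable of (location, tumour_class) pairs."""
    classes = sorted({c for _, c in cases})
    counts = Counter(cases)
    loc_totals = Counter(loc for loc, _ in cases)
    probs = {}
    for loc in loc_totals:
        denom = loc_totals[loc] + alpha * len(classes)
        probs[loc] = {c: (counts[(loc, c)] + alpha) / denom
                      for c in classes}
    return probs

# Hypothetical cases for illustration only
cases = [("cerebellum", "pilocytic astrocytoma"),
         ("cerebellum", "medulloblastoma"),
         ("cerebellum", "medulloblastoma"),
         ("brainstem", "diffuse glioma")]
print(location_class_probs(cases)["cerebellum"])
```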
The feasibility of measuring overlay using small targets has been demonstrated in an earlier paper (1). If the target is small ("smallness" being relative to the resolution of the imaging tool), then only the symmetry of its image changes with overlay offset. For our purposes the targets must be less than 5 µm across, but ideally much smaller, so that they can be positioned within the active areas of real devices. These targets allow overlay variation to be tested in ways that are not possible using larger conventional target designs. In this paper we describe continued development of this technology.

In our previous experimental work the targets were limited to relatively large sizes (3×3 µm) by the available process tools. Here we report experimental results from smaller targets (down to 1×1 µm) fabricated using an e-beam writer. We compare experimental results for the change of image asymmetry of these targets with overlay offset against modeled simulations. The image of the targets depends on film properties, and their design should be optimized to provide the maximum variation of image symmetry with overlay offset. Implementation of this technology on product wafers will be simplified by using an image model to optimize the target design for specific process layers. Our results show the required good agreement between experimental data and the model.

The determination of asymmetry from the images of targets as small as 1 µm allows the measurement of overlay with a total measurement uncertainty as low as 2 nm.
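A simple signed asymmetry metric illustrates the principle that, for a small target, only image symmetry varies with overlay offset. The sketch below uses a first-moment measure on a 1-D intensity profile; the profile model, pixel scale, and offsets are invented, and a real system would calibrate the metric against known offsets.

```python
# Hedged sketch: signed image asymmetry of a small-target profile,
# which varies approximately linearly with overlay offset near zero.
import numpy as np

def asymmetry(profile):
    """Signed first-moment asymmetry of a 1-D intensity profile about
    its geometric centre; zero for a perfectly symmetric profile."""
    idx = np.arange(profile.size) - (profile.size - 1) / 2.0
    return float(np.sum(idx * profile) / np.sum(profile))

# Toy demonstration: a Gaussian feature displaced by a known offset
x = np.linspace(-2.5, 2.5, 501)              # position, microns
dx = x[1] - x[0]                             # pixel pitch, microns
for offset in (0.0, 0.01, 0.02):             # overlay offset, microns
    profile = np.exp(-((x - offset) / 0.5) ** 2)
    print(offset, asymmetry(profile) * dx)   # tracks the offset
```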
Determining the focal position of an overlay target with respect to an objective lens is an important prerequisite of overlay metrology. At best, an out-of-focus image will provide less than optimal information for metrology; the focal depth for a high-NA imaging system at the required magnification is of the order of 5 microns. In most cases poor focus will lead to poor measurement performance. In some cases, being out of focus will cause apparent contrast reversal and similar effects, because the optical wavelengths used are about half a micron; this can cause measurement failure with some algorithms. In the very worst case, being out of focus can cause pattern recognition to fail completely, leading to a missed measurement.

Previous systems have taken one of two forms. In the first, a scan through focus is performed and the optimal position is selected using a direct, image-based focus metric, such as the high-frequency component of a Fourier transform. This always gives an optimal or near-optimal focus position, even under wide process variation, but it can be time consuming, requiring a relatively large number of images to be captured for each site visited. It also requires the optimal position to lie within the range of the scan; if the initial uncertainty is large, the focus scan must be longer, taking even more time.

The second approach is to monitor some property that has a known relationship to focus. This is often calibrated against a scan through focus; on subsequent measurements the output of this secondary system is taken as the focus position. The secondary system may be completely separate from the imaging system; the primary requirement is only that it is coupled to the imaging system. These systems are generally fast: only one measurement per site is required, and they are typically designed so that only limited image/signal processing is needed. However, such techniques are less precise or accurate than performing a scan through focus, and they are also susceptible to effects caused by variations of the wafer under test, e.g. variations in stack depth.

A fast, precise system for measuring focus position using the imaging optics has been developed. This new system achieves better accuracy than previous indirect techniques and is significantly faster than executing a scan through focus. Its output is linear with respect to focus position, and it has a very high dynamic range, providing a direct estimate of focal position even at large focus offsets. It also has an advantage over indirect systems of being an integral part of the imaging system, eliminating calibration drift over extended periods. In this paper we discuss the mathematical background, optical arrangement, and imaging algorithms, and we present initial performance results, including data on repeatability and the time taken to measure focus.
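For reference, the scan-through-focus baseline described above can be sketched in a few lines: a high-frequency Fourier metric scores each image in a focus stack and the best z-position is selected. The synthetic stack, cutoff frequency, and blur model below are assumptions for illustration only, not the paper's new method.

```python
# Hedged sketch: pick the best focus position from a z-stack using the
# high-frequency content of each image's 2-D Fourier transform.
import numpy as np
from scipy.ndimage import gaussian_filter

def hf_focus_metric(image, cutoff=0.25):
    """Fraction of spectral power above a radial frequency cutoff
    (cycles/pixel); sharper, in-focus images score higher."""
    F = np.fft.fftshift(np.fft.fft2(image))
    power = np.abs(F) ** 2
    ny, nx = image.shape
    fy = np.fft.fftshift(np.fft.fftfreq(ny))[:, None]
    fx = np.fft.fftshift(np.fft.fftfreq(nx))[None, :]
    r = np.hypot(fy, fx)
    return power[r > cutoff].sum() / power.sum()

def best_focus(z_positions, images):
    scores = [hf_focus_metric(img) for img in images]
    return z_positions[int(np.argmax(scores))]

# Synthetic demonstration: defocus modelled as Gaussian blur
rng = np.random.default_rng(0)
sharp = rng.random((128, 128))
zs = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])   # focus offsets, microns
stack = [gaussian_filter(sharp, abs(z) + 0.1) for z in zs]
print(best_focus(zs, stack))                 # 0.0
```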