Independent component analysis (ICA) is effective in removing ocular artifacts from electroencephalogram (EEG) recordings. When using ICA for ocular artifact correction, a crucial step is to correctly identify the artifact components among the decomposed independent components. In most previous work, this selection was performed manually, which is time consuming and inconvenient when dealing with large amounts of EEG data. We present a new method that automatically selects the eye blink artifact components based on the pattern of their scalp topographies, which can be characterized as a template-matching approach. The feasibility of using a fixed template to single out the eye blink component after ICA decomposition was validated in an experiment in which 18 of the 21 subjects involved exhibited a highly consistent pattern of eye blink scalp topographies. Because only the spatial feature is used to single out the eye blink component, the proposed method is efficient and easy to implement. Objective evaluation of the results on real data shows that the proposed algorithm can remove the eye blink artifact from the EEG while causing little distortion to the underlying brain activities.
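The template-matching selection described above can be sketched in a few lines: correlate each independent component's scalp topography (a column of the ICA mixing matrix) with a fixed frontal template and accept the best match above a threshold. This is a minimal numpy illustration; the channel layout, template values, and threshold here are illustrative assumptions, not the paper's actual settings.

```python
import numpy as np

def find_blink_component(mixing_matrix, template, threshold=0.9):
    """Return the index of the IC whose scalp map best matches the blink
    template (by absolute correlation), or None if no match reaches the
    threshold. Columns of mixing_matrix are the ICs' scalp topographies."""
    scores = []
    for k in range(mixing_matrix.shape[1]):
        topo = mixing_matrix[:, k]
        # abs() makes the score sign-invariant: ICA components have
        # arbitrary polarity
        r = np.corrcoef(topo, template)[0, 1]
        scores.append(abs(r))
    best = int(np.argmax(scores))
    return best if scores[best] >= threshold else None

# toy example: 4 channels (say Fp1, Fp2, O1, O2); a blink template is
# strongly frontal, weak occipital
template = np.array([1.0, 1.0, 0.1, 0.1])
A = np.array([[0.9, 0.2],
              [1.1, -0.3],
              [0.05, 0.8],
              [0.1, 0.9]])  # column 0 resembles the blink pattern
print(find_blink_component(A, template))  # → 0
```

In practice the template would be built from grand-average blink topographies and interpolated to the recording's montage before correlation.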
Kilovoltage (kV) and megavoltage (MV) imaging technologies in the treatment room are now available for image-guided radiation therapy to improve patient setup and target localization accuracy. However, developing strategies to implement these technologies efficiently and effectively for patient treatment remains challenging. This study proposed an aggregated technique for on-board CT reconstruction using a combination of kV and MV beam projections to improve data acquisition efficiency and image quality. These projections were acquired in the treatment room at the patient treatment position with a new kV imaging device installed on the accelerator gantry, orthogonal to the existing MV portal imaging device. The projection images for a head phantom and a contrast phantom were acquired using both the On-Board Imager kV imaging device and the MV portal imager mounted orthogonally on the gantry of a Varian Clinac 21EX linear accelerator. MV projections were converted into kV information prior to the aggregated CT reconstruction. The multilevel scheme algebraic reconstruction technique was used to reconstruct CT images from full projections, truncated projections, or a combination of the two. An adaptive reconstruction method was also applied, based on limited numbers of kV projections and truncated MV projections, to enhance the anatomical information around the treatment volume and to minimize the radiation dose. The effects of the total number of projections, the combination of kV and MV projections, and the beam truncation of MV projections on the details of the reconstructed kV/MV CT images were also investigated.
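The core of any algebraic reconstruction technique (ART) is the Kaczmarz sweep: each projection ray contributes one linear equation, and the image estimate is nudged row by row until every ray sum is honored. The following numpy sketch shows that basic building block only, not the multilevel scheme or the kV/MV aggregation described in the study; the tiny 2x2 "image" and ray geometry are invented for illustration.

```python
import numpy as np

def art_reconstruct(A, b, n_sweeps=200, relax=1.0):
    """Basic ART (Kaczmarz) iteration: sweep over the projection rows,
    correcting the image estimate x so each ray sum A[i]·x matches b[i]."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            a = A[i]
            x += relax * (b[i] - a @ x) / (a @ a) * a
    return x

# toy 2x2 "image" probed by four ray sums (two rows, two columns)
x_true = np.array([1.0, 2.0, 3.0, 4.0])
A = np.array([[1, 1, 0, 0],   # sum of image row 1
              [0, 0, 1, 1],   # sum of image row 2
              [1, 0, 1, 0],   # sum of image column 1
              [0, 1, 0, 1]],  # sum of image column 2
             dtype=float)
b = A @ x_true
x = art_reconstruct(A, b)
```

Note that this four-ray system is rank deficient, so the iteration converges to a solution consistent with the data rather than to `x_true` itself; adding more view angles (as the combined kV/MV acquisition does) is what removes such ambiguity.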
Seismic data interpolation is a longstanding issue. Most current methods are suitable only for randomly missing cases; to deal with regularly missing cases, an antialiasing strategy must be included. However, designing a seismic survey with a random distribution of shots and receivers is operationally challenging and often impractical. We have used deep-learning-based approaches for seismic data antialiasing interpolation, which extract deep features of the training data in a nonlinear, self-learned way and avoid the linear-event, sparsity, and low-rank assumptions of traditional interpolation methods. Based on convolutional neural networks, an eight-layer residual learning network (ResNet), with better back-propagation properties for deep layers, is designed for interpolation. A detailed training analysis is also performed. A set of simulated data is used to train the designed ResNet, and the performance is assessed with several synthetic and field data sets. Numerical examples indicate that the trained ResNet can reconstruct regularly missing traces with high accuracy. The interpolated results in the time-space domain and the frequency-wavenumber (f-k) domain demonstrate the validity of the trained ResNet. Even though the accuracy decreases as the feature difference between the test and training data increases, the proposed method can still provide reasonable interpolation results. Finally, the trained ResNet is used to reconstruct dense data with halved trace intervals for synthetic and field data. The reconstructed dense data are more continuous along the spatial direction, and the spatial aliasing effects disappear in the f-k domain. The reconstructed dense data have the potential to improve the accuracy of subsequent seismic data processing and inversion.
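The defining structural element of a ResNet is the skip connection: each block outputs its input plus a learned residual, so a deep stack of blocks starts out close to the identity map and gradients flow easily through many layers. This is a deliberately minimal numpy sketch of one such block along a single trace axis, not the paper's eight-layer network; the filter lengths and weights are placeholders.

```python
import numpy as np

def conv1d(x, w):
    # 'same'-length convolution along the trace axis
    return np.convolve(x, w, mode="same")

def residual_block(x, w1, w2):
    """One residual block: two convolutions with a ReLU between them,
    plus the identity skip connection that defines residual learning."""
    h = np.maximum(conv1d(x, w1), 0.0)   # conv + ReLU
    return x + conv1d(h, w2)             # output = input + learned residual

# with all-zero weights the block is exactly the identity map, which is
# why very deep stacks of such blocks remain easy to train
x = np.sin(np.linspace(0.0, 6.0, 64))
y = residual_block(x, np.zeros(5), np.zeros(5))
```

For interpolation, the network input would be the gather with zeroed missing traces and the learned residual would supply the missing energy; training frameworks handle the convolutions in 2D with many channels, but the skip-connection idea is the same.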
In general, we wish to interpret the most broadband data possible. However, broadband data do not always provide the best insight for seismic attribute analysis. Obviously, spectral bands contaminated by noise should be eliminated; moreover, tuning gives rise to spectral bands with higher signal-to-noise ratios. To quantify geologic discontinuities at different scales, we combined spectral decomposition and coherence. Using spectral decomposition, the spectral amplitudes corresponding to geologic discontinuities at a given scale, as well as subtle features that would otherwise be buried within the broadband seismic response, can be extracted. We applied this workflow to a 3D land data volume acquired over the Tarim Basin, Northwest China, where karst forms the principal reservoirs. We found that channels are better illuminated around 18 Hz, whereas subtle discontinuities are better delineated around 25 Hz.
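The two ingredients of this workflow can be sketched compactly: spectral decomposition reduces, at each time sample, to the windowed-DFT amplitude at a chosen frequency, and coherence can be illustrated with the classic semblance measure across neighboring traces. This numpy sketch uses a plain sliding-window DFT and semblance as stand-ins; the paper's actual decomposition and coherence algorithms are not specified here, and the window length and sample rate below are illustrative.

```python
import numpy as np

def narrowband_amplitude(trace, freq, dt, win=64):
    """Sliding-window DFT amplitude at a single frequency: a basic form of
    spectral decomposition (one spectral-amplitude sample per time sample)."""
    n = len(trace)
    t = np.arange(win) * dt
    kernel = np.exp(-2j * np.pi * freq * t) * np.hanning(win)
    half = win // 2
    padded = np.pad(trace, (half, half))
    return np.array([abs(np.dot(padded[i:i + win], kernel)) for i in range(n)])

def semblance(traces):
    """Semblance coherence across neighboring traces: 1 when all traces are
    identical, lower across discontinuities."""
    num = np.sum(traces.sum(axis=0) ** 2)
    den = traces.shape[0] * np.sum(traces ** 2)
    return num / den

# an 18 Hz sweep sampled at 4 ms; identical traces are perfectly coherent
t = np.arange(256) * 0.004
x = np.sin(2 * np.pi * 18.0 * t)
```

Running coherence on the narrowband amplitudes, rather than the broadband traces, is what isolates discontinuities at the scale selected by the chosen frequency.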
Multiple removal is one of the important preprocessing steps in a seismic data processing sequence. For accurate multiple prediction, a full 3D method is required. However, the application of such methods is often limited by practical and economic constraints, so in practice 2D prediction methods are used, such as 2D surface-related multiple elimination. The resulting predicted multiples may therefore have significant temporal shifts, spatial mismatches, and amplitude inconsistencies compared with the true 3D multiples. Adaptive multiple subtraction based on 2D blind separation of convolved mixtures (BSCM) has been proposed to estimate a 2D matching filter in a single gather. To better handle the inconsistencies between the 2D predicted multiples and the true 3D multiples, we formulated adaptive multiple subtraction as a problem of 3D BSCM. In the proposed method, the predicted multiples are modeled as the convolution of the true multiples with a 3D kernel whose third dimension lies in the gather direction. By maximizing the non-Gaussianity of the estimated primaries, an iteratively reweighted least-squares algorithm is used to obtain the 3D matching filter, which is the inverse of the 3D kernel. To avoid the possible overfitting introduced by the 3D matching filter, the proposed method fits several seismic gathers with one 3D matching filter. In addition, by using the non-Gaussianity maximization criterion, the proposed method alleviates the orthogonality assumption of the least-squares subtraction method. Furthermore, the proposed method eliminates the temporal and spatial mismatches between the 2D predicted multiples and the true 3D multiples better than the 2D BSCM subtraction method. Tests on synthetic and field data sets demonstrate the effectiveness of the 3D BSCM subtraction method.
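For context, the conventional baseline that the non-Gaussianity criterion improves on is least-squares adaptive subtraction: estimate a short matching filter that maps the predicted multiples onto the data, then subtract the matched multiples. The numpy sketch below shows this 1D baseline only (not the 3D BSCM method); the trace length, filter length, and toy delay/scale are illustrative assumptions.

```python
import numpy as np

def ls_subtract(data, predicted, flen=11):
    """Least-squares adaptive subtraction: find the matching filter f
    minimizing ||data - f * predicted||_2 and subtract the matched
    multiples. Its implicit orthogonality assumption (primaries
    uncorrelated with multiples) is what the non-Gaussianity criterion
    relaxes."""
    n = len(data)
    C = np.zeros((n, flen))            # convolution matrix of the prediction
    for j in range(flen):
        C[j:, j] = predicted[:n - j]
    f, *_ = np.linalg.lstsq(C, data, rcond=None)
    return data - C @ f

# toy example: the true multiples are a delayed, scaled copy of the prediction
rng = np.random.default_rng(0)
pred = rng.standard_normal(200)
primaries = np.zeros(200)
primaries[50] = 1.0                    # a single primary spike
multiples = 0.7 * np.concatenate([np.zeros(3), pred[:-3]])
data = primaries + multiples
est_primaries = ls_subtract(data, pred)
```

Because the filter here is purely temporal, it cannot correct the spatial mismatches of 2D predictions; extending the kernel across neighboring traces and gathers, as in the 3D formulation, addresses exactly that limitation.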
The spectral decomposition technique plays an important role in reservoir characterization, for which the time-frequency distribution method is essential. The deconvolutive short-time Fourier transform (DSTFT) method achieves superior time-frequency resolution by applying a 2D deconvolution operation to the short-time Fourier transform (STFT) spectrogram. For seismic spectral decomposition, to reduce the computational burden of the 2D deconvolution in the DSTFT, the 2D STFT spectrogram is cropped to a smaller area that includes only the positive frequencies falling within the seismic signal bandwidth. Because the low-frequency components of a seismic signal are generally dominant, removing the negative frequencies may introduce a sharp edge at zero frequency, which produces artifacts in the DSTFT spectrogram. To avoid this problem, our method calculates the STFT spectrogram from the analytic signal, obtained by applying the Hilbert transform to the original real seismic signal. Synthetic and real seismic data examples demonstrate the performance of the proposed method.
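The analytic-signal step can be sketched with the standard FFT-based Hilbert-transform recipe: zero the negative-frequency bins and double the positive ones. Since the analytic signal has no negative-frequency content by construction, a spectrogram computed from it can be restricted to positive frequencies without creating the sharp edge at zero frequency. This numpy sketch shows the construction and its two defining properties; the test signal is illustrative.

```python
import numpy as np

def analytic_signal(x):
    """Discrete analytic signal via the FFT: zero the negative-frequency
    bins and double the positive ones (the Hilbert-transform recipe)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0        # Nyquist bin is kept once for even n
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

# a 30 Hz cosine sampled at 4 ms; the analytic signal keeps the original
# trace as its real part while its spectrum is one-sided
x = np.cos(2 * np.pi * 30.0 * np.arange(256) * 0.004)
z = analytic_signal(x)
```

The same result is available as `scipy.signal.hilbert`, which implements exactly this construction.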
Time-domain velocity and moveout parameters can be obtained directly from local event slopes, which are estimated on prestack seismic gathers. In practice, there are always errors in the estimated local slopes, especially in low signal-to-noise ratio (S/N) situations, so subsurface velocity information may be obscured in the image domain spanned by velocity and the other moveout parameters. We have developed an accelerated clustering algorithm that finds cluster centers without prior information about the number of clusters. First, plane-wave destruction is used to estimate the local event slopes. For every sample in the seismic gathers, we obtain an estimate of velocity and its location in the image domain from the local event slopes. The mapped data points in this new domain exhibit a group structure, which we represent by a mixture distribution model. Then, the cluster centers of the mixture distribution model are located; these correspond to the maximum-likelihood velocities of the main subsurface structures. Approximate velocity uncertainty bounds are used to select the centers corresponding to reflections. Finally, interpolation is performed on the clustered, unevenly sampled knot velocities to build the effective velocity model on regular grids. With synthetic and field data examples, we have determined that the proposed automatic velocity estimation method can produce a stacking velocity model and a time migration velocity model with relatively high accuracy.
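Finding cluster centers without a prior cluster count can be illustrated with the density-peaks heuristic: a center is a point that has both high local density and a large distance to any denser point. The numpy sketch below uses a Gaussian kernel density and synthetic (velocity, time) points in two groups; it is a generic illustration of the idea, not the accelerated algorithm of the paper, and the kernel radius and point scales are invented (real coordinates would need per-axis normalization).

```python
import numpy as np

def density_peak_centers(points, radius, n_centers):
    """Density-peaks style center search: score each point by
    rho (local kernel density) times delta (distance to the nearest
    denser point); centers score high on both."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    rho = np.exp(-(d / radius) ** 2).sum(axis=1) - 1.0   # exclude self
    delta = np.empty(len(points))
    for i in range(len(points)):
        higher = rho > rho[i]
        # the globally densest point gets its maximum distance instead
        delta[i] = d[i, higher].min() if higher.any() else d[i].max()
    gamma = rho * delta
    return np.argsort(gamma)[::-1][:n_centers]

# toy (velocity, time) points forming two groups, as after the slope mapping
rng = np.random.default_rng(1)
g1 = rng.normal([1500.0, 1.0], 0.05, size=(30, 2))
g2 = rng.normal([2500.0, 2.0], 0.05, size=(30, 2))
pts = np.vstack([g1, g2])
centers = density_peak_centers(pts, radius=0.1, n_centers=2)
```

In the paper's setting, the selected centers would then be filtered by the velocity uncertainty bounds before interpolating the knot velocities onto a regular grid.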
Accurate and efficient velocity estimation using transmission matrix formalism based on the domain decomposition method. Inverse Problems 33, 035002 (2017).