Soft tissue sarcomas (STSs) are a rare and fascinating group of diseases that can be subdivided into STSs with specific reciprocal translocations (SRTSs) and STSs with nonspecific reciprocal translocations (NRTSs). PTEN mutations are rare in STSs, suggesting that PTEN expression may be lost by alternative mechanisms such as methylation. To determine whether aberrant PTEN methylation occurs in STSs, MassARRAY spectrometry was carried out to detect PTEN methylation patterns in STSs. We evaluated methylation levels at 41 CpG sites from −2,515 to −2,186 bp (amplicon A) and −1,786 to −1,416 bp (amplicon B) relative to the translation initiation site in 110 cases (46 SRTSs, 40 NRTSs, and 24 normal controls). In addition, immunohistochemistry (IHC) was used to detect loss of PTEN to determine whether PTEN alterations were responsible for decreased PTEN expression. Our data showed that PTEN expression was diminished in 49 (57%) STSs, whereas the remaining cases (43%) were classified as high expression. Our previous results found a PTEN mutation in only 2 of 86 cases (2.3%), suggesting that PTEN may be downregulated in STSs mainly by methylation rather than by mutation of PTEN itself. We observed that amplicon A was hypermethylated in STSs with low PTEN expression, whereas normal controls had low methylation levels (P<0.0001); this difference was not present in amplicon B (P>0.05), nor were there significant differences in PTEN methylation levels between SRTS and NRTS cases. The majority of individual CpG units within the two amplicons were hypermethylated. These findings indicate that PTEN hypermethylation is a common event in STSs and suggest that PTEN inactivation may be due to hypermethylation of its promoter. Aberrant methylation of CpG sites within the PTEN promoter may serve as a candidate biomarker for STSs.
<abstract><p>In recent years, with the development of deep learning, image color rendering methods have again become a research hotspot. To overcome the detail problems of color overstepping and boundary blurring in robust image color rendering, as well as the unstable training of generative adversarial networks, we propose a color rendering method for robust images based on a Gabor-filter-improved pix2pix model. First, the multi-direction, multi-scale selection characteristic of the Gabor filter is used to preprocess the image to be rendered, retaining the detailed features of the image during preprocessing and avoiding feature loss. Moreover, among the Gabor texture feature maps with 6 scales and 4 directions, the texture map with a scale of 7 and a direction of 0° achieves the best rendering performance. Finally, by improving the loss function of the pix2pix model and adding a penalty term, training is stabilized and an ideal color image is obtained. To evaluate the image color rendering quality of different models more objectively, the PSNR and SSIM metrics are adopted to assess the rendered images. Experimental results show that robust images rendered by the proposed method have better visual quality, and that the method reduces the influence of light and noise on the image to a certain extent.</p></abstract>
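The 6-scale, 4-direction Gabor filter bank described above can be sketched in plain NumPy. The scale list and the σ/λ settings below are illustrative assumptions, not the paper's actual parameters:

```python
import numpy as np

def gabor_kernel(ksize, sigma, theta, lambd, gamma=0.5, psi=0.0):
    """Real part of a Gabor kernel on a ksize x ksize grid."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates by theta
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / lambd + psi)
    return envelope * carrier

def gabor_bank(scales=(3, 5, 7, 9, 11, 13), thetas=(0, 45, 90, 135)):
    """6 scales x 4 directions, as in the abstract; sigma/lambda tied to scale."""
    return {(k, t): gabor_kernel(k, sigma=0.56 * k / 2,
                                 theta=np.deg2rad(t), lambd=k / 2)
            for k in scales for t in thetas}

bank = gabor_bank()          # 24 kernels in total
feat = bank[(7, 0)]          # the scale-7, 0-degree map singled out above
```

Each kernel would then be convolved with the input image to produce one texture feature map per (scale, direction) pair.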
As an important part of face recognition, facial image segmentation has become a focus of human feature detection. In this paper, the AdaBoost algorithm and Gabor texture analysis are used to segment an image containing multiple faces, which effectively reduces the false detection rate of facial image segmentation. First, the image containing face information is analyzed for texture using the Gabor algorithm, and appropriate thresholds are set for skin-like areas so that skin-like regions in the image background are removed. Then, the AdaBoost algorithm is used to detect face regions, and finally the detected face regions are segmented. Experiments show that this method can quickly and accurately segment faces in an image while effectively reducing missed and false detections.
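As a rough sketch of the background-removal step, local variance can stand in for Gabor texture energy (the actual pipeline uses Gabor responses; the window size and threshold below are assumptions for illustration):

```python
import numpy as np

def texture_energy(img, win=5):
    """Local variance in a win x win window, a stand-in for texture energy."""
    pad = win // 2
    p = np.pad(img.astype(float), pad, mode='reflect')
    windows = np.lib.stride_tricks.sliding_window_view(p, (win, win))
    return windows.var(axis=(-2, -1))

def remove_skin_like_background(img, thresh):
    """Zero out smooth (skin-like) background before running the face detector."""
    mask = texture_energy(img) >= thresh
    return img * mask, mask
```

The masked image would then be passed to the AdaBoost detector, which only has to scan the remaining textured regions.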
Minority Uigur women residing in Xinjiang, in the northwest of China, have a high incidence of cervical carcinoma (CC; 527/100 000) and are often diagnosed young. We favor the hypothesis that Uigur women may carry different genetic factor(s) making them more susceptible to CC than majority Han (Chinese) women living in the same region. Using PCR-restriction fragment length polymorphism analysis, we investigated associations of the p53 Arg72Pro polymorphism with CC in Uigur women compared with Han women. The study included 152 Uigur patients with CC and 110 controls, and 120 Han patients with CC and 122 controls. In Uigur women, CC was associated with p53 72Arg/Arg homozygosity (χ²=7.196, P<0.05) and with human papillomavirus-16 (χ²=7.177, P<0.05). In Han women, however, CC was associated with p53 72Pro/Pro homozygosity (χ²=8.231, P<0.05). These observations suggest that individuals with different genetic backgrounds carry different susceptibilities to CC, at least among the Uigur and Han women studied in China.
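The reported associations are Pearson chi-square tests on genotype contingency tables. A minimal version of the statistic, applied to made-up counts (not the study's data), looks like:

```python
import numpy as np

def chi_square(table):
    """Pearson chi-square statistic for an r x c contingency table."""
    t = np.asarray(table, dtype=float)
    # Expected counts under independence: row total * column total / grand total
    expected = np.outer(t.sum(axis=1), t.sum(axis=0)) / t.sum()
    return float(((t - expected) ** 2 / expected).sum())

# Hypothetical 2 x 2 table: rows = cases/controls, cols = Arg/Arg vs. other.
# With df = 1, a statistic above the 5% critical value 3.841 is read as
# a significant association, as in the chi-square values quoted above.
stat = chi_square([[80, 72], [40, 70]])
```

For a full 2 × 3 genotype table (Arg/Arg, Arg/Pro, Pro/Pro) the degrees of freedom would be (2−1)(3−1) = 2, with a 5% critical value of 5.991.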
<abstract><p>Image super-resolution reconstruction can improve image quality in the Internet of Things (IoT). It improves data transmission efficiency and is of great significance to data transmission encryption. To address the problem of low image quality in neural-network-based image super-resolution, a self-attention-based image reconstruction method is proposed for secure data transmission in IoT environments. The network model is improved: a residual network structure and sub-pixel convolution are used to extract image features, and a self-attention module is used to extract detailed information from the image. A generative adversarial approach and an image feature perception method are used to improve the reconstruction. Experimental results on a public data set show that the improved network model raises the quality of the reconstructed image and can effectively restore image details.</p></abstract>
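The sub-pixel convolution step mentioned above rearranges channels into space ("pixel shuffle"). A NumPy sketch of the rearrangement itself follows; the learned convolution that produces the C·r² channels is omitted:

```python
import numpy as np

def pixel_shuffle(x, r):
    """Depth-to-space: (C*r*r, H, W) -> (C, H*r, W*r), as in sub-pixel convolution."""
    c_rr, h, w = x.shape
    c = c_rr // (r * r)
    x = x.reshape(c, r, r, h, w)        # split channels into (C, r, r)
    x = x.transpose(0, 3, 1, 4, 2)      # interleave: (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)   # merge into the upscaled grid
```

Each group of r² channels at one low-resolution position becomes an r × r patch of the high-resolution output, so upscaling is learned in channel space rather than by interpolation.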
Ambulance services play a vital role in intelligent transportation systems (ITS). In an intelligent ambulance system, medical images can help doctors quickly and accurately understand a patient's condition during first aid. On the various display devices found in different kinds of ambulances, content-aware image adaptation can be used to better present medical images across different display resolutions and aspect ratios. Most existing methods focus mainly on the visual protection of salient areas, such as specific organs of the human body, with less attention paid to the visual effect of unimportant areas. However, the human visual system is more sensitive to the edges and contours of images, which are important for ambulance services. To improve the visual effect of adapted images, a contour-maintaining image adaptation method for efficient ambulance service in ITS is proposed here. First, the proposed method combines the weighted gradient, saliency, and edge maps into an importance map. Second, energy is optimized to reduce contour distortion and interruption according to the visual slope and curvature of contours and edges in non-salient areas. Finally, by applying the sub-procedure of a forward seam carving method, the optimal seams pass more evenly through the contour areas. Experimental results demonstrate that the proposed method is more effective than other, similar methods.
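The fusion-plus-seam-carving flow can be sketched as follows. The fusion weights are illustrative assumptions, and the dynamic program below is the classic backward-energy seam, not the paper's forward-energy variant:

```python
import numpy as np

def importance_map(gradient, saliency, edges, w=(0.4, 0.3, 0.3)):
    """Weighted fusion of the three maps; the weights are assumptions."""
    return w[0] * gradient + w[1] * saliency + w[2] * edges

def min_vertical_seam(energy):
    """Dynamic program for the lowest-energy 8-connected vertical seam."""
    h, w = energy.shape
    cost = energy.astype(float).copy()
    for i in range(1, h):
        left = np.roll(cost[i - 1], 1);  left[0] = np.inf    # upper-left neighbor
        right = np.roll(cost[i - 1], -1); right[-1] = np.inf  # upper-right neighbor
        cost[i] += np.minimum(np.minimum(left, cost[i - 1]), right)
    # Backtrack from the cheapest bottom cell.
    seam = [int(np.argmin(cost[-1]))]
    for i in range(h - 2, -1, -1):
        j = seam[-1]
        lo, hi = max(0, j - 1), min(w, j + 2)
        seam.append(lo + int(np.argmin(cost[i, lo:hi])))
    return seam[::-1]
```

Removing the returned seam (one column index per row) and repeating shrinks the image width while steering seams away from high-importance contour areas.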