Speckle contrast imaging enables rapid mapping of relative blood flow distributions using camera detection of back-scattered laser light. However, speckle-derived flow measures deviate from direct measurements of erythrocyte speeds by 47 ± 15% (n = 13 mice) in vessels of various calibers, and from estimates of volumetric flux by 91 ± 43% on average. We highlight and attempt to alleviate this discrepancy by accounting for the effects of multiple dynamic scattering, first with speckle imaging of microfluidic channels of varying sizes and then with speckle imaging of vascular flows in the cerebral cortex correlated against red blood cell (RBC) tracking. By revisiting the governing dynamic light scattering models, we test the ability to predict the degree of multiple dynamic scattering across vessels in order to correct for the observed discrepancies between relative RBC speeds and multi-exposure speckle imaging (MESI) estimates of inverse correlation times. The analysis reveals that traditional speckle contrast imagery of vascular flows measures neither volumetric flux nor particle speed, but rather the product of speed and vessel diameter. The corrected speckle estimates of relative RBC speeds deviate by an average of 10 ± 3% in vivo from those obtained by RBC tracking.
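Since the abstract states that the speckle-derived quantity scales with the product of RBC speed and vessel diameter, the implied correction is a division by diameter before normalizing to a reference vessel. The following is a minimal sketch of that idea; the function name and arguments are hypothetical, not the authors' published code.

```python
def corrected_relative_speed(ict, diameter, ict_ref, diameter_ref):
    """Correct a MESI inverse correlation time (ICT) for multiple
    dynamic scattering, assuming ICT ~ speed x vessel diameter
    (a sketch inferred from the abstract, not the paper's code).

    Relative RBC speed = (ICT / d) / (ICT_ref / d_ref).
    """
    return (ict / diameter) / (ict_ref / diameter_ref)

# Example: a vessel with twice the ICT but twice the diameter of the
# reference vessel has the same relative RBC speed.
print(corrected_relative_speed(ict=2.0, diameter=40.0,
                               ict_ref=1.0, diameter_ref=20.0))  # -> 1.0
```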
This article proposes an empirical test of whether aggregate economic behavior maps from the real world to the virtual. Transaction data from a large commercial virtual world (the first such data set provided to outside researchers) are used to calculate metrics for production, consumption, and money supply based on real-world definitions. Movements in these metrics over time were examined for consistency with common theories of macroeconomic change. The results indicated that virtual economic behavior follows real-world patterns. Moreover, a natural experiment occurred: a new version of the virtual world with the same rules came online during the study. The new world's macroeconomic aggregates quickly grew to be nearly exact replicas of those of the existing worlds.
This paper presents an image-processing algorithm customized for high-speed, real-time inspection of pavement cracking. In the algorithm, a pavement image is divided into grid cells of 8 x 8 pixels, and each cell is classified as a non-crack or crack cell using the grayscale information of its border pixels. Whether a crack cell can be regarded as a basic element (or seed) depends on its contrast with the neighboring cells. A number of crack seeds form a crack cluster if they fall on a linear string. A crack cluster corresponds to a dark strip in the original image that may or may not be a section of a real crack. Additional conditions to verify a crack cluster include requirements on the contrast, width, and length of the strip. If verified crack clusters are oriented in similar directions, they are joined into one crack. Because many operations are performed on crack seeds rather than on the original image, crack detection can be executed while the frame grabber is forming a new image, which permits a real-time, online pavement survey. Trial test results show good repeatability and accuracy when multiple surveys were conducted under different driving conditions.
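The grid-cell seed test described above can be sketched in a few lines. This is an illustrative reconstruction only: the classification rule, the 8-neighbor contrast test, and the `contrast` threshold are assumptions, not the paper's published values.

```python
import numpy as np

def find_crack_seeds(gray, cell=8, contrast=20):
    """Sketch of the grid-cell seed test (assumed rule, not the
    paper's exact thresholds): a cell becomes a crack seed when its
    mean gray level is darker than the average of its 8 neighboring
    cells by at least `contrast` gray levels."""
    h, w = gray.shape
    gh, gw = h // cell, w // cell
    # Mean gray level of every 8 x 8 cell (pavement bright, cracks dark).
    means = gray[:gh * cell, :gw * cell].reshape(gh, cell, gw, cell).mean(axis=(1, 3))
    seeds = np.zeros((gh, gw), dtype=bool)
    for i in range(1, gh - 1):
        for j in range(1, gw - 1):
            neighbors = means[i - 1:i + 2, j - 1:j + 2].sum() - means[i, j]
            seeds[i, j] = means[i, j] < neighbors / 8.0 - contrast
    return seeds

# Seeds would then be grouped into linear strings (crack clusters) and
# verified by strip contrast, width, and length as described above.
```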
Interdisciplinary teams are assembled in scientific research to solve complex problems. Given their increasing importance, it is not surprising that considerable attention has focused on processes of collaboration within interdisciplinary teams. Despite such efforts, we know less about the factors affecting the assembly of such teams in the first place. In this paper, we investigate the structure and success of interdisciplinary scientific research teams. We examine assembly factors using a sample of 1,103 grant proposals submitted to two National Science Foundation interdisciplinary initiatives during a 3-year period, including both awarded and non-awarded proposals. The results indicate that individuals' likelihood of collaborating on a proposal is higher among those with longer tenure, lower institutional tier, lower H-index, and higher levels of prior co-authorship and citation relationships. Successful proposals, however, show somewhat different relational patterns: the likelihood of collaboration is higher among those with lower institutional tier, lower H-index, female gender, and higher levels of prior co-authorship, but lower levels of prior citation relationships.
Cholesterol is an important lipid molecule in cell membranes and lipoproteins. It is also a precursor of steroid hormones, bile acids, and vitamin D. Abnormal levels of cholesterol or its precursors have been observed in various human diseases, such as heart disease, stroke, type II diabetes, brain diseases, and many others. Accurate quantification of cholesterol is therefore important for individuals at increased risk for these diseases. Multiple analytical methods have been developed for cholesterol analysis, including classical chemical methods, enzymatic assays, gas chromatography (GC), liquid chromatography (LC), and mass spectrometry (MS). Ambient ionization mass spectrometry (AIMS), which operates at atmospheric pressure and requires only minimal sample pretreatment for real-time, in situ, rapid interrogation of the sample, has also been employed for cholesterol quantification. In this review, we summarize the most prevalent methods for cholesterol quantification in biological samples and foods. We also highlight several newer technologies, such as AIMS, used as alternative methods to measure cholesterol and potentially serving as next-generation platforms. Representative examples of molecular imaging of cholesterol in tissue sections are also included in this review article.
Cross-sectional analysis of cotton fibers provides direct, accurate measurements of fiber fineness and maturity, which are often regarded as the reference data for validating or calibrating other, indirect measurements of these important cotton properties. Despite this importance, cross-sectional methods using image analysis have not been broadly applied to cotton quality evaluation because of the tedious procedures required both to prepare cotton samples and to process cross-sectional images. This paper illustrates image-processing procedures dedicated to cotton cross-sectional analysis that increase the efficiency and accuracy of fiber separation and feature extraction. These procedures greatly improve the automation of processing cotton cross-sectional images and increase the number of analyzable fibers per image. The cross-sectional data of cotton fibers also correlate well with longitudinal data and with data from the Advanced Fiber Information System.

A cotton cross section contains measurable information directly related to the maturity of the fiber, and cross-sectional measurements of cotton maturity may be used as a reference when other methods need to be calibrated. Much research has been conducted with image analysis technology to measure cotton maturity and other parameters from fiber cross sections [4-6, 11-15]. The success of a cross-sectional method using image analysis relies largely on two techniques: fiber cross-sectioning and image segmentation.

Cross-sectioning is the most important step in obtaining analyzable images of fibers; grinding and cutting are the two general methods. In grinding, a bundle of fibers embedded in a mixture of polymer resin and hardener is hardened, ground, and then polished, and the surface containing the fiber cross sections is imaged on a microscope by reflected light [9]. There are many different ways of cutting a thin slice of fibers perpendicular to their long axes [1, 3]. A quick embedding method specifically for cotton fibers was established by researchers at the USDA Southern Regional Research Center (SRRC) [3]: a bundle of fibers is embedded in a methacrylate medium, polymerized in a UV reactor, and cut into 1-3 μm slices with a microtome. This sectioning method greatly improves the separability and contrast of individual fibers in images captured by transmitted light.

Image segmentation is a computational process that separates cotton cross sections from the image background and from one another. Segmentation results directly influence the efficiency and accuracy of cross-sectional measurements. Due to variations in the cross-sectional shape and thickness of the sliced samples, fibers in different regions may exhibit different levels of contrast and focus in an image. There are always cross sections that contact or overlap others in the image, and some appear damaged by scratching from the cutting knife. Cotton cross sections can have convex or concave boundaries and hollow or solid cores, making many p...
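For the segmentation step described above, a distance-transform watershed is a common recipe for separating touching cross sections. The sketch below is a generic illustration under that assumption; the paper's own pipeline is customized well beyond this, and the parameter values here are arbitrary.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def segment_fiber_sections(gray):
    """Generic distance-transform watershed for touching fiber cross
    sections (an assumed recipe, not the paper's algorithm)."""
    # Fibers appear dark against a bright transmitted-light background.
    binary = gray < threshold_otsu(gray)
    binary = ndi.binary_fill_holes(binary)      # close hollow lumens
    distance = ndi.distance_transform_edt(binary)
    # One marker per local distance maximum, i.e. per fiber core.
    coords = peak_local_max(distance, min_distance=5, labels=binary)
    markers = np.zeros(distance.shape, dtype=int)
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
    # Flood from the markers; touching fibers split at distance ridges.
    return watershed(-distance, markers, mask=binary)
```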