The motions of objects falling through air and a liquid medium were captured by a conventional digital camera and analyzed with Tracker, an open-source video analysis program. The position of the moving object was evaluated every 33 ms from a series of images, and the velocity was averaged from the change in position over each interval. In the free-fall-in-air experiment, the displacement of a ball was proportional to the square of time, and agreement with theory was indicated by deriving the acceleration due to gravity with an acceptable level of accuracy. In addition, the effects of drop height, camera distance, and the colors of the ball and background were investigated. In the case of falling through glycerol, the average velocity of a metal bead initially increased with falling time until reaching a constant terminal velocity.
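The frame-by-frame analysis described above can be sketched as follows; this is a minimal illustration using synthetic free-fall positions, not the study's actual Tracker data, with the 33 ms frame interval taken from the abstract.

```python
# Sketch of the analysis: positions sampled every 33 ms, velocity estimated
# as the average change in position per interval, and g recovered from the
# proportionality s = (1/2) g t^2. Data below are synthetic, not measured.

DT = 0.033  # frame interval in seconds (~30 fps)
G = 9.81    # acceleration due to gravity, m/s^2

# Synthetic drop positions s_i = (1/2) g t_i^2 at each frame
times = [i * DT for i in range(10)]
positions = [0.5 * G * t ** 2 for t in times]

# Average velocity over each interval, as computed between video frames
velocities = [(positions[i + 1] - positions[i]) / DT
              for i in range(len(times) - 1)]

# Recover g from the least-squares slope of s against t^2 through the origin
num = sum(s * t ** 2 for s, t in zip(positions, times))
den = sum(t ** 4 for t in times)
g_est = 2 * num / den
```

With noise-free synthetic data the recovered `g_est` matches the input value exactly; with real video data the fit quality reflects tracking error.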
The potential of microwave drying for producing rehydrated foods is demonstrated with stink beans (Parkia speciosa), pungent legumes of Africa and Asia. Compared with stink beans dehydrated by convective drying and freeze drying, the microwave-dried products exhibit higher moisture contents, but their distribution of microscopic pores leads to good rehydration characteristics. Dehydration by microwave drying is also achieved in a much shorter period than is typical of freeze drying. By decreasing the pressure during microwave drying, the drying time can be further reduced to 6 h, comparable to convective drying, and the moisture content drops to 11%. However, the rehydration time remains around 65 min for products from both ambient-pressure and low-pressure (400 Pa) microwave drying. The rehydration period is successfully reduced to 30 min by raising the water temperature to 70 °C. The results indicate that microwave drying does not affect the crude protein content and that the rehydrated products are comparable to fresh stink beans. These findings show that microwave drying is an applicable technology for both manufacturers and consumers, with acceptable drying times and rehydration characteristics.
Optical character recognition (OCR) is a technology for converting paper-based documents into digital form. This research studies the extraction of characters from Thai vehicle registration certificates using the Google Cloud Vision API and the Tesseract OCR engine, and compares the recognition performance of the two. The dataset comprised 84 color image files spanning three image sizes/resolutions and five image characteristics. To compare image types, grayscale and binary images were also converted from the color images. Furthermore, three pre-processing techniques, sharpening, contrast adjustment, and brightness adjustment, were applied to enhance image quality before running the two OCR engines. Recognition performance was evaluated in terms of accuracy and readability. The results showed that the Google Cloud Vision API works well for Thai vehicle registration certificates, with an accuracy of 84.43%, whereas Tesseract achieved an accuracy of 47.02%. The highest accuracy came from the color images at 1024×768 px and 300 dpi, with sharpening and brightness adjustment as pre-processing techniques. In terms of readability, the Google Cloud Vision API also produced more readable output than Tesseract. The proposed conditions support the implementation of a Thai vehicle registration certificate recognition system.
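A common way to score OCR accuracy at the character level, consistent with the comparison above, is one minus the edit distance from the ground truth normalized by its length. The sketch below is an assumption about the metric (the abstract does not specify its exact formula), and the sample strings are illustrative, not from the paper's dataset.

```python
# Hypothetical sketch of a character-level OCR accuracy metric:
# accuracy = 1 - edit_distance(recognized, truth) / len(truth).

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via dynamic programming (two-row version)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def char_accuracy(recognized: str, ground_truth: str) -> float:
    """Fraction of the ground-truth characters correctly recovered."""
    dist = edit_distance(recognized, ground_truth)
    return max(0.0, 1.0 - dist / len(ground_truth))

# One wrong character out of 8 in an illustrative Thai plate string
acc = char_accuracy("1กข 2345", "1กข 2346")  # -> 0.875
```

The same metric can be applied field by field (plate number, owner name, etc.) to localize which parts of the certificate each engine recognizes poorly.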
The color of a textile material is the first attribute and a key component considered by consumers when purchasing cloth. However, measuring fabric color is a challenging task in the textile fabrication process. Colorimeters aid this measurement by adding objective assessments to a generally subjective process, and colorimeter applications on Android smartphones provide a simple alternative to dedicated colorimetric devices. The purpose of this study was to determine the suitability of a smartphone colorimetry application for Batik fabric color measurement. The colors of various Batik fabric images in the International Commission on Illumination L*a*b* color space, which includes all colors visible to the human eye, obtained by a spectrophotometer and by the colorimeter application were compared. Data from Batik fabric images acquired at three distances of 10, 20, and 30 cm were analyzed. The color differences between the colorimeter and spectrophotometer results varied depending on the distance from the target. The ΔE*ab and ΔE*ch metrics were used to evaluate the color differences between the reference and sample fabric colors. The lowest mean ΔE*ab value was 12.11 ± 5.29, measured 20 cm away from each fabric. The mean ΔE*ab values between pairs of color symbols from the colorimeter application were comparable to those obtained by the spectrophotometer, and ΔE*ab proved more suitable for fabric color measurement than ΔE*ch. The results indicate that smartphone colorimetry provides reasonable accuracy, is simple enough for non-experts, is suitable for fabric color matching, and can satisfy fabric market requirements.
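The ΔE*ab metric used above is the Euclidean distance between two colors in CIELAB space (the CIE76 formula). A minimal sketch, using illustrative L*a*b* values rather than measurements from the study:

```python
import math

# CIE76 color difference: dE*ab = sqrt((dL*)^2 + (da*)^2 + (db*)^2).
# The two readings below are hypothetical, not data from the paper.

def delta_e_ab(lab1, lab2):
    """Euclidean distance between two (L*, a*, b*) triples."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

reference = (52.0, 41.0, 28.0)  # e.g. spectrophotometer reading (hypothetical)
sample = (55.0, 37.0, 28.0)     # e.g. colorimeter-app reading (hypothetical)

de = delta_e_ab(reference, sample)  # sqrt(9 + 16 + 0) = 5.0
```

A rule of thumb in colorimetry is that ΔE*ab near 1 is at the threshold of perceptibility, which puts the study's best mean of about 12 well within the "visibly different but usable for matching" range the abstract describes.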
Digital image processing has increasingly been implemented in nanostructural analysis and is an ideal tool for characterizing the morphology and position of self-assembled magnetic nanoparticles for high-density recording. In this work, magnetic nanoparticles were synthesized by a modified polyol process using Fe(acac)3 and Pt(acac)2 as starting materials. Transmission electron microscopy (TEM) images of the as-synthesized products were inspected using an image processing procedure. Grayscale images (800 × 800 pixels, 72 dots per inch) were converted to binary images using Otsu's thresholding. Each particle was then detected using a closing operation with a 2-pixel disk structuring element, Canny edge detection, and an edge-linking algorithm, and its centroid, diameter, and area were subsequently evaluated. The degree of polydispersity of the magnetic nanoparticles can then be compared using the size distribution obtained from this image processing procedure.
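The first step of the pipeline, Otsu's thresholding, picks the gray level that maximizes the between-class variance of the histogram. A pure-Python sketch on a toy bimodal "image" (not a real TEM micrograph):

```python
# Otsu's method: choose the threshold maximizing between-class variance
# w0 * w1 * (mu0 - mu1)^2 over the grayscale histogram. The pixel list
# below is a synthetic bimodal example, not actual TEM data.

def otsu_threshold(pixels, levels=256):
    """Return the gray level that best separates background and foreground."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = 0      # cumulative background pixel count
    sum0 = 0.0  # cumulative background intensity sum
    for t in range(levels):
        w0 += hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0 = sum0 / w0
        mu1 = (sum_all - sum0) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Toy bimodal image: dark carbon-film background vs. bright particle pixels
pixels = [40] * 50 + [45] * 30 + [200] * 15 + [210] * 5
t = otsu_threshold(pixels)
binary = [1 if p > t else 0 for p in pixels]  # 1 marks particle pixels
```

On the binarized image, the subsequent closing, edge-detection, and edge-linking steps then isolate each connected particle before its centroid and size are measured.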