InGaN light-emitting diodes (LEDs) grown with an asymmetric multiple quantum well (MQW) structure are proposed for use in an optical link with an avalanche photodiode (APD)-based receiver. Compared with the high photoresponse of red AlGaInP LEDs detected by APDs, the proposed blue LEDs provide higher light output and greater system bandwidth for directed line-of-sight optical links passing through a 100-cm-long water tank. The improvement arises because the nonuniform carrier distribution within the InGaN MQWs is mitigated by a thin GaN barrier near the n-GaN layer, which facilitates hole transport. In addition, the bandwidth degradation caused by APD module saturation is avoided by using these blue LEDs, enabling a 300 Mbit/s LED-based underwater data link. Under illumination at zero bias, the proposed InGaN LEDs exhibit a peak responsivity of 0.133 at λ = 370 nm, an ultraviolet (UV)-to-visible rejection ratio of 4849, and a 3-dB cutoff frequency of 33.3 MHz. Using violet UV laser diodes as the optical transmitter and the proposed LEDs as the receiver, an underwater optical link (L = 100 cm) with a data transmission rate of up to 130 Mbit/s and a bit error rate of 4.2 × 10⁻⁹ is also demonstrated.
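As a rough illustration of the detector figures quoted above, the sketch below derives the implied visible-band responsivity from the stated rejection ratio and estimates the photocurrent for an arbitrary incident power. The A/W unit for responsivity and the 1 µW incident power are assumptions, not values from the abstract.

```python
# Back-of-envelope figures for the InGaN LED used as a photodetector.
# From the abstract: peak responsivity 0.133 (assumed A/W) at 370 nm,
# UV-to-visible rejection ratio 4849. Incident power is an assumption.

R_UV = 0.133        # peak responsivity at λ = 370 nm (assumed A/W)
REJECTION = 4849    # UV-to-visible rejection ratio

R_visible = R_UV / REJECTION   # implied visible-band responsivity
P_incident = 1e-6              # assumed incident power: 1 µW

I_photo = R_UV * P_incident    # photocurrent under 370 nm illumination

print(f"Visible responsivity: {R_visible:.2e} A/W")
print(f"Photocurrent at 1 uW (370 nm): {I_photo:.2e} A")
```

The large rejection ratio is what makes the device a solar-blind UV detector: visible light produces roughly three to four orders of magnitude less photocurrent than the 370 nm signal.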
Automatically describing the content of an image is an interesting and challenging task in artificial intelligence. In this paper, an enhanced image captioning model, comprising object detection, color analysis, and caption generation, is proposed to automatically generate textual descriptions of images. In the encoder–decoder captioning model, VGG16 serves as the encoder and a long short-term memory (LSTM) network with attention serves as the decoder. In addition, Mask R-CNN with OpenCV is used for object detection and color analysis. The generated caption and the color-recognition results are then integrated to provide more detailed descriptions of images. Finally, the generated sentence is converted into speech. Validation results show that the proposed method provides more accurate descriptions of images.
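The attention step in such an encoder–decoder captioner can be sketched as follows: at each decoding step, the LSTM hidden state scores the spatial feature vectors produced by the CNN encoder, and a weighted sum of those features becomes the context vector for predicting the next word. This is a minimal NumPy illustration of additive (Bahdanau-style) attention; the dimensions mimic VGG16's final conv feature map, and the random weights stand in for trained parameters (all names here are hypothetical, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions mimicking VGG16's last conv feature map:
# 14 x 14 = 196 spatial locations, 512 channels each.
num_locations, feat_dim, hidden_dim, attn_dim = 196, 512, 256, 128

features = rng.standard_normal((num_locations, feat_dim))  # encoder output
h = rng.standard_normal(hidden_dim)                        # LSTM hidden state

# Random matrices standing in for learned attention parameters.
W_feat = rng.standard_normal((attn_dim, feat_dim)) * 0.01
W_hidden = rng.standard_normal((attn_dim, hidden_dim)) * 0.01
v = rng.standard_normal(attn_dim) * 0.01

# Additive attention: score each spatial location against the hidden state.
scores = np.tanh(features @ W_feat.T + h @ W_hidden.T) @ v  # shape (196,)

# Softmax over locations gives the attention weights.
weights = np.exp(scores - scores.max())
weights /= weights.sum()

# Context vector: attention-weighted sum of encoder features, which the
# LSTM decoder consumes at each word-generation step.
context = weights @ features  # shape (512,)

print(weights.shape, context.shape)
```

In the full model, `context` would be concatenated with the word embedding and fed into the LSTM cell, and the weights can be visualized as a heat map showing where the model "looks" while emitting each word.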