We present a weakly supervised model that jointly performs both semantic and instance segmentation, a particularly relevant problem given the substantial cost of obtaining pixel-perfect annotations for these tasks. In contrast to many popular instance segmentation approaches based on object detectors, our method does not predict any overlapping instances. Moreover, we are able to segment both "thing" and "stuff" classes, and thus explain all the pixels in the image. "Thing" classes are weakly supervised with bounding boxes, and "stuff" classes with image-level tags. We obtain state-of-the-art results on Pascal VOC for both full and weak supervision, with the weakly supervised model achieving about 95% of fully-supervised performance. Furthermore, we present the first weakly supervised results on Cityscapes for both semantic and instance segmentation. Finally, we use our weakly supervised framework to analyse the relationship between annotation quality and predictive performance, which is of interest to dataset creators.
Object parsing, the task of decomposing an object into its semantic parts, has traditionally been formulated as a category-level segmentation problem. Consequently, when there are multiple objects in an image, current methods can neither count the number of objects in the scene nor determine which part belongs to which object. We address this problem by segmenting the parts of objects at an instance level, such that each pixel in the image is assigned a part label as well as the identity of the object it belongs to. Moreover, we show how this approach also benefits segmentation at coarser granularities. Our proposed network is trained end-to-end given detections, and begins with a category-level segmentation module. Thereafter, a differentiable Conditional Random Field, defined over a variable number of instances for every input image, reasons about the identity of each part by associating it with a human detection. In contrast to other approaches, our method can handle a varying number of people in each image, and our holistic network produces state-of-the-art results in instance-level part and human segmentation, together with competitive results in category-level part segmentation, all achieved by a single forward pass through our neural network.
Multiple-layer InAs/GaAs quantum dot (QD) laser structures were etched to remove the p-side AlGaAs cladding layers in order to investigate their temperature-dependent photoluminescence (PL) characteristics. Four QD samples were prepared by molecular beam epitaxy (MBE) and a postgrowth annealing process for comparison: undoped as-grown QDs, p-doped as-grown QDs, undoped annealed QDs, and p-doped annealed QDs. Among them, the modulation p-doped QD samples exhibit much weaker temperature dependence of the PL spectra and notable insensitivity to intermixing compared with the undoped ones. This is attributed to the effects of modulation p-doping, which can inhibit the thermal broadening of holes across their closely spaced energy levels and significantly suppress In/Ga interdiffusion between the QDs and their surrounding matrix. These results provide greater freedom in the choice of MBE growth conditions for high-quality active regions and claddings of QD laser diodes. The superior features of the modulation p-doped QD materials transfer naturally to the laser devices. Continuous-wave ground-state (GS) lasing has been realized in both p-doped QD Fabry-Perot (F-P) and laterally coupled distributed-feedback (LC-DFB) narrow-ridge lasers with very short cavity lengths and without facet coatings: 1315 nm GS lasing is obtained in an F-P laser with a 400 μm cavity length, while single-longitudinal-mode lasing with a very large tunable range of 140 nm and a side-mode suppression ratio of 51 dB is achieved in an LC-DFB laser. This work demonstrates the great development potential of InAs/GaAs QD lasers for applications in high-speed fiber-optic communication.