In this paper, we study the classification problem in which we have access to an easily obtainable surrogate for true labels, namely complementary labels, which specify classes that observations do not belong to. Let Y and Ȳ be the true and complementary labels, respectively. We first model the annotation of complementary labels via transition probabilities P(Ȳ = i|Y = j), i ≠ j ∈ {1, · · · , c}, where c is the number of classes. Previous methods implicitly assume that P(Ȳ = i|Y = j), ∀i ≠ j, are identical, which is not true in practice because humans are biased toward their own experience. For example, as shown in Figure 1, if an annotator is more familiar with monkeys than prairie dogs when providing complementary labels for meerkats, she is more likely to employ "monkey" as a complementary label. We therefore reason that the transition probabilities will differ. In this paper, we propose a framework that contributes three main innovations to learning with biased complementary labels: (1) it estimates transition probabilities without bias; (2) it provides a general method to modify traditional loss functions and extends standard deep neural network classifiers to learn with biased complementary labels; (3) it theoretically ensures that the classifier learned with complementary labels converges to the optimal one learned with true labels. Comprehensive experiments on several benchmark datasets validate the superiority of our method over current state-of-the-art methods.
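The loss modification described in contribution (2) can be illustrated with a generic forward-correction sketch: the classifier's estimate of P(Y|x) is pushed through the transition matrix to obtain P(Ȳ|x), and the negative log-likelihood of the observed complementary label is minimized. This is a minimal NumPy illustration under assumed conventions, not the paper's exact algorithm; the function name and the row-stochastic layout of Q are our own choices.

```python
import numpy as np

def forward_corrected_nll(logits, comp_labels, Q):
    """Negative log-likelihood of complementary labels after pushing the
    classifier's class posterior through a transition matrix Q, where
    Q[j, i] = P(Ybar = i | Y = j) (rows sum to 1, zero diagonal).

    logits:      (n, c) raw classifier scores
    comp_labels: (n,) complementary label indices
    """
    # numerically stable softmax -> estimated P(Y = j | x)
    z = logits - logits.max(axis=1, keepdims=True)
    p_true = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    # P(Ybar = i | x) = sum_j P(Y = j | x) * Q[j, i]
    p_comp = p_true @ Q
    n = len(comp_labels)
    return -np.log(p_comp[np.arange(n), comp_labels] + 1e-12).mean()

# Uniform transition matrix: the unbiased special case assumed by
# previous methods; the paper's point is that Q is biased in practice.
c = 4
Q = (np.ones((c, c)) - np.eye(c)) / (c - 1)
```

With a biased Q estimated from data in place of the uniform one, the same corrected loss applies unchanged.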
In this paper, we study the mixture proportion estimation (MPE) problem in a new setting: given samples from the mixture and the component distributions, we identify the proportions of the components in the mixture distribution. To address this problem, we make use of a linear independence assumption, i.e., the component distributions are linearly independent of each other, which is much weaker than the assumptions exploited in previous MPE methods. Based on this assumption, we propose a method (1) that uniquely identifies the mixture proportions, (2) whose output provably converges to the optimal solution, and (3) that is computationally efficient. We show the superiority of the proposed method over state-of-the-art methods in two applications, learning with label noise and semi-supervised learning, on both synthetic and real-world datasets.
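To make the MPE setting concrete, here is a minimal sketch of recovering mixture proportions from binned density estimates by simplex-constrained least squares. This is a generic illustration of why linear independence of the components makes the proportions identifiable, not the paper's proposed estimator; all function names are our own.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1 - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
    theta = (1 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0.0)

def estimate_proportions(mix_hist, comp_hists, n_iter=5000):
    """Estimate mixture proportions kappa from binned densities.

    mix_hist:   (b,) histogram of the mixture sample
    comp_hists: (k, b) histograms of the component samples
    Solves min ||kappa @ comp_hists - mix_hist||^2 over the simplex
    by projected gradient descent; kappa is unique when the rows of
    comp_hists are linearly independent.
    """
    A = comp_hists
    kappa = np.full(A.shape[0], 1.0 / A.shape[0])
    step = 1.0 / (np.linalg.norm(A @ A.T, 2) + 1e-12)
    for _ in range(n_iter):
        grad = (kappa @ A - mix_hist) @ A.T
        kappa = project_simplex(kappa - step * grad)
    return kappa
```

If the component histograms were linearly dependent, multiple kappa vectors would reproduce the mixture exactly, which is precisely the identifiability failure the assumption rules out.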
and various substrates is essential for graphene-based nanomechanical and nanoelectric devices. [12,13] In recent years, a large number of experimental studies measuring the interfacial adhesion energies between mechanically exfoliated or chemical-vapor-deposited (CVD) graphene and support substrates have been reported, using the pressurized blister test with intercalated nanoparticles, [14] inflated air, [15,16] and deionized water, [17] the double cantilever beam fracture mechanics test, [18] pleat defect measurement, [19] atomic force microscope (AFM) nanoindentation, [20][21][22] and optical fiber Fabry-Perot interference. [23] Although discrepancies exist among the measurement results, due to the nonuniformity of fabricated graphene membranes and the principle errors of the various measurement methods, these experimental studies have significantly advanced the understanding of graphene adhesion behaviors. Unfortunately, such approaches to determining the adhesion energy between graphene and a substrate typically involve specific measurement setups, professional sample fabrication or test procedures, and relatively complicated analytical models for the adhesion energy. Even for the AFM nanoindentation method, which has recently been used more widely, the extremely thin sheet is hard to handle and prone to damage by clamps and fixtures, as in the conventional peel test, [14] and the effect of the surface roughness of the spherical AFM tip must be addressed by the modified Rumpf model. [21] Moreover, pull-off instability generally disturbs the tip-sample interaction.
[24] For this reason, a simple, general, and direct measurement of the adhesion energy of graphene membranes on a variety of substrates is highly necessary. In this paper, we demonstrated an improved nanoscale quantification of the adhesion energy of CVD graphene membranes of different layer thicknesses on a SiO 2 substrate, by directly measuring the size (diameter and height) of graphene bubbles covering single and dual gold nanoparticles, which show regular circular and elliptical geometries, respectively. The presented method differs from the aforementioned ref. [14], in which only isolated single particles with regular circular blister geometries are usable. The resulting disadvantage there is that the sample fabrication process may need to be repeated many times to achieve satisfactory results, by locating those tiny regular single nanoparticles via scanning electron microscopy (SEM). However, regular blister geometries formed by two particles can also contribute to the solution of the interfacial adhesion energy of graphene on substrates, which extends the applicability and robustness of the presented method. More generally, characterizing the van der Waals adhesion behavior at the interface between graphene and support substrates is important for assessing the performance of graphene-based sensors. Here an improved, general, and direct method of determining the adhesion energies of monolayer/few-layer/multilayer graphene sheets on silicon wafers is demonstrated.
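To show how bubble geometry translates into an adhesion energy, the sketch below uses the membrane-limit scaling Γ = α·E₂D·(h/a)⁴, where E₂D is the 2D elastic stiffness, h the bubble height, and a the bubble radius. The dimensionless coefficient α depends on the mechanics model (profile shape, Poisson ratio, clamping conditions) and must be taken from the paper's own analysis; the value used in the example is purely illustrative, as is the function name.

```python
def adhesion_energy(h, a, E2d, alpha):
    """Membrane-limit estimate of the adhesion energy (J/m^2) of a
    particle-induced graphene bubble.

    h:     bubble height (m)
    a:     bubble radius (m)
    E2d:   2D elastic stiffness of the sheet (N/m)
    alpha: dimensionless model-dependent coefficient (illustrative here)
    """
    return alpha * E2d * (h / a) ** 4

# Illustrative numbers only: monolayer graphene E2d ~ 340 N/m,
# a 30 nm tall bubble of 300 nm radius, alpha set to 1 as a placeholder.
gamma = adhesion_energy(30e-9, 300e-9, 340.0, 1.0)
```

The strong h⁴/a⁴ dependence is why accurate measurement of the bubble diameter and height dominates the error budget of such methods.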
Transfer learning aims to improve learning in a target domain by borrowing knowledge from a related but different source domain. To reduce the distribution shift between source and target domains, recent methods have focused on exploring invariant representations that have similar distributions across domains. However, when learning this invariant knowledge, existing methods assume that the labels in the source domain are uncontaminated, while in reality we often have access only to source data with noisy labels. In this paper, we first show how label noise adversely affects the learning of invariant representations and the correction of label shift in various transfer learning scenarios. To reduce these adverse effects, we propose a novel Denoising Conditional Invariant Component (DCIC) framework, which provably ensures (1) extracting invariant representations given examples with noisy labels in the source domain and unlabeled examples in the target domain, and (2) estimating the label distribution in the target domain without bias. Experimental results on both synthetic and real-world data verify the effectiveness of the proposed method.
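Claim (2) concerns estimating the target label distribution under label shift. A standard black-box sketch of that sub-problem, shown below, inverts a confusion matrix estimated on source data against the classifier's prediction distribution on unlabeled target data. This illustrates the generic label-shift correction, not the DCIC procedure itself; names and conventions are our own.

```python
import numpy as np

def estimate_target_label_dist(conf_matrix, target_pred_dist):
    """Black-box estimate of the target label distribution under label shift.

    conf_matrix:      (c, c) with C[i, j] = P(predict i | true label j),
                      estimated on held-out (clean) source data
    target_pred_dist: (c,) distribution of the classifier's predictions
                      on unlabeled target data
    Under label shift, mu = C @ q; solving for q recovers the target
    label distribution (clip + renormalize guards finite-sample noise).
    """
    q = np.linalg.solve(conf_matrix, target_pred_dist)
    q = np.clip(q, 0.0, None)
    return q / q.sum()
```

Label noise in the source data corrupts the estimate of C itself, which is exactly the failure mode the abstract identifies and the denoising framework is designed to remove.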