In this paper, we propose a novel deep learning architecture for multi-label zero-shot learning (ML-ZSL), which predicts multiple unseen class labels for each input instance. Inspired by the way humans exploit semantic knowledge about objects of interest, our framework incorporates knowledge graphs that describe the relationships between labels. The model learns an information propagation mechanism over the semantic label space, which can be applied to model the interdependencies between seen and unseen class labels. By exploiting structured knowledge graphs for visual reasoning in this way, our model can be applied to both standard multi-label classification and ML-ZSL tasks, achieving performance comparable to or better than state-of-the-art approaches.
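The label-space propagation described above can be illustrated with a minimal, hypothetical sketch: label embeddings are repeatedly mixed with their neighbors' states over a label knowledge graph, and the propagated states then score an image feature. The mixing rule, step count, and dot-product scoring here are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def propagate_labels(label_emb, adj, steps=2):
    """Propagate label representations over a label-relation graph.

    label_emb : (L, D) initial semantic label embeddings (e.g. word vectors)
    adj       : (L, L) adjacency matrix of the label knowledge graph
    """
    # Row-normalize so each label averages over its neighbors.
    deg = adj.sum(axis=1, keepdims=True)
    norm_adj = adj / np.maximum(deg, 1.0)
    h = label_emb
    for _ in range(steps):
        # Mix each label's current state with its neighbors' states.
        h = 0.5 * h + 0.5 * (norm_adj @ h)
    return h

def score_labels(image_feat, label_states):
    """Dot-product compatibility between an image feature and each label."""
    return label_states @ image_feat
```

In a real system the mixing weights would be learned, and unseen labels would receive scores purely through their graph connections to seen labels.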
We study the effect of the isovector-scalar meson δ on the equation of state (EOS) of neutron-star matter in strong magnetic fields. The EOS of neutron-star matter and the nucleon effective masses are calculated in the framework of Lagrangian field theory, solved in the mean-field approximation. The numerical results show that the δ field leads to a remarkable splitting of the proton and neutron effective masses. The strength of the δ field decreases with increasing magnetic field and becomes small at ultrastrong fields. The proton effective mass is strongly influenced by magnetic fields, while the effect of magnetic fields on the neutron effective mass is negligible. After the δ field is included, the EOS turns out to be stiffer at B < 10^15 G but softer at stronger magnetic fields. The anomalous magnetic moment (AMM) terms affect the system only at ultrastrong magnetic fields (B > 10^19 G). In the range 10^15 G to 10^18 G, the properties of neutron-star matter are found to be similar to those without magnetic fields.
For most entity disambiguation systems, the secret recipe is the feature representation for mentions and entities, which is usually based on Bag-of-Words (BoW). BoW has several drawbacks: (1) it ignores the intrinsic meaning of words/entities; (2) it often yields high-dimensional vector spaces and expensive computation; (3) handcrafted representations differ greatly across applications, with no general design guideline. In this paper, we propose a different approach, named EDKate. We first learn low-dimensional continuous vector representations for entities and words by jointly embedding the knowledge base and text in the same vector space. We then use these embeddings to design simple but effective features and build a two-layer disambiguation model. Extensive experiments on real-world data sets show that (1) the embedding-based features are very effective: even a single embedding-based feature can beat a combination of several BoW-based features; (2) the advantage is even more pronounced on a difficult subset where the mention-entity prior does not work well; (3) the proposed embedding method is much better than naive applications of off-the-shelf embedding algorithms; and (4) comparisons of EDKate with existing methods/systems are also positive.
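As a hypothetical illustration of an embedding-based feature of this general kind (not EDKate's actual feature set), one can score each candidate entity by the cosine similarity between its embedding and the averaged embeddings of the mention's context words:

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity, with a small epsilon to avoid division by zero."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def context_entity_feature(context_word_vecs, entity_vec):
    """A single embedding-based feature: cosine similarity between the
    averaged context word vectors and a candidate entity's vector."""
    ctx = np.mean(context_word_vecs, axis=0)
    return cosine(ctx, entity_vec)

def rank_candidates(context_word_vecs, candidates):
    """candidates: dict name -> vector; return names ranked by the feature."""
    scored = {name: context_entity_feature(context_word_vecs, vec)
              for name, vec in candidates.items()}
    return sorted(scored, key=scored.get, reverse=True)
```

This only works when words and entities live in the same vector space, which is exactly what the joint embedding step provides.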
The PHENIX Collaboration at the Relativistic Heavy Ion Collider has measured open heavy flavor production in minimum-bias Au+Au collisions at √s_NN = 200 GeV via the yields of electrons from semileptonic decays of charm and bottom hadrons. Previous heavy flavor electron measurements indicated substantial modification of the momentum distribution of the parent heavy quarks due to the quark-gluon plasma created in these collisions. For the first time, using the PHENIX silicon vertex detector for precision displaced tracking, the relative contributions of charm and bottom hadrons to these electrons are measured in Au+Au collisions as a function of transverse momentum. We compare the fraction of electrons from bottom hadrons to previously published results extracted from electron-hadron correlations in p+p collisions at √s = 200 GeV and find the fractions to be similar, within the large uncertainties of both measurements, for pT > 4 GeV/c. We use the bottom electron fractions in Au+Au and p+p, together with the previously measured heavy flavor electron R_AA, to calculate the R_AA for electrons from charm and bottom hadron decays separately. We find that electrons from bottom hadron decays are less suppressed than those from charm in the region 3 < pT < 4 GeV/c.
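The separation step can be sketched under the standard assumption that the inclusive heavy-flavor electron R_AA is a bottom-fraction-weighted mix of the charm and bottom components; the function below inverts that relation (the numerical inputs in the test are illustrative, not PHENIX values):

```python
def raa_charm_bottom(f_b_AA, f_b_pp, raa_hf):
    """Split the inclusive heavy-flavor electron R_AA into charm and bottom
    components, assuming the standard decomposition

        R_AA^(b->e) = (f_b^AA / f_b^pp) * R_AA^HF
        R_AA^(c->e) = ((1 - f_b^AA) / (1 - f_b^pp)) * R_AA^HF

    where f_b is the bottom-electron fraction in each collision system."""
    raa_b = (f_b_AA / f_b_pp) * raa_hf
    raa_c = ((1.0 - f_b_AA) / (1.0 - f_b_pp)) * raa_hf
    return raa_c, raa_b
```

By construction, weighting the two components by the p+p fractions recovers the inclusive value: f_b_pp · R_AA^b + (1 − f_b_pp) · R_AA^c = R_AA^HF.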
The argon oxygen decarburisation-electroslag remelting (AOD-ESR) process has been widely used to produce Fe-Mn-Si-Al twinning-induced plasticity (TWIP) steels. The characteristics of AlN inclusions formed in TWIP steels after the AOD refining, ESR, and forging processes were investigated by scanning electron microscopy and X-ray energy dispersive spectrometry. An automated program called 'INCAFeature' was used to collect statistics on inclusion characteristics. Large differences in the amount, distribution, and morphology of AlN inclusions were observed among AOD ingots, ESR ingots, and forgings. The dominant inclusions in AOD ingots are single Al(O)N and MnS(Se)-Al(O)N aggregates, accounting for 66.7% of the total inclusions. After the ESR process, AlN inclusions in all size ranges decreased significantly and were rarely observed in ESR ingots. Thermodynamic calculations show that AlN inclusions can precipitate in liquid Fe-Mn-Si-Al TWIP steels, which contrasts with the view in the literature that AlN precipitation occurs at the solidification front or in the solid phase. Furthermore, the thermodynamic calculation result has been verified by high-temperature laser scanning confocal microscopy experiments.
Nuclear collisions of p+Al, p+Au, d+Au, and 3He+Au at √s_NN = 200 GeV

Asymmetric nuclear collisions, in which a light projectile nucleus strikes a heavier target nucleus, have proven to be an excellent testing ground for particle production models and for the longitudinal dynamics following the initial collision; for an early review see Ref. [1]. Many calculations have successfully described the longitudinal (or rapidity) distribution of produced particles in proton-nucleus (p+A) collisions via the fragmentation of color strings, with counting rules based on the number of "wounded" or struck nucleons or quarks in the projectile and target. Recently, a proposal for testing the wounded-quark model [2] was put forth that specifically called for the measurement of dN_ch/dη over a broad range of pseudorapidity in p+Au, d+Au, and 3He+Au collisions [3]. Fully three-dimensional hydrodynamical models also require input on the longitudinal distribution of the initially deposited energy and its gradients [4]. Once the initial partons or fluid elements are populated, the models evolve the system dynamically; measurements of elliptic flow as a function of pseudorapidity provide constraints on the longitudinal dynamics of this evolution.

As the incoming hadrons or nuclei break up, the rapidity distribution of liberated partons may be determined by the longitudinal parton distribution functions [5, 6] or via a universal color-field breakup for each struck nucleon or quark [7]. For that reason, calculations based on Monte Carlo Glauber models have been developed to compute the number of struck nucleons and struck quarks (see, for example, Refs. [8-10]). The PHOBOS collaboration has previously published charged-hadron dN_ch/dη measurements over |η| < 5.4 in d+Au collisions at √s_NN = 200 GeV [11].
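The Monte Carlo Glauber counting of struck nucleons mentioned above can be sketched as a toy model. This sketch samples transverse nucleon positions from a 2D Gaussian (real Glauber codes sample a Woods-Saxon density) and uses a black-disk wounding criterion; the width and cross-section values are illustrative assumptions, not the tuned parameters of Refs. [8-10].

```python
import math
import random

def sample_nucleus(A, width, rng):
    """Toy transverse nucleon positions: a 2D Gaussian blob of the given
    width (fm). Real Glauber codes sample a Woods-Saxon density instead."""
    return [(rng.gauss(0.0, width), rng.gauss(0.0, width)) for _ in range(A)]

def count_wounded(proj, targ, sigma_inel_fm2, b):
    """Count wounded nucleons with a black-disk criterion: a projectile-target
    pair interacts when its squared transverse separation is below
    sigma_inel / pi. The target is shifted by the impact parameter b (fm)."""
    d2 = sigma_inel_fm2 / math.pi
    hit_targ = [False] * len(targ)
    n_proj = 0
    for xp, yp in proj:
        struck = False
        for j, (xt, yt) in enumerate(targ):
            if (xp - (xt + b)) ** 2 + (yp - yt) ** 2 < d2:
                struck = True
                hit_targ[j] = True
        if struck:
            n_proj += 1
    return n_proj + sum(hit_targ)
```

For example, with `rng = random.Random(0)`, `sample_nucleus(2, 1.0, rng)` gives a deuteron-like projectile, and sigma_inel ≈ 4.2 fm² (42 mb) is a typical nucleon-nucleon inelastic cross section at 200 GeV; averaging `count_wounded` over many sampled configurations and impact parameters yields the N_part distribution.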
PHENIX has also published dN_ch/dη measurements in high-multiplicity d+Au collisions at √s_NN = 200, 62, 39, and 19.6 GeV [12]. The wounded-quark model has been constrained by the d+Au data and found to be in reasonable agreement with the centrality dependence, while the wounded-nucleon model cannot describe the data [3]. A crucial test of the wounded-quark model is whether it is universal across different colliding systems. Additional measurements in light and heavy systems at the Relativistic Heavy Ion Collider (RHIC) and the Large Hadron Collider (LHC) can also be tested in this context; see, for example, the different geometry tests in Refs. [13-15].

… cross section of 2.30, 2.26, 1.76, and 0.54 barns for 3He+Au, d+Au, p+Au, and p+Al, respectively. The dN_ch/dη analysis has negligible statistical uncertainties, so a subset of runs with the most stable detector configuration is utilized, and the run-to-run variation is used in the determination of systematic uncertainties. For the elliptic flow v_2 analysis in high-multiplicity events, also referred to as central events, an additional trigger was used that required the number of fi...