It is well-entrenched folklore that all torsion gravity theories predict observationally negligible torsion in the solar system, since torsion (if it exists) couples only to the intrinsic spin of elementary particles, not to rotational angular momentum. We argue that this assumption has a logical loophole which can and should be tested experimentally, and consider non-standard torsion theories in which torsion can be generated by macroscopic rotating objects. In the spirit of action = reaction, if a rotating mass like a planet can generate torsion, then a gyroscope would be expected to feel torsion. An experiment with a gyroscope (without nuclear spin), such as Gravity Probe B (GPB), can test theories where this is the case. Using symmetry arguments, we show that, to lowest order, any torsion field around a uniformly rotating spherical mass is determined by seven dimensionless parameters. These parameters effectively generalize the PPN formalism and provide a concrete framework for further testing GR. We construct a parametrized Lagrangian that includes both standard torsion-free GR and Hayashi-Shirafuji maximal torsion gravity as special cases. We demonstrate that classic solar system tests rule out the latter and constrain two observable parameters. We show that Gravity Probe B is an ideal experiment for further constraining non-standard torsion theories, and work out the most general torsion-induced precession of its gyroscope in terms of our torsion parameters.
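A minimal numerical sketch of the scale of the effect being probed: the standard torsion-free GR predictions for the GPB gyroscope, computed from the usual geodetic and Lense-Thirring formulas for a circular polar orbit. Treating the torsion parameters as dimensionless multiplicative corrections to these two rates is an illustrative assumption here; the exact parameter combinations are derived in the paper itself.

# Hedged sketch: torsion-free GR predictions for Gravity Probe B.
# In a parametrized torsion framework, each rate would acquire a
# dimensionless bias factor (equal to 1 in standard GR) built from the
# seven torsion parameters; that labelling is assumed for illustration.
import math

G  = 6.674e-11      # gravitational constant [m^3 kg^-1 s^-2]
c  = 2.998e8        # speed of light [m/s]
GM = 3.986004e14    # Earth's GM [m^3/s^2]
J  = 5.86e33        # Earth's spin angular momentum [kg m^2/s]
r  = 7.02e6         # GPB orbital radius (~642 km altitude) [m]

v = math.sqrt(GM / r)                       # circular orbital speed
omega_geo = 1.5 * GM * v / (c**2 * r**2)    # geodetic precession [rad/s]
omega_fd  = G * J / (2 * c**2 * r**3)       # orbit-averaged frame dragging [rad/s]

to_mas_per_yr = 3.156e7 * (180 / math.pi) * 3600e3   # rad/s -> mas/yr
print(f"geodetic:       {omega_geo * to_mas_per_yr:8.0f} mas/yr")  # ~6600
print(f"frame dragging: {omega_fd  * to_mas_per_yr:8.1f} mas/yr")  # ~40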
We propose a model-free deep reinforcement learning method that leverages a small amount of demonstration data to assist a reinforcement learning agent. We apply this approach to robotic manipulation tasks and train end-to-end visuomotor policies that map directly from RGB camera inputs to joint velocities. We demonstrate that our approach can solve a wide variety of visuomotor tasks for which engineering a scripted controller would be laborious. In experiments, our reinforcement and imitation agent achieves significantly better performance than agents trained with reinforcement learning or imitation learning alone. We also illustrate that these policies, trained with large visual and dynamics variations, can achieve preliminary successes in zero-shot sim2real transfer. A brief visual description of this work can be viewed in this video.
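A minimal sketch of one way demonstration data can assist a reinforcement learning agent: a behavioral-cloning term on demonstrations added to a policy-gradient objective on agent rollouts. The network, placeholder data, and mixing weight lambda_bc are illustrative assumptions, not the paper's exact recipe (which additionally uses an adversarial imitation reward).

# Hedged sketch: hybrid reinforcement + imitation update step.
import torch
import torch.nn as nn

obs_dim, act_dim, lambda_bc = 32, 7, 0.1

policy = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, act_dim))
optimizer = torch.optim.Adam(policy.parameters(), lr=3e-4)

# Placeholder batches standing in for agent rollouts and human demonstrations.
rollout_obs, rollout_act = torch.randn(256, obs_dim), torch.randn(256, act_dim)
rollout_adv = torch.randn(256)                      # advantage estimates
demo_obs, demo_act = torch.randn(64, obs_dim), torch.randn(64, act_dim)

# Policy-gradient term: raise the likelihood of actions with positive advantage
# (Gaussian policy with fixed unit variance, for brevity).
logp = -0.5 * ((policy(rollout_obs) - rollout_act) ** 2).sum(-1)
loss_rl = -(rollout_adv * logp).mean()

# Behavioral-cloning term on the demonstrations.
loss_bc = ((policy(demo_obs) - demo_act) ** 2).mean()

loss = loss_rl + lambda_bc * loss_bc
optimizer.zero_grad()
loss.backward()
optimizer.step()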
Bromodomain-containing protein 7 (BRD7) is a member of the bromodomain-containing protein family, whose members are known to play roles as tumor suppressors. Here, we show that BRD7 is a component of unfolded protein response (UPR) signaling through its ability to regulate X-box binding protein 1 (XBP1) nuclear translocation. BRD7 interacts with the regulatory subunits of phosphatidylinositol 3-kinase (PI3K) and increases the nuclear translocation of both p85α/β and XBP1s. Deficiency of BRD7 blocks the nuclear translocation of XBP1s. Furthermore, our in vivo studies show that BRD7 protein levels are reduced in the liver of obese mice, and that reinstating BRD7 levels in the liver restores XBP1s nuclear translocation, improves glucose homeostasis, and ultimately reduces blood glucose levels in obese and diabetic mouse models.
Building models that can be rapidly adapted to numerous tasks using only a handful of annotated examples is an open challenge for multimodal machine learning research. We introduce Flamingo, a family of Visual Language Models (VLMs) with this ability. Flamingo models include key architectural innovations to: (i) bridge powerful pretrained vision-only and language-only models, (ii) handle sequences of arbitrarily interleaved visual and textual data, and (iii) seamlessly ingest images or videos as inputs. Thanks to their flexibility, Flamingo models can be trained on large-scale multimodal web corpora containing arbitrarily interleaved text and images, which is key to endowing them with in-context few-shot learning capabilities. We perform a thorough evaluation of the proposed Flamingo models, exploring and measuring their ability to rapidly adapt to a variety of image and video understanding benchmarks. These include open-ended tasks such as visual question-answering, where the model is prompted with a question that it has to answer; captioning tasks, which evaluate the ability to describe a scene or an event; and close-ended tasks such as multiple-choice visual question-answering. For tasks lying anywhere on this spectrum, we demonstrate that a single Flamingo model can achieve a new state of the art for few-shot learning, simply by prompting the model with task-specific examples. On many of these benchmarks, Flamingo surpasses the performance of models fine-tuned on thousands of times more task-specific data.
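A minimal sketch of the few-shot prompting pattern described above: support examples and a query are assembled into one arbitrarily interleaved sequence of images and text. The <image> placeholder convention, the example data, and the flamingo_model.generate() call are hypothetical, shown only to illustrate the interface shape rather than the released model's API.

# Hedged sketch: building an interleaved image/text few-shot prompt.
support_examples = [
    ("cat.jpg",  "Question: What animal is this? Answer: a cat."),
    ("boat.jpg", "Question: What vehicle is this? Answer: a boat."),
]
query = ("mystery.jpg", "Question: What animal is this? Answer:")

# One interleaved sequence: each image is placed where its <image> token
# appears, followed by the corresponding text.
prompt = []
for image_path, text in support_examples + [query]:
    prompt.append({"type": "image", "path": image_path})
    prompt.append({"type": "text",  "content": "<image> " + text})

# completion = flamingo_model.generate(prompt)   # hypothetical call
for item in prompt:
    print(item)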