Language modelling provides a step towards intelligent communication systems by harnessing large repositories of written human knowledge to better predict and understand the world. In this paper, we present an analysis of Transformer-based language model performance across a wide range of model scales, from models with tens of millions of parameters up to a 280 billion parameter model called Gopher. These models are evaluated on 152 diverse tasks, achieving state-of-the-art performance across the majority. Gains from scale are largest in areas such as reading comprehension, fact-checking, and the identification of toxic language, but logical and mathematical reasoning see less benefit. We provide a holistic analysis of the training dataset and the model's behaviour, covering the intersection of model scale with bias and toxicity. Finally, we discuss the application of language models to AI safety and the mitigation of downstream harms.
Building models that can be rapidly adapted to numerous tasks using only a handful of annotated examples is an open challenge for multimodal machine learning research. We introduce Flamingo, a family of Visual Language Models (VLM) with this ability. Flamingo models include key architectural innovations to: (i) bridge powerful pretrained vision-only and language-only models, (ii) handle sequences of arbitrarily interleaved visual and textual data, and (iii) seamlessly ingest images or videos as inputs. Thanks to their flexibility, Flamingo models can be trained on large-scale multimodal web corpora containing arbitrarily interleaved text and images, which is key to endowing them with in-context few-shot learning capabilities. We perform a thorough evaluation of the proposed Flamingo models, exploring and measuring their ability to rapidly adapt to a variety of image and video understanding benchmarks. These include open-ended tasks such as visual question-answering, where the model is prompted with a question which it has to answer; captioning tasks, which evaluate the ability to describe a scene or an event; and closed-ended tasks such as multiple-choice visual question-answering. For tasks lying anywhere on this spectrum, we demonstrate that a single Flamingo model can achieve a new state of the art for few-shot learning, simply by prompting the model with task-specific examples. On many of these benchmarks, Flamingo actually surpasses the performance of models that are fine-tuned on thousands of times more task-specific data.
Subdural haematomas (SDHs) are characterized by rapidly or gradually accumulated haematomas between the arachnoid and dura mater. The mechanism of haematoma clearance has not been clearly elucidated until now. The meningeal lymphatic vessel (mLV) drainage pathway is a novel system that takes part in the clearance of waste products in the central nervous system (CNS). This study aimed to explore the roles of the mLV drainage pathway in SDH clearance and its impacting factors. We injected FITC-500D, A488-fibrinogen and autologous blood into the subdural space of mice/rats and found that these substances drained into deep cervical lymph nodes (dCLNs). FITC-500D was also observed in the lymphatic vessels (LYVE+) of the meninges and the dCLNs in mice. The SDH clearance rate in SDH rats that received deep cervical lymph vessel (dCLV) ligation surgery was significantly lower than that in the control group, as evaluated by haemoglobin quantification and MRI scanning. The drainage rate of mLVs was significantly slower after the SDH model was established, and the expression of lymphangiogenesis-related proteins, including LYVE1, FOXC2 and VEGF-C, in meninges was downregulated. In summary, our findings proved that SDH was absorbed through the mLV drainage pathway and that haematomas could inhibit the function of mLVs.