Scan to BIM: the development of a clear workflow for the incorporation of point clouds within a BIM environment

Abstract
The emergence in recent years of technology to support the use of data-rich models within architecture has significantly aided the uptake of building information modelling. Simultaneously, there has been a rapid expansion in the capabilities and widespread use of 3D high-definition laser scanning technology. Although laser scanning has often been associated with industries outside of architecture and building (including heavy engineering and oil and gas installations), the potential to record the existing built environment is clear. Indeed, well-established concepts within building and materials conservation, concerning the accurate monitoring and recognition of surface characteristics, are well suited to the use of scanning to capture geometrical idiosyncrasies as well as designed detail. Likewise, the ability to capture structures well outside the physical reach of the expert makes the accurate recording of large-scale buildings and streetscapes possible, at a speed and level of accuracy that was not feasible even 15 years ago. This paper concerns a series of workflow stages required to incorporate the output of laser scan data within a BIM environment. Although it is possible to import point clouds within industry-standard BIM software, making best use of the highly accurate and often massive data files requires a certain amount of post-processing and modelling.
We describe a process whereby cloud data can be transformed to produce representative surface meshes, and explore how the resultant models can be linked with metadata within the BIM environment. The development of methods to support the incorporation of existing environments within BIM will be of great value within facilities management, building conservation and new design alike. The refinement and adoption of clear methods to support such work is therefore a vital step towards BIM maturation.
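The post-processing stage described above typically begins by thinning the raw scan before a surface mesh is fitted. The sketch below shows one common such step, voxel-grid downsampling, in plain NumPy; the function name, grid size and test cloud are illustrative assumptions, not taken from the paper's workflow.

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Thin a dense point cloud by keeping one centroid per occupied voxel.

    points: (N, 3) array of x, y, z coordinates.
    voxel_size: edge length of the cubic voxel grid (scene units).
    """
    # Assign each point to a voxel by integer-dividing its coordinates.
    keys = np.floor(points / voxel_size).astype(np.int64)
    # Group points sharing a voxel key and average them into a centroid.
    _, inverse, counts = np.unique(
        keys, axis=0, return_inverse=True, return_counts=True
    )
    inverse = np.asarray(inverse).reshape(-1)
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inverse, points)
    return sums / counts[:, None]

# Example: 1,000 random points in a unit cube, reduced on a 0.25-unit grid.
rng = np.random.default_rng(0)
cloud = rng.random((1000, 3))
reduced = voxel_downsample(cloud, 0.25)
print(reduced.shape)
```

Production scan-to-BIM pipelines would normally use a dedicated library for this step and follow it with normal estimation and surface reconstruction, but the grouping-and-averaging idea is the same.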
Integrating gene expression across tissues and cell types is crucial for understanding the coordinated biological mechanisms that drive disease and characterize homoeostasis. However, traditional multi-tissue integration methods either cannot handle uncollected tissues or rely on genotype information, which is often unavailable and subject to privacy concerns. Here we present HYFA (hypergraph factorization), a parameter-efficient graph representation learning approach for joint imputation of multi-tissue and cell-type gene expression. HYFA is genotype agnostic, supports a variable number of collected tissues per individual, and imposes strong inductive biases to leverage the shared regulatory architecture of tissues and genes. In performance comparisons on Genotype–Tissue Expression project data, HYFA achieves superior performance over existing methods, especially when multiple reference tissues are available. The HYFA-imputed dataset can be used to identify replicable regulatory genetic variants (expression quantitative trait loci), with substantial gains over the original incomplete dataset. HYFA can accelerate the effective and scalable integration of tissue and cell-type transcriptome biorepositories.
Learning from structured data is a core machine learning task. Commonly, such data is represented as graphs, which normally only consider (typed) binary relationships between pairs of nodes. This is a substantial limitation for many domains with highly structured data. One important such domain is source code, where hypergraph-based representations can better capture the semantically rich and structured nature of code. In this work, we present HEAT, a neural model capable of representing typed and qualified hypergraphs, where each hyperedge explicitly qualifies how participating nodes contribute. It can be viewed as a generalization of both message passing neural networks and Transformers. We evaluate HEAT on knowledge base completion and on bug detection and repair using a novel hypergraph representation of programs. In both settings, it outperforms strong baselines, indicating its power and generality.
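To make the qualified-hyperedge idea concrete, the sketch below runs a single round of message passing where each hyperedge member contributes through a role-specific transform. This is a minimal illustration of the general concept only, not HEAT's actual architecture: the role names, random weights and update rule are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 4
# Five nodes with random feature vectors.
x = rng.standard_normal((5, dim))

# Each hyperedge is a list of (node index, role) pairs; the role
# "qualifies" how that node contributes, via a role-specific matrix.
hyperedges = [
    [(0, "caller"), (1, "callee"), (2, "argument")],
    [(2, "caller"), (3, "callee"), (4, "argument")],
]
W = {r: rng.standard_normal((dim, dim))
     for r in ("caller", "callee", "argument")}

def hyperedge_step(x, hyperedges, W):
    """One round of message passing over qualified hyperedges."""
    out = x.copy()
    for edge in hyperedges:
        # Hyperedge message: sum of role-transformed member features.
        msg = sum(x[i] @ W[r] for i, r in edge)
        # Every member receives the message, normalized by edge size.
        for i, _ in edge:
            out[i] += msg / len(edge)
    return out

y = hyperedge_step(x, hyperedges, W)
print(y.shape)
```

An ordinary graph edge is the special case of a two-member hyperedge with roles "source" and "target", which is how this view generalizes pairwise message passing networks.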