RiboVision is a visualization and analysis tool for the simultaneous display of multiple layers of diverse information on primary (1D), secondary (2D), and three-dimensional (3D) structures of ribosomes. The ribosome is a macromolecular complex of ribosomal RNA (rRNA) and ribosomal proteins, responsible for the synthesis of proteins in all living organisms. RiboVision is intended for rapid retrieval, analysis, filtering, and display of a variety of ribosomal data. Preloaded information includes 1D, 2D, and 3D structures augmented by base-pairing, base-stacking, and other molecular interactions, along with rRNA secondary structures, rRNA domains and helical structures, phylogeny, and crystallographic thermal factors. RiboVision also contains structures of ribosomal proteins and a database of their molecular interactions with rRNA. Structures and data are preloaded for two bacterial ribosomes (Thermus thermophilus and Escherichia coli), one archaeal ribosome (Haloarcula marismortui), and three eukaryotic ribosomes (Saccharomyces cerevisiae, Drosophila melanogaster, and Homo sapiens). RiboVision revealed several major discrepancies between the 2D and 3D structures of the rRNAs of the small and large subunits (SSU and LSU); revised structures mapped with a variety of data are available in RiboVision as well as in a public gallery. RiboVision is designed to let users distill complex data quickly and easily generate publication-quality images of data mapped onto secondary structures. Users can readily import and analyze their own data in the context of other work: the package maps data from CSV files directly onto the 1D, 2D, and 3D levels of structure. RiboVision offers features in rough analogy with web-based map services, seamlessly switching the type of data displayed and the resolution or magnification of the display. RiboVision is available online.
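The CSV-to-structure mapping described above can be illustrated with a minimal sketch. The two-column `residue,value` input format, the column names, and the blue-to-red color gradient are assumptions for illustration only, not RiboVision's actual import specification:

```python
import csv
import io

def value_to_hex(v, vmin, vmax):
    """Linearly map a numeric value to a blue-to-red hex color."""
    t = 0.0 if vmax == vmin else (v - vmin) / (vmax - vmin)
    r, b = int(255 * t), int(255 * (1 - t))
    return f"#{r:02x}00{b:02x}"

def map_csv_to_colors(csv_text):
    """Parse 'residue,value' rows into a per-residue color dict,
    the kind of mapping a viewer could paint onto a 2D diagram."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    values = [float(r["value"]) for r in rows]
    vmin, vmax = min(values), max(values)
    return {r["residue"]: value_to_hex(float(r["value"]), vmin, vmax)
            for r in rows}

# Hypothetical per-nucleotide data (e.g. a thermal factor per residue).
data = """residue,value
A12,0.10
G13,0.55
C14,0.90"""
colors = map_csv_to_colors(data)
print(colors)
```

The same normalized values could equally drive coloring at the 1D, 2D, or 3D level, since the mapping is keyed only by residue identifier.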
Working dogs have improved the lives of thousands of people throughout history. However, communication between human and canine partners is currently limited. The main goal of the FIDO project is to research fundamental aspects of wearable technologies to support communication between working dogs and their handlers. In this study, the FIDO team investigated on-body interfaces for dogs in the form of wearable technology integrated into assistance dog vests. We created five different sensors that dogs could activate based on natural dog behaviors such as biting, tugging, and nose touches. We then tested the sensors on-body with eight dogs previously trained for a variety of occupations and compared their effectiveness in several dimensions. We were able to demonstrate that it is possible to create wearable sensors that dogs can reliably activate on command, and to determine cognitive and physical factors that affect dogs' success with body-worn interaction technology.
Working dogs perform a variety of essential services for their human partners, from assisting people with disabilities, to Search and Rescue, police, and military work. Recent interest in the nascent field of Animal-Computer Interaction has prompted research in computer-mediated technology for communication between working dogs and their handlers. Haptic (touch) interfaces in the form of vibrating motors are a promising approach for handler-to-dog communication. Haptic interfaces can provide a silent, long-range method of sending commands to a dog, when voice or hand signals are inappropriate or impossible. However, evaluating haptic interfaces for dogs, who cannot self-report sensations, creates interesting challenges. This study draws on human-computer interaction concepts, such as Just Noticeable Difference, to explore methods and issues in evaluating haptic interfaces for working dogs. We created a haptic system and developed an evaluation method, reporting results for ten dogs of widely varying breeds, sizes, and coat types.
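The abstract does not detail the evaluation procedure, but threshold estimation around a Just Noticeable Difference is commonly done in psychophysics with an adaptive staircase. The following is a generic 1-up/1-down staircase sketch, in which a simulated responder (`simulated_response`, with a hypothetical true threshold) stands in for a dog's trained yes/no behavioral report; all parameter values are illustrative assumptions:

```python
import random

def staircase_threshold(detects, start=1.0, step=0.1, reversals_needed=6):
    """1-up/1-down staircase: lower the stimulus intensity after each
    detection, raise it after each miss; the average intensity at the
    reversal points estimates the 50% detection threshold."""
    intensity, last_dir = start, None
    reversal_points = []
    while len(reversal_points) < reversals_needed:
        direction = -1 if detects(intensity) else +1
        if last_dir is not None and direction != last_dir:
            reversal_points.append(intensity)  # trend reversed here
        last_dir = direction
        intensity = max(0.0, intensity + direction * step)
    return sum(reversal_points) / len(reversal_points)

# Simulated subject: reliably responds to vibrations above ~0.45
# intensity, with a little trial-to-trial sensory noise.
random.seed(0)
def simulated_response(intensity, true_threshold=0.45):
    return intensity > true_threshold + random.gauss(0, 0.02)

estimate = staircase_threshold(simulated_response)
print(f"estimated threshold: {estimate:.2f}")
```

With a real dog, `detects` would be replaced by a behavioral trial (e.g. whether the dog performs a trained response to the vibration), which is precisely where the evaluation challenges the abstract mentions arise.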
Working dogs are significantly beneficial to society; however, a substantial number of dogs are released from time-consuming and expensive training programs because of behavioral unsuitability. Early prediction of successful service dog placement could save time, resources, and funding. Our research explores whether aspects of canine temperament can be detected from interactions with sensors, and develops classifiers that use sensor data to predict the success (or failure) of assistance dogs in advanced training. In a 2-year longitudinal study, our team tested a cohort of dogs entering advanced training in the Canine Companions for Independence (CCI) Program with two instrumented dog toys: a silicone ball and a silicone tug sensor. We then created a logistic model tree classifier that predicts service dog success using only 5 features derived from dog-toy interactions. Under randomized 10-fold cross-validation, in which 4 of the 40 dogs were kept in an independent test set for each fold, our classifier predicted the dogs' outcomes with 87.5% average accuracy. We assessed the reliability of our model by performing the testing routine 10 times over 1.5 years for a single suitable working dog; the model predicted that the dog would pass each time. We also calculated the resource benefit of identifying dogs who will fail early in their training: for a cohort of 40 dogs, our toys and prediction method yield a value of over $70,000, and across CCI's 6 training centers, annual savings could be upwards of $5 million.
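The evaluation protocol above (40 dogs, 10 folds, 4 held-out dogs per fold) can be sketched as follows. The synthetic features, the plain gradient-descent logistic regression used as a stand-in for the paper's logistic model tree, and all parameter values are illustrative assumptions, not the study's actual feature set or model:

```python
import math
import random

def train_logistic(X, y, lr=0.5, epochs=200):
    """Per-sample gradient-descent logistic regression (a simple
    stand-in for the paper's logistic model tree classifier)."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            z = max(-30.0, min(30.0, z))   # clamp for numerical safety
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - yi                     # gradient of the log-loss
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict(model, xi):
    w, b = model
    return 1 if sum(wj * xj for wj, xj in zip(w, xi)) + b > 0 else 0

def cross_validate(X, y, k=10):
    """Randomized k-fold: each fold holds out len(X)//k dogs,
    trains on the rest, and scores accuracy on the held-out set."""
    idx = list(range(len(X)))
    random.shuffle(idx)
    fold_size = len(X) // k
    accs = []
    for f in range(k):
        test = idx[f * fold_size:(f + 1) * fold_size]
        train = [i for i in idx if i not in test]
        model = train_logistic([X[i] for i in train], [y[i] for i in train])
        correct = sum(predict(model, X[i]) == y[i] for i in test)
        accs.append(correct / len(test))
    return sum(accs) / len(accs)

# Synthetic stand-in data: 40 "dogs", 5 toy-interaction features each;
# the outcome is loosely tied to the first feature so the model has signal.
random.seed(1)
X = [[random.gauss(0, 1) for _ in range(5)] for _ in range(40)]
y = [1 if xi[0] + random.gauss(0, 0.3) > 0 else 0 for xi in X]
mean_acc = cross_validate(X, y, k=10)
print(f"mean 10-fold accuracy: {mean_acc:.3f}")
```

With 40 dogs and k=10, each fold holds out exactly 4 dogs, matching the split described in the abstract; the reported 87.5% accuracy would come from the real features and classifier, not this synthetic sketch.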