The emergence of soft robots has presented new challenges associated with controlling the underlying fluidics of such systems. Here, we introduce a strategy for additively manufacturing unified soft robots comprising fully integrated fluidic circuitry in a single print run via PolyJet three-dimensional (3D) printing. We explore the efficacy of this approach for soft robots designed to leverage novel 3D fluidic circuit elements—e.g., fluidic diodes, “normally closed” transistors, and “normally open” transistors with geometrically tunable pressure-gain functionalities—to operate in response to fluidic analogs of conventional electronic signals, including constant-flow [“direct current (DC)”], “alternating current (AC)”–inspired, and preprogrammed aperiodic (“variable current”) input conditions. By enabling fully integrated soft robotic entities (composed of soft actuators, fluidic circuitry, and body features) to be rapidly disseminated, modified on demand, and 3D-printed in a single run, the presented design and additive manufacturing strategy offers unique promise to catalyze new classes of soft robots.
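To make the fluidic-electronic analogy invoked above concrete, the sketch below (not from the paper; all values are illustrative assumptions) shows the fluidic counterpart of Ohm's law, where pressure drop plays the role of voltage, volumetric flow rate plays the role of current, and a narrow channel acts as a resistor via the Hagen-Poiseuille relation.

```python
# Illustrative sketch of the fluidic analog of Ohm's law (not from the paper).
# Pressure drop <-> voltage, volumetric flow rate <-> current,
# channel resistance from the Hagen-Poiseuille law. Values are assumptions.
import math

def channel_resistance(mu: float, length: float, radius: float) -> float:
    """Fluidic resistance of a circular channel: R = 8*mu*L / (pi * r**4)."""
    return 8.0 * mu * length / (math.pi * radius**4)

# Water-like viscosity in a 30 mm long, 0.5 mm radius channel (assumed values).
R = channel_resistance(mu=1.0e-3, length=0.03, radius=0.5e-3)
Q = 1.0e-7                      # constant-flow ("DC") input, m^3/s
delta_p = Q * R                 # fluidic Ohm's law: pressure drop = flow * resistance
print(f"Pressure drop across the channel: {delta_p:.1f} Pa")
```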
Estimating the form and functional performance of a design in the early stages can be crucial for a designer's effective ideation. Humans have an innate ability to guess the size, shape, and type of a design from a single view; the brain fills in the unknowns in a fraction of a second. However, humans may struggle to estimate the performance of designs in the early stages of the design process without making prototypes or doing back-of-the-envelope calculations. In contrast, machines need information about the full 3D model of a design to understand its structure, and they can estimate performance using predefined rules, expensive numerical simulations, or machine learning models. In this paper, we show how information about the form and functional performance of a design can be estimated from a single image using machine learning methods. Specifically, we leverage an image-to-image translation method to predict multiple projections of an image-based design. We then train deep neural network models on the predicted projections to provide estimates of design performance. We demonstrate the effectiveness of our method by predicting aerodynamic performance from images of aircraft models. To establish ground-truth aerodynamic performance, we run CFD simulations for 4045 3D aircraft models from the ShapeNet dataset and use their lift-to-drag ratio as the performance metric. Our results show that single images do carry information about both form and functional performance. From a single image, we are able to produce six additional images of a design in different orientations, with an average Structural Similarity Index score of 0.872. We also find that image-translation methods provide a promising direction for estimating design performance: using multiple images of a design (gathered through image translation) to predict design performance yields a recall of 47%, which is 14% higher than a base guess and 3% higher than using a single image. Our work identifies the potential and provides a framework for using a single image to predict the form and functional performance of a design during the early-stage design process. Our code and additional information about our work are available at http://decode.mit.edu/projects/formfunction/.
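The following is a minimal sketch (not the authors' released code) of the second stage of such a pipeline: a multi-view convolutional regressor that consumes several predicted projections of a design and outputs a scalar performance estimate such as the lift-to-drag ratio. The network sizes, number of views, and image resolution are illustrative assumptions.

```python
# Minimal sketch of a multi-view performance regressor (illustrative only).
import torch
import torch.nn as nn

class MultiViewRegressor(nn.Module):
    def __init__(self, n_views: int = 6):
        super().__init__()
        # Shared per-view encoder: each projection is processed independently.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Regression head operates on the concatenated view embeddings.
        self.head = nn.Sequential(
            nn.Linear(32 * n_views, 64), nn.ReLU(),
            nn.Linear(64, 1),  # scalar performance estimate (e.g., lift-to-drag)
        )

    def forward(self, views: torch.Tensor) -> torch.Tensor:
        # views: (batch, n_views, channels, height, width)
        b, v, c, h, w = views.shape
        feats = self.encoder(views.reshape(b * v, c, h, w)).reshape(b, -1)
        return self.head(feats)

# Usage: six 128x128 grayscale projections per design (hypothetical sizes).
model = MultiViewRegressor(n_views=6)
batch = torch.randn(4, 6, 1, 128, 128)
print(model(batch).shape)  # torch.Size([4, 1])
```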
A picture is worth a thousand words, and in design metric estimation, a word may be worth a thousand features. Pictures are awarded this worth because of their ability to encode a plethora of information. When evaluating designs, we aim to capture a range of information as well, including the usefulness, uniqueness, and novelty of a design. The subjective nature of these concepts makes their evaluation difficult. Despite this, many attempts have been made and metrics developed to do so, because design evaluation is integral to innovation and the creation of novel solutions. The most common metrics used are the consensual assessment technique (CAT) and the Shah, Vargas-Hernandez, and Smith (SVS) method. While CAT is accurate and often regarded as the “gold standard,” it relies heavily on expert ratings as a basis for judgment, making CAT expensive and time-consuming. Comparatively, SVS is less resource-demanding, but it is often criticized as lacking sensitivity and accuracy. We aim to take advantage of the distinct strengths of both methods through machine learning. More specifically, this study investigates the possibility of using machine learning to facilitate automated creativity assessment. The SVS method results in a text-rich dataset about a design. In this paper, we utilize these textual design representations, and the deep semantic relationships that words and sentences encode, to predict more desirable design metrics, including CAT metrics. We demonstrate the ability of machine learning models to predict design metrics from the design itself and from SVS survey information, show that incorporating natural language processing (NLP) improves prediction results across all of our design metrics, and find that clear distinctions exist in the predictability of certain metrics. Our code and additional information about our work are available at http://decode.mit.edu/projects/nlp-design-eval/.
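The following is a minimal sketch (not the authors' pipeline) of the core idea: mapping free-text SVS-style design descriptions to a creativity rating with a learned regressor. TF-IDF features stand in for the richer sentence embeddings described above, and the example texts and scores are made-up placeholders.

```python
# Minimal sketch: predicting a CAT-style creativity rating from SVS-style text.
# TF-IDF is a stand-in for learned sentence embeddings; data are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

texts = [
    "a collapsible water bottle that folds flat for travel",
    "a standard cylindrical bottle with a screw cap",
    "a bottle that filters water using a built-in UV light",
]
cat_scores = [4.2, 1.8, 4.7]  # placeholder expert (CAT-style) ratings

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), Ridge(alpha=1.0))
model.fit(texts, cat_scores)

# Predict a rating for an unseen design description.
print(model.predict(["a bottle that folds flat and filters water"]))
```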