Use of hookah and little cigars/cigarillos (LCCs) is high among adolescents and young adults. Although these products have health effects similar to cigarettes, adolescents and young adults believe them to be safer. This study examined adolescent and young adult perceptions of hookah and LCCs to develop risk messages aimed at discouraging use among users and at-risk nonusers. Ten focus groups with 77 adolescents and young adults were conducted to explore their perceptions about the perceived risks and benefits of hookah and LCC use. Participants were users of other (non-cigarette) tobacco products (n=47) and susceptible nonusers (n=30). Transcripts were coded for emergent themes on participants’ perceptions of hookah and LCCs. Participants did not perceive health effects associated with hookah and LCC use to be serious or likely to happen given their infrequency of use and perceptions that they are less harmful than cigarettes. Participants generally had positive associations with smoking hookah and LCCs for several reasons, including that they are used in social gatherings, come in various flavors, and can be used to perform smoke tricks. Because adolescents and young adults underestimate and discount the long-term risks associated with hookah and LCC use, effective messages may be those that focus on the acute/immediate health and cosmetic effects.
Abstract: To formally verify a large software application, the standard method is to invest a considerable amount of time and expertise into the manual construction of an abstract model, which is then analyzed for its properties by either a mechanized or a human prover. There are two main problems with this approach. The first is that this verification method can be no more reliable than the humans who perform the manual steps. If the rate of error for human work is a function of problem size, this holds not only for the construction of the original application but also for the construction of the model. This means that the verification process tends to become unreliable for larger applications. The second problem is one of timing and relevance. Software applications built by teams of programmers can change rapidly, often daily. Manually constructing a faithful abstraction of any one version of the application, though, can take weeks or months. The results of a verification can therefore quickly become irrelevant to an ongoing design effort. In this paper we sketch a verification method that aims to avoid these problems. This method, based on automated model extraction, was first applied in the verification of the call processing software for a new Lucent Technologies system called PathStar.
A significant part of the call processing software for Lucent's new PathStar access server [FSW98] was checked with automated formal verification techniques. The verification system we built for this purpose, named FeaVer, maintains a database of feature requirements that is accessible via a web browser. Through the browser, the user can invoke verification runs. The verifications are performed by the system with the help of a standard logic model checker that runs in the background, invisibly to the user. Requirement violations are reported as C execution traces and stored in the database for user perusal and correction. The main strength of the system is in the detection of undesired feature interactions at an early stage of system design, the type of problem that is notoriously difficult to detect with traditional testing techniques. Error reports are typically generated by the system within minutes after a check is initiated, quickly enough to allow near-interactive probing of requirements or experimenting with software fixes.
Objective To analyze the impact of factors in healthcare delivery on the net benefit of triggering an Advance Care Planning (ACP) workflow based on predictions of 12-month mortality. Materials and Methods We built a predictive model of 12-month mortality using electronic health record data and evaluated the impact of healthcare delivery factors on the net benefit of triggering an ACP workflow based on the model's predictions. Factors included nonclinical reasons that make ACP inappropriate: limited capacity for ACP, inability to follow up due to patient discharge, and availability of an outpatient workflow to follow up on missed cases. We also quantified the relative benefits of increasing capacity for inpatient ACP versus outpatient ACP. Results Work capacity constraints and discharge timing can significantly reduce the net benefit of triggering the ACP workflow based on a model's predictions. However, the reduction can be mitigated by creating an outpatient ACP workflow. Given limited resources to either add capacity for inpatient ACP or develop an outpatient ACP capability, the latter is likely to provide more benefit to patient care. Discussion The benefit of using a predictive model to identify patients for interventions is highly dependent on the capacity to execute the workflow triggered by the model. We provide a framework for quantifying the impact of healthcare delivery factors and work capacity constraints on the achieved benefit. Conclusion An analysis of the sensitivity of the net benefit realized by a predictive-model-triggered clinical workflow to various healthcare delivery factors is necessary for making predictive models useful in practice.
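The interaction between net benefit and work capacity described above can be illustrated with a small decision-analytic sketch: the standard net-benefit formula credits true positives and penalizes false positives at the odds of the decision threshold, and a capacity cap limits how many flagged patients actually receive the intervention. The formula is standard decision-curve analysis; the cohort numbers and helper names below are invented for illustration, not taken from the paper.

```python
# Minimal sketch: net benefit of a model-triggered workflow under a
# work-capacity constraint (illustrative data, not from the study).

def net_benefit(tp, fp, n, threshold):
    """Decision-curve net benefit: true positives credited, false
    positives penalized at the odds of the decision threshold."""
    w = threshold / (1.0 - threshold)
    return (tp - w * fp) / n

def capped_counts(flagged, capacity):
    """With limited capacity, only the top-scoring flagged patients
    receive the intervention; the rest are missed.
    `flagged` is a list of (score, is_positive) pairs."""
    served = sorted(flagged, key=lambda t: -t[0])[:capacity]
    tp = sum(1 for _, pos in served if pos)
    fp = len(served) - tp
    return tp, fp

# Illustrative cohort: 10 flagged patients out of n=100, threshold 0.25.
flagged = [(0.9, True), (0.8, True), (0.7, False), (0.6, True),
           (0.5, False), (0.45, True), (0.4, False), (0.35, False),
           (0.3, True), (0.28, False)]

for capacity in (10, 5, 2):
    tp, fp = capped_counts(flagged, capacity)
    print(capacity, round(net_benefit(tp, fp, 100, 0.25), 4))
```

Shrinking the capacity changes both the number of patients reached and the achieved net benefit, which is the sensitivity the paper's framework quantifies.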
Formal verification methods are used only sparingly in software development. The most successful methods to date are based on the use of model checking tools. To use such tools, the user must first define a faithful abstraction of the application (the model), specify how the application interacts with its environment, and then formulate the properties that it should satisfy. Each step in this process can become an obstacle. To complete the verification process successfully often requires specialized knowledge of verification techniques and a considerable investment of time. In this paper we describe a verification method that requires little or no specialized knowledge in model construction. It allows us to extract models mechanically from the source of software applications, which secures the accuracy of the model. Interface definitions and property specifications have meaningful defaults that can be adjusted as the checking process becomes more refined. All checks can be executed mechanically, even while the application itself continues to evolve. Compared to conventional software testing, the thoroughness of a check of this type is unprecedented.
Runtime verification as a field faces several challenges. One key challenge is how to keep the overheads associated with its application low. This is especially important in real-time critical embedded applications, where memory and CPU resources are limited. Another challenge is that of devising expressive and yet user-friendly specification languages that can attract software engineers. In this paper, it is shown that for many systems, in-place logging provides a satisfactory basis for postmortem "runtime" verification of logs, where the overhead is already included in the system design. Although this approach prevents an online reaction to detected errors, which is possible with traditional runtime verification, it provides a powerful tool for test automation and debugging; in this case, the analysis of spacecraft telemetry by ground operations teams at NASA's Jet Propulsion Laboratory. The second challenge is addressed in the presented work through a temporal specification language, designed in collaboration with Jet Propulsion Laboratory test engineers. The specification language allows for descriptions of relationships between data-rich events (records) common in logs, and is translated into a form of automata supporting data-parameterized states. The automaton language is inspired by the rule-based language of the RULER runtime verification system. A case study is presented illustrating the use of our LOGSCOPE tool by software test engineers for the 2011 Mars Science Laboratory mission.
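As a rough illustration of the log-checking idea (not the actual LOGSCOPE language or tool), the sketch below checks a temporal property over data-rich log records, with monitor state parameterized by the command name; all event shapes, names, and the property itself are invented for this example.

```python
# Hedged sketch of postmortem log checking with a data-parameterized
# monitor. Property checked: every dispatched command must eventually
# succeed, and no command may fail.

def check_log(events):
    """`events` is a list of dicts like {"type": "COMMAND", "name": ...}.
    Returns a list of violation messages; an empty list means the log
    satisfies the property."""
    pending = set()   # monitor state, parameterized on the command name
    errors = []
    for e in events:
        if e["type"] == "COMMAND":
            pending.add(e["name"])
        elif e["type"] == "SUCCESS":
            pending.discard(e["name"])
        elif e["type"] == "FAIL":
            errors.append(f"command {e['name']} failed")
            pending.discard(e["name"])
    for name in sorted(pending):   # commands that never completed
        errors.append(f"command {name} never completed")
    return errors

log = [
    {"type": "COMMAND", "name": "PICTURE"},
    {"type": "COMMAND", "name": "DRIVE"},
    {"type": "SUCCESS", "name": "PICTURE"},
    {"type": "FAIL", "name": "DRIVE"},
]
print(check_log(log))
```

Because the check runs over a recorded log after the fact, it adds no overhead to the flight system itself, which is the trade-off the abstract describes.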
Abstract: Software verification methods are used only sparingly in industrial software development today. The most successful methods are based on the use of model checking. There are, however, many hurdles to overcome before the use of model checking tools can truly become mainstream. To use a model checker, the user must first define a formal model of the application, and doing so requires specialized knowledge of both the application and of model checking techniques. For larger applications, the effort to manually construct a formal model can take a considerable investment of time and expertise, which can rarely be afforded. Worse, it is hard to ensure that a manually constructed model can keep pace with the typical software application as it evolves from the concept stage to the product stage. In this paper, we describe a verification method that requires far less specialized knowledge in model construction. It allows us to extract models mechanically from source code. The model construction process thereby becomes easily repeatable as the application itself continues to evolve. Once the model is constructed, existing model checking techniques allow us to perform all checks in a mechanical fashion, achieving nearly complete automation. The level of thoroughness that can be achieved with this new type of software testing is significantly greater than for conventional techniques. We report on the application of this method in the verification of the call processing software for a new telephone switch that was recently developed at Lucent Technologies.
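The mechanical model-extraction step can be pictured as a table-driven rewrite of source statements into model transitions, with a configurable default for unmatched statements. The patterns, table entries, and output notation below are illustrative assumptions for this sketch, not the actual interface of the tool described above.

```python
# Hedged sketch of table-driven model extraction from source code.
# Each source statement is matched against a table of patterns; matched
# statements are rewritten into model transitions, and unmatched ones
# are either kept verbatim or dropped as irrelevant to the model.

import re

EXTRACTION_TABLE = [
    (re.compile(r"send\((\w+)\)"),    lambda m: f"ch!{m.group(1)}"),
    (re.compile(r"receive\((\w+)\)"), lambda m: f"ch?{m.group(1)}"),
    (re.compile(r"log\("),            lambda m: None),  # irrelevant: drop
]

def extract_model(source_lines, default="skip"):
    """Convert each statement into a model transition via the table."""
    model = []
    for line in source_lines:
        stmt = line.strip().rstrip(";")
        for pattern, action in EXTRACTION_TABLE:
            m = pattern.search(stmt)
            if m:
                out = action(m)
                if out is not None:
                    model.append(out)
                break
        else:
            if default == "keep":
                model.append(stmt)  # keep unmatched statement verbatim
    return model

src = ['send(offhook);', 'log("debug");', 'receive(dialtone);']
print(extract_model(src))
```

Because the table, not the model, is what the user maintains, re-extracting the model after each change to the source is a mechanical step, which is what makes the checks repeatable as the application evolves.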