J. J. Gibson's concept of affordance, one of the central pillars of ecological psychology, is a remarkable idea that provides a concise theory of animal perception predicated on environmental interaction. It is thus not surprising that this idea has also found its way into robotics research as one of the underlying theories for action perception. The theory's success in this regard means that existing research is both abundant and diffuse: many different paths and techniques have been pursued with the common goal of enabling robots to learn, perceive and act upon affordances. Until now, however, there has been no systematic investigation of existing work in this field. Motivated by this circumstance, in this article we begin by defining a taxonomy for computational models of affordances, rooted in a comprehensive analysis of the most prominent theoretical ideas of import in the field. Subsequently, after performing a systematic literature review, we classify existing research within our proposed taxonomy. Finally, by assessing the data resulting from the classification process both quantitatively and qualitatively, we highlight gaps in the research landscape and outline open questions for the investigation of affordances in robotics that we believe will help inform future work, prioritize research goals, and potentially advance the field towards greater robot autonomy.
Model-based security testing relies on models to test whether a software system meets its security requirements. It is an active research field of high relevance for industrial applications, with many approaches and notable results published in recent years. This article provides a taxonomy for model-based security testing approaches. It comprises filter criteria (i.e. model of system security, security model of the environment, and explicit test selection criteria) as well as evidence criteria (i.e. maturity of the evaluated system, evidence measures, and evidence level). The taxonomy is based on a comprehensive analysis of existing classification schemes for model-based testing and security testing. To demonstrate its adequacy, 119 publications on model-based security testing were systematically extracted from the five most relevant digital libraries by three researchers and classified according to the defined filter and evidence criteria. On the basis of the classified publications, the article provides an overview of the state of the art in model-based security testing and discusses promising research directions with regard to security properties, coverage criteria, and the feasibility and return on investment of model-based security testing.
… ISO/IEC 9126 [4] defining security as a functional quality characteristic. However, it seems desirable that security testing directly targets the aforementioned security properties, as opposed to taking the detour of functional tests of security mechanisms. This view is supported by the ISO/IEC 25010 [2] standard, which revises ISO/IEC 9126 and introduces security as a new quality characteristic that is no longer subsumed under the characteristic functionality. Because the former kind of (non-functional) security properties describes all executions of a system, this kind of security testing is intrinsically hard.
Because testing cannot show the absence of faults, an immediately useful perspective directly considers the violation of these properties. This has resulted in the development of specific testing techniques such as penetration testing, which simulates attacks to exploit vulnerabilities. Penetration tests are difficult to craft because tests often do not directly cause observable security exploits, and because the testers must think like an attacker [5], which requires specific expertise. During penetration testing, testers build a mental model of security properties, security mechanisms, and possible attacks against the system and its environment. It seems intuitive that security testing can benefit from specifying these security test models in an explicit and processable way. Security test models provide guidance for the systematic and effective specification and documentation of security test objectives and security test cases, as well as for their automated generation and evaluation. The variant of testing that relies on explicit models encoding information on the system under test and/or its environment is called model-based testing (MBT) [6,7]. Especially in ...
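The filter and evidence criteria above lend themselves to a simple classification record per publication. The following Python sketch is purely illustrative: the field names mirror the criteria listed in the abstract, but the concrete value sets (e.g. the maturity levels and the example values) are assumptions for demonstration, not part of the taxonomy itself.

```python
from dataclasses import dataclass, field
from enum import Enum

# Illustrative value set for one evidence criterion; the actual levels
# used by the taxonomy are not reproduced here.
class Maturity(Enum):
    PROTOTYPE = "prototype"
    PRODUCTION = "production system"

@dataclass
class MBSTClassification:
    """One publication classified along the filter and evidence criteria."""
    # filter criteria
    model_of_system_security: str
    security_model_of_environment: str
    explicit_test_selection_criteria: bool
    # evidence criteria
    maturity_of_evaluated_system: Maturity
    evidence_measures: list = field(default_factory=list)
    evidence_level: str = ""

# Classifying a hypothetical publication:
paper = MBSTClassification(
    model_of_system_security="access-control model",
    security_model_of_environment="attacker model",
    explicit_test_selection_criteria=True,
    maturity_of_evaluated_system=Maturity.PROTOTYPE,
    evidence_measures=["example application"],
    evidence_level="abstract",
)
print(paper.maturity_of_evaluated_system.value)  # prints "prototype"
```

Collecting such records for all 119 classified publications would make the quantitative overview (e.g. counting papers per criterion value) a matter of simple aggregation.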
Assuring the security of a software system through testing remains a difficult task. Security requirements serve as a foundation for deriving tests to be executed against a system under test. Yet these positive requirements by no means cover all the relevant security aspects to be considered. Hence, especially in security testing, negative requirements derived from risk analysis are vital to incorporate. Given today's emerging trend towards the adoption of cloud computing, security testing takes on even greater significance: due to a cloud's openness, in theory there exists an infinite number of tests. A concise technique for incorporating the results of risk analysis into security testing is therefore indispensable. We propose a new model-driven methodology for the security testing of cloud environments that incorporates misuse cases defined by negative requirements derived from risk analysis.
In recent years, Cloud computing has become one of the most rapidly emerging computing paradigms, with a growing rate of adoption in the area of IT outsourcing. However, as recent studies have shown, security is most often the one requirement that is neglected. Yet, precisely because of the way Cloud computing is used, security is indispensable. Unfortunately, assuring the security of a Cloud computing environment is not a one-time task; it must be performed throughout the complete lifespan of the Cloud, since Clouds undergo daily changes in terms of newly deployed applications and offered services. Based on this observation, in this paper we propose a novel model-based, change-driven approach, employing risk analysis, to test the security of a Cloud computing environment across all layers. As a main intrusion point, our approach targets the public service interfaces, as they are a major source of newly introduced vulnerabilities, possibly leading to severe security incidents.
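The change-driven, risk-based selection described in the two abstracts above can be sketched as a small scheduling function: re-test only the public service interfaces that changed, ordered by their risk scores. This is a hypothetical illustration, not the authors' method; the interface names, risk scores, and misuse-test labels are invented, and a real implementation would derive them from the Cloud's deployment changes and the risk analysis.

```python
def select_security_tests(changed_interfaces, risk_scores, misuse_tests):
    """Return (interface, test) pairs for changed interfaces, highest risk first.

    changed_interfaces: interfaces modified since the last test run
    risk_scores:        interface -> risk value from risk analysis (assumed given)
    misuse_tests:       interface -> misuse-case-derived security tests
    """
    selected = []
    # Prioritize interfaces by risk so the riskiest changes are tested first.
    for iface in sorted(changed_interfaces,
                        key=lambda i: risk_scores.get(i, 0.0),
                        reverse=True):
        for test in misuse_tests.get(iface, []):
            selected.append((iface, test))
    return selected

# Invented example data for illustration only:
changed = ["storage-api", "auth-api"]
risks = {"auth-api": 0.9, "storage-api": 0.4}
tests = {"auth-api": ["sql-injection", "token-replay"],
         "storage-api": ["path-traversal"]}
plan = select_security_tests(changed, risks, tests)
print(plan)
# [('auth-api', 'sql-injection'), ('auth-api', 'token-replay'),
#  ('storage-api', 'path-traversal')]
```

Running such a selection after every deployment change realizes the "task performed during the complete lifespan of the Cloud" rather than a one-time assessment.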
We present a tool environment and its underlying principles for Telling TestStories, an approach to model-driven system testing of service-oriented systems. Telling TestStories is based on tightly integrated platform-independent system and test models. The approach supports test-driven development at the model level and guarantees high-quality system and test models by checking consistency and coverage. Additionally, Telling TestStories provides full traceability between the requirements, the system and test models, and the executable services of the system. The tool environment supports these features in an integrated and clear way.
Understanding and defining the meaning of "action" is essential for robotics research. This becomes especially evident when aiming to equip autonomous robots with robust manipulation skills for action execution. Unfortunately, to this day we still lack both a clear understanding of the concept of an action and a set of established criteria that ultimately characterize an action. In this survey we therefore first review existing ideas and theories on the notion and meaning of action. Subsequently, we discuss the role of action in robotics and attempt to give a seminal definition of action in accordance with its use in robotics research. Given this definition, we then introduce a taxonomy for categorizing action representations in robotics along various dimensions. Finally, we provide a systematic literature survey on action representations in robotics in which we categorize relevant literature along our taxonomy. After discussing the current state of the art, we conclude with an outlook on promising research directions.