Abstract. Comparing humans and machines is an important source of information about the strengths and limitations of both. Most of these comparisons and competitions are performed on rather specific tasks, such as calculus, speech recognition, translation, and games. The information conveyed by these experiments is limited, since it merely shows that machines are much better than humans in some domains and worse in others; CAPTCHAs exploit precisely this asymmetry. However, there have only been a few proposals of general intelligence tests in the last two decades and, to our knowledge, just a couple of implementations and evaluations. In this paper, we implement one of the most recent test proposals, devise an interface for humans, and use it to compare the intelligence of humans and Q-learning, a popular reinforcement learning algorithm. The results are informative in many ways, raising questions about the use of a (universal) distribution of environments, the role of measuring knowledge acquisition, and other issues such as speed, test duration, and scalability.
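For readers unfamiliar with the machine contestant, a minimal sketch of tabular Q-learning follows. The environment interface (reset(), step(), actions) and the parameter values are illustrative assumptions for exposition, not the actual test setup used in the paper.

```python
import random
from collections import defaultdict

def q_learning(env, episodes=500, alpha=0.1, gamma=0.95, epsilon=0.1):
    """Tabular Q-learning with an epsilon-greedy policy.

    `env` is assumed to expose reset() -> state, step(action) ->
    (next_state, reward, done), and a list `env.actions` of discrete
    actions -- a hypothetical interface, not the paper's.
    """
    Q = defaultdict(float)  # Q[(state, action)] -> estimated value
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # Explore with probability epsilon, otherwise act greedily.
            if random.random() < epsilon:
                action = random.choice(env.actions)
            else:
                action = max(env.actions, key=lambda a: Q[(state, a)])
            next_state, reward, done = env.step(action)
            # One-step temporal-difference update towards the bootstrapped target.
            best_next = max(Q[(next_state, a)] for a in env.actions)
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
            state = next_state
    return Q
```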
Berzisa, S.; Bravos, G.; Cardona Gonzalez, T.; Czubayko, U.; España, S.; Grabis, J.; Henkel, M. ... (2015). Capability driven development: an approach to designing digital enterprises. Business and Information Systems Engineering. 57(1):15-25. doi:10.1007/s12599-014-0362-0.

Capability Driven Development: An Approach to Designing Digital Enterprises

Abstract. The need for organizations to operate in changing environments is addressed by proposing an approach that integrates organizational development with information system (IS) development, taking into account changes in the application context of the solution. This is referred to as Capability Driven Development (CDD). A meta-model representing business and IS designs, consisting of goals, key performance indicators, capabilities, context, and capability delivery patterns, is proposed. The use of the meta-model is validated in three industrial case studies as part of an ongoing collaboration project, one of which is presented in the paper. Issues related to the use of the CDD approach, namely the CDD methodology and tool support, are also discussed.
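To make the abstract's list of concepts more tangible, here is a toy rendering of the named meta-model elements as Python dataclasses. The class names and relationships are simplifying assumptions drawn only from the abstract; this is not the published CDD meta-model.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class KPI:
    """Key performance indicator attached to a goal (illustrative)."""
    name: str
    target: float

@dataclass
class Goal:
    name: str
    kpis: List[KPI] = field(default_factory=list)

@dataclass
class ContextElement:
    """A property of the application context that a capability depends on."""
    name: str
    value: str

@dataclass
class CapabilityDeliveryPattern:
    """A reusable way of delivering a capability in a given context."""
    name: str
    applicable_context: List[ContextElement] = field(default_factory=list)

@dataclass
class Capability:
    """Links goals, context, and delivery patterns (hypothetical wiring)."""
    name: str
    supports: List[Goal] = field(default_factory=list)
    context: List[ContextElement] = field(default_factory=list)
    patterns: List[CapabilityDeliveryPattern] = field(default_factory=list)
```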
Abstract. One insightful view of the notion of intelligence is the ability to perform well in a diverse set of tasks, problems or environments. One of the key issues is therefore the choice of this set and the probability assigned to each of its elements, which can be formalised as a 'distribution'. Formalising and properly defining this distribution is an important challenge for understanding what intelligence is and for achieving artificial general intelligence (AGI). In this paper, we agree with previous criticisms that a universal distribution over tasks, environments, etc., using a reference universal Turing machine (UTM), is perhaps much too general, since, e.g., the probability of other agents appearing on the scene or of any social interaction occurring is almost 0 for most reference UTMs. Instead, we propose the notion of a Darwin-Wallace distribution for environments, which is inspired by biological evolution, artificial life and evolutionary computation. However, although enlightening about where and how intelligence should excel, this distribution has so many options and is uncomputable in so many ways that a more practical alternative is certainly needed. We propose the use of intelligence tests over multi-agent systems, in such a way that agents certified at one level of intelligence are used to construct the tests for the next level. This constructive methodology can then be used as a more realistic intelligence test and also as a testbed for developing and evaluating AGI systems.
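For concreteness, the 'universal distribution' criticised above is usually formalised along the lines of Legg and Hutter's universal intelligence measure; the rendering below is that standard form, which may differ from the paper's exact notation:

```latex
% Universal distribution over environments and the resulting
% intelligence measure (Legg-Hutter style; notation assumed).
p_U(\mu) = 2^{-K_U(\mu)}, \qquad
\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K_U(\mu)} \, V^{\pi}_{\mu}
```

Here $K_U(\mu)$ is the length of the shortest program for the reference UTM $U$ that computes environment $\mu$, and $V^{\pi}_{\mu}$ is the expected cumulative reward of agent $\pi$ in $\mu$. This makes the criticism concrete: environments containing other intelligent agents tend to have large $K_U$ under most reference machines, so they receive vanishingly small probability.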
Abstract. Context: Model-Driven Development (MDD) is a paradigm that prescribes building conceptual models that abstractly represent the system and generating code from these models through transformation rules. The literature is rife with claims about the benefits of MDD, but they are hardly supported by evidence. Objective: This experimental investigation aims to verify some of the most cited benefits of MDD. Method: We ran an experiment on a small set of classes, using student subjects, to compare the quality, effort, productivity and satisfaction of traditional development and MDD. The participants built two web applications from scratch: one in which the developers implemented the code by hand, and another using an industrial MDD tool that automatically generates the code from a conceptual model. Results: The outcomes show no significant differences between the two methods with regard to effort, productivity and satisfaction, although quality in MDD is more robust to small variations in problem complexity. We discuss possible explanations for these results. Conclusions: For small systems and subjects with less programming experience, MDD does not always yield better results than a traditional method, even with regard to effort and productivity. This contradicts some previous statements about the advantages of MDD. The benefits of developing a system with MDD appear to depend on certain characteristics of the development context.
Giraldo-Velásquez, FD.; España Cubillo, S.; Pastor López, O.; Giraldo, WJ. (2016). Considerations about quality in model-driven engineering. Software Quality Journal. 1-66. doi:10.1007/s11219-016-9350-6.

Considerations about Quality in Model-Driven Engineering: Current State and Challenges. Fáber D. Giraldo, Sergio España, Oscar Pastor, William J. Giraldo.

Abstract. The virtue of quality is not itself a subject; it depends on a subject. In the software engineering field, quality means good software products that meet customer expectations, constraints, and requirements. Despite the numerous approaches, methods, descriptive models and tools that have been developed, a level of consensus has been reached by software practitioners. However, in the model-driven engineering (MDE) field, which has emerged from software engineering paradigms, quality continues to be a great challenge, since the subject is not fully defined. The use of models alone is not enough to manage all of the quality issues at the modelling-language level.

In this work, we present the current state and some relevant considerations regarding quality in MDE, by identifying current categories in the conception of quality and by highlighting quality issues in real applications of model-driven initiatives.

We identified sixteen categories in the definition of quality in MDE. From this identification, by applying an adaptive sampling approach, we discovered the five most influential sources for the works that propose definitions of quality. These are (in order): the OMG standards (e.g., MDA, UML, MOF, OCL, SysML), the ISO standards for software quality models (e.g., 9126 and 25000), Krogstie, Lindland, and Moody. We also discovered families of works about quality, i.e., works that belong to the same author or topic.

Seventy-three works were found with evidence of the mismatch between the academic/research field of quality evaluation of modelling languages and actual MDE practice in industry. We demonstrate that this field does not currently solve the quality issues reported in industrial scenarios. The evidence of the mismatch was grouped into eight categories: four for academic/research evidence and four for industrial reports. These categories were detected based on the scope proposed in each of the academic/research works and on the questions and issues raised by real practitioners.

We then propose a scenario to illustrate quality issues in a real information system project in which multiple modelling languages were used. To evaluate the quality of this MDE scenario, we chose one of the most cited and influential quality frameworks, selected from the information obtained while identifying the categories of quality definitions in MDE. We demonstrate that the selected framework falls short in addressing these quality issues. Finally, based on the findings, we derive eight challenges for quality evaluation in MDE projects that current quality initiatives do not address sufficiently.