Purpose
– The purpose of this paper is to provide a review of the innovation literature, with special focus on studies applying a complexity perspective. As a contribution in its own right to the innovation literature, the review clarifies the concept of complexity, explores possible points of relevance and the “added value” that complexity theory (CT) brings to the study of innovation, and identifies some of the applications of the theory.
Design/methodology/approach
– A literature search was conducted which yielded 20 relevant articles. These articles were analyzed by focusing on the key concepts of complexity and studying their applications in the context of innovation research.
Findings
– Based on the approach adopted, the literature was divided into three categories, namely research focusing on microdynamics, macrodynamics, and leadership and management. The key complexity concepts identified in the innovation literature were “edge of chaos”, “phase shift”, “emergence and self-organization”, “(co)evolution”, and “complexity regulation”. The articles reviewed differed in terms of their perspectives on complexity and, accordingly, their operationalization of the complexity concepts. Key areas of development suggested by the authors include forging a stronger link with existing innovation theory and giving greater weight to empirical evidence.
Research limitations/implications
– While a systematic review strategy was adopted to identify all relevant research on “open innovation” and complexity, a selective snowball strategy was deemed the only feasible approach to cover research conducted on “innovation” and complexity.
Practical implications
– Practitioners can learn to put CT-based research in context and to recognize the value of CT for innovation management. The authors distill three important lessons for practice from the research reviewed: embracing complexity, embracing ambidexterity, and embracing failure.
Originality/value
– To the best of the authors’ knowledge no review has as yet been undertaken to encapsulate the current state of applications of CT to innovation research.
IS automation pervades business processes today, raising concerns about its potential deskilling effects on knowledge workers. We conduct a revelatory case study of an IT service firm in which a managerial decision was made to discontinue fixed assets management (FAM) software that had provided seemingly effective automation of fixed assets accounting and reporting. We study how automation can result in latent deskilling that becomes apparent only when the system is discontinued, disrupting employees' daily work and organizational processes. We also investigate how the employees and the company recover from this disruption by leveraging various coping strategies. We suggest that the automation of an accounting function/process played a key role in the deskilling of accountants: although the effect on worker skills may not be obvious while the system is operational, discontinuing the system makes the effect apparent.
Guidelines for different qualitative research genres have been proposed in information systems (IS). Because these guidelines prescribe how to conduct and evaluate good research, studies may be denied publication simply because they do not follow a prescribed methodology. This can result in "checkbox" compliance, where the guidelines become more important than the study itself. We argue that guidelines can be used to evaluate what good research is only if there is evidence that they lead to good research outcomes. Currently, the guidelines present no such evidence; where evidence is offered, it is often an appeal to authority or to popularity, illustrated with usability examples. We further postulate that such evidence linking guidelines to outcomes cannot be presented. It may therefore be time for the IS research community to acknowledge that many research method principles we regard as authoritative may ultimately rest on speculation and opinion, and thus should be taken less seriously as absolute guidelines in the review process.