Humanity now stands at the threshold of a new era in which the widening use of artificial intelligence (AI) will launch a new industrial revolution. The use of AI inevitably raises the problem of ethical choice and gives rise to new legal issues that require urgent action. The authors analyze the criminal law assessment of the actions of AI, focusing primarily on the still-open issue of liability for the actions of an AI that is capable of self-learning and that decides to act (or not to act) in a way qualified as a crime. This creates the need to form a system of criminal law measures for counteracting crimes committed with the use of AI. It is shown that the application of AI could lead to four scenarios requiring criminal law regulation. The authors stress the need for a clear, strict and effective definition of the ethical boundaries in the design, development, production, use and modification of AI, and argue that AI should be recognized as a source of high risk. They specifically note that although the Criminal Code of the Russian Federation contains norms establishing liability for cybercrimes, this does not preclude prosecution for infringements committed with the use of AI under the general norms on punishment for various crimes. The authors also consider it possible to establish a system for standardizing and certifying the activities of designing AI and putting it into operation. At the same time, an autonomous, self-learning AI differs considerably from other phenomena and objects, and the question of liability for an AI that independently decides to undertake an action qualified as a crime is much more complicated. The authors analyze the European Parliament resolution on the possibility of granting AI legal status, discuss its key principles and meaning, and pay special attention to the issue of recognizing AI as a legal personality.
They suggest using a legal fiction as a technique whereby a special legal personality of AI can be perceived as an unusual legal situation that differs from reality. Such a solution could eliminate a number of existing legal limitations that prevent the active involvement of AI in the legal space.
The topics of artificial intelligence (AI) and the development of intelligent technologies are highly relevant in the modern digital world. Over its fifty-year history, AI has developed from a theoretical concept into intelligent systems capable of making independent decisions. A key advantage of using AI is the opportunity for mankind to shed routine work and engage in creative activities that machines are not capable of. According to international consulting agencies, global business investments in digital transformation will reach 58 trillion USD by 2021, while global GDP will grow by 14 %, or 15.7 trillion USD, due to the active use of AI. However, AI's rapid evolution poses new threats connected with its ability to self-develop, which the state and society must counteract; specifically, they must introduce normative regulation of AI activities and address the threats arising from its functioning. The authors present a thorough analysis of the opinions of leading researchers on the social aspects of AI's functioning. They also note that the regulation of the status of AI as a legal personality, not to mention its ability to commit legally meaningful actions, remains an open question. At present, a criminological basis for applying AI, connected with the development of new intelligent technologies, is being created; this process requires state-level actions and decisions aimed at preventing and reacting to the possible negative effects of AI's use. The authors' analysis of the history of AI's emergence and development allows them to outline the key features of AI that pose criminological risks, to identify the criminological risks of using AI, and to present their own classification of such risks. In particular, they single out direct and indirect criminological risks of using AI.
This detailed analysis has also allowed the authors to identify an objective need to establish special state agencies responsible for developing state policy in the sphere of normative legal regulation, control and supervision over the use of AI.
Abstract: In the modern digital age, the issues of using artificial intelligence and developing intelligent technologies are extremely important and relevant. Over the past few years, there have been attempts at state regulation of artificial intelligence, both in Russia and in other countries. Artificial intelligence poses new challenges to various areas of law: from patent to criminal law, from privacy to antitrust law. Among current approaches, the most suitable is the creation of a separate legal regulation mechanism that draws a clear distinction between the areas of responsibility of the developers and users of artificial intelligence systems, on the one hand, and the technology itself, on the other. Today, the development of the legal framework for the existence of artificial intelligence can be conditionally divided into two approaches: creating a legal framework for introducing applied artificial intelligence systems and stimulating their development; and regulating the sphere of creating an artificial "super intelligence", in particular ensuring the compliance of the developed technologies with generally recognized standards in the field of ethics and law. A separate area should be the introduction of uniform ethical principles for all developers and users of artificial intelligence systems. The most suitable approach in this respect is the one implemented within the framework of the Asilomar principles. In these circumstances, addressing the problem of the legal regulation of artificial intelligence is more relevant than ever. This paper presents the results of a detailed analysis of existing approaches to the legal regulation of artificial intelligence.
Modern researchers consider robotics from various positions. The most common is the technical approach, which examines the current state of and achievements in the field of robotics, as well as the prospects for its development. In recent years, legal experts have also begun to address problems related to the development of robotics, focusing on the legal personality of robots and artificial intelligence, as well as on the liability of AI for causing harm. A separate direction in robotics research is the analysis of this concept, and of the relations associated with it, from the standpoint of morality, ethics and technology.
Digital technology is an integral part of our daily lives. Whether we have a computer at home, use state and municipal services in digital form, or simply operate electronic gadgets, society's dependence on technology is increasing. A secure digital environment enhances trust and contributes to a stable and prosperous nation. Government and the business community are also taking advantage of the technological revolution through the greater adoption and use of digital technologies. Traditional forms of crime have evolved as well: criminal associations increasingly use the Internet to commit cybercrimes and increase their profits. Digital crime is developing at an incredibly fast pace, and new types of criminal acts are constantly emerging. We therefore need to keep up with digital technologies, understand the opportunities they create for cybercriminals, and learn how these technologies can be used as a tool to combat cybercrime. The active use of digital technologies in all spheres of social life over the last three decades has formed the background for the emergence of a special type of criminal - the so-called hacker. Criminal groups of hackers pose a public danger because, once united, they are capable of planning large-scale computer attacks that could target, among other things, critical information infrastructure. Hacker groups have become a real danger for governments, large corporations, the military, and private persons alike. The blurring of the boundaries between hacker groups and organized crime, which experts predicted a few years ago, has now become a reality. In fact, it is possible to say that a new independent type of organized crime has emerged - the hacking community. These circumstances make it necessary to develop a special norm providing for liability for organizing a hacking community or participating in it.
Such a norm would allow for a comprehensive approach to criminal law counteraction against such criminal groups by ensuring an adequate criminal law assessment of the actions of the organizers and coordinators of hacker organizations.