This article examines the use of AI as a tool of crime from the perspective of the norms and principles of criminal law. It discusses how the existing legal framework for determining culpability could be applied to offenses committed with the use of AI. The article analyzes the current state of criminal law for both intentional and negligent offenses and offers a comparative analysis of these two forms of culpability.
Part of the work is devoted to culpability in intentional crimes. The analysis demonstrates that legislators and law enforcement should reconsider their approach to determining culpability when artificial intelligence systems are used to commit intentional crimes. Because an artificial intelligence system possesses, in some sense, its own designed cognition and will, courts cannot rely on the traditional concept of culpability in intentional crimes, in which intent is inferred directly from the actions of the offender.
Criminal negligence is examined in the article from the perspective of a developer's criminal liability. The developer is treated as a person who can influence, and anticipate, the harm caused by the AI system he or she created. If product developers were exempt from any form of criminal liability for harm caused by their products, highly negative social consequences would follow. Yet a regime in which a person developing an AI system must account for every potential harm the product might cause would also have negative social consequences. The authors conclude that a balance between these two extremes must be found, and that the current legal framework does not meet the goal of determining culpability for crimes in which AI serves as a tool.
ARTIFICIAL INTELLIGENCE AS A SOCIAL REGULATOR: PROS AND CONS [3]
Abstract. Digitalization not only affects social relations but also threatens to displace law, as a regulator of those relations, in favor of new forms of regulation. This means that in the near future, under the influence of digitalization, program code (algorithms) may come to perform a regulatory function. Analysis of digitalization processes makes it possible to forecast a change in the mechanism of law formation and in the composition of the existing model of social regulation, a correction of the boundaries of the known social regulators, and the emergence within that model of a niche to be occupied by program code. In essence, a regulating algorithm is program code that automatically controls or influences human behavior. New artificial intelligence methods (for example, deep machine learning…
[1] Roman Igorevich Dremliuga, Candidate of Legal Sciences, Associate Professor at the School of Law, Far Eastern Federal University, Vladivostok, Russia.
[2] Aleksei Sergeevich Koshel, Candidate of Political Sciences, Associate Professor at the Department of Constitutional and Administrative Law, School of Law, Far Eastern Federal University, Vladivostok, Russia.
For citation: Dremliuga R. I., Koshel A. S. Artificial intelligence as a social regulator: pros and cons // Asia-Pacific Region: Economics, Politics and Law (Азиатско-Тихоокеанский регион: экономика, политика и право). 2018. No. 3. P. 55-68.
[3] The research was carried out with financial support from the Russian Foundation for Basic Research (RFBR) under scientific project No. 18-29-16129.
The paper is devoted to crimes committed using virtual reality technologies and to their legal qualification. It characterizes the optional features of the objective side of an offense and their significance when new digital technologies are involved, and analyzes in detail the factors that complicate the investigation of such crimes. Based on the results of the study, the authors conclude that virtual reality technology gives criminals entirely new opportunities. First, virtual reality makes it possible to manipulate a victim's emotions and consciousness at a wholly new level: the psycho-emotional effect is comparable in strength to that of events in the real world, yet it can be achieved remotely via the Internet. Second, because real-world devices are increasingly integrated into virtual environments, the consequences of actions in virtual reality extend into the real world. This means that many criminal acts that once required contact with the victim can now be performed remotely.
This article focuses on the regulation of maritime autonomous surface vessels from the perspective of the international law of the sea. It discusses the possibility of developing a legal framework for autonomous maritime navigation based on the laws and regulations governing the autonomous driving of land vehicles. The authors argue that the existing legal framework does not meet the goal of regulating autonomous navigation; however, the regulation of autonomous car testing and operation could serve as a model for a new legal framework for autonomous shipping. Despite the divergent approaches, some principles remain common to both domains, particularly cybersecurity and privacy. As computer systems replace the need for a master and crew on digitally managed ships, a low level of cybersecurity implies an increased risk of losing control over the vessel. In the authors' view, current legal acts, standards, and their drafts do not pay the necessary attention to the cybersecurity of autonomous ships. Moreover, current legislation provides no mechanisms to influence the behavior of shipowners and shipbuilders so that they apply the best available measures. The situation with privacy is similar. In effect, an autonomous ship is a natural tool for surveillance: to navigate the seas effectively, it must collect and process information pertaining to navigational safety and related matters. This raises the question of how such information should be collected, stored, processed, and deleted. The maritime community may therefore consider adopting the privacy approach taken in the regulation of autonomous cars.