The article analyzes the transformation of trust during digitalization. Interaction between people on the Internet has given rise to a new digital environment of trust. A special role in this process is played by distributed ledger technologies, the best known of which is blockchain. Blockchain provides the technical basis for trust relations between unfamiliar remote partners, grounded in the transparency of transactions and the technical impossibility of altering them. The introduction of new digital technologies into all spheres of public life requires rethinking trust as a multidisciplinary category. The authors conclude that interpersonal, generalized, and institutional trust will persist in the digital society, but their forms of expression will change significantly as they become closely linked with the Internet. Keywords: digital society, interpersonal trust, generalized trust, institutional trust, blockchain.
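The tamper evidence that underpins this kind of trust can be illustrated with a minimal hash chain. This is a simplified sketch of the core distributed-ledger idea, not an implementation from the article; all names and data are illustrative.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's contents, including the previous block's hash."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_chain(transactions: list) -> list:
    """Link each block to its predecessor via the predecessor's hash."""
    chain, prev = [], "0" * 64
    for tx in transactions:
        block = {"tx": tx, "prev_hash": prev}
        prev = block_hash(block)
        chain.append(block)
    return chain

def is_valid(chain: list) -> bool:
    """Recompute every link; editing any non-final block breaks a link."""
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev:
            return False
        prev = block_hash(block)
    return True

chain = make_chain(["A pays B 5", "B pays C 3", "C pays A 1"])
assert is_valid(chain)
chain[0]["tx"] = "A pays B 500"   # tampering with recorded history...
assert not is_valid(chain)        # ...is immediately detectable
```

Because each block's hash covers the previous block's hash, rewriting an earlier transaction invalidates every later link, which is the technical property the abstract describes as the impossibility of changing transactions.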
Introduction: this paper focuses on the legal problems of applying artificial intelligence technology to socio-economic problems. The convergence of two disruptive technologies, Artificial Intelligence (AI) and Data Science, has fundamentally transformed social relations in various spheres of human life. A transformational role was played by classical areas of artificial intelligence such as algorithmic logic, planning, knowledge representation, modeling, autonomous systems, multi-agent systems, expert systems (ES), decision support systems (DSS), simulation, pattern recognition, image processing, and natural language processing (NLP), as well as by special areas such as representation learning, machine learning, optimization, statistical modeling, mathematical modeling, data analytics, knowledge discovery, complexity science, computational intelligence, event analysis, behavior analysis, social network analysis, deep learning, and cognitive computing. The AI and Big Data technologies mentioned are used in various business spheres to simplify and accelerate decision-making of different kinds and significance. At the same time, self-learning algorithms create or reproduce inequalities between participants in civil circulation and lead to discrimination of various kinds through algorithmic bias. Purpose: to define the areas and directions of legal regulation of algorithmic bias in the application of artificial intelligence, based on an analysis of Russian and foreign scientific concepts. Methods: empirical methods of comparison, description, and interpretation; theoretical methods of formal and dialectical logic; special scientific methods such as the legal-dogmatic method and the method of interpretation of legal norms.
Results: artificial intelligence has many advantages (it enhances creativity, services, lifestyle, and security, and helps solve various problems), but at the same time it raises numerous concerns because of its harmful effects on individual autonomy, privacy, and fundamental human rights and freedoms. Algorithmic bias exists even when the algorithm developer has no intention to discriminate, and even when the recommendation system does not accept demographic information as input: even without this information, by thoroughly analyzing the similarities between products and users, the algorithm may end up recommending a product to a very homogeneous set of users. The identified problems and risks of AI bias should be taken into account by lawyers and developers and mitigated to the fullest extent possible, both when developing ethical principles and requirements and in the field of legal policy and law at the national and supranational levels. The legal community sees an opportunity to address algorithmic bias through various kinds of declarations, policies, and standards to be followed in the development, testing, and operation of AI systems. Conclusions: if left unaddressed, biased algorithms could lead to decisions with a disparate collective impact on specific groups of people, even without the programmer's intent to discriminate. Studying the anticipated and unintended consequences of applying AI algorithms is especially necessary today because current public policy may be insufficient to identify, mitigate, and remedy the effects of such non-obvious bias on participants in legal relations. Solving the issues of algorithmic bias by technical means alone will not produce the desired results. The world community recognizes the need to introduce standardization and develop ethical principles that would ensure proper decision-making with the application of artificial intelligence.
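The disparate impact described above can be measured even when the protected attribute is never a model input. A minimal sketch of one common check, the "four-fifths rule" ratio of favourable-outcome rates between groups, is shown below; the data and the 0.8 threshold are illustrative, not from the article.

```python
# Sketch: quantifying disparate impact of automated decisions.
# The 0.8 threshold follows the "four-fifths rule" convention used in
# US employment-discrimination practice; the data here are made up.

def disparate_impact(decisions, groups, protected, reference):
    """Ratio of favourable-decision rates: protected vs reference group."""
    def rate(g):
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        return sum(outcomes) / len(outcomes)
    return rate(protected) / rate(reference)

decisions = [1, 0, 0, 0, 1, 1, 1, 0]                 # 1 = favourable outcome
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]  # inferred group labels

ratio = disparate_impact(decisions, groups, protected="a", reference="b")
print(f"impact ratio = {ratio:.2f}")   # 0.33 here, well below the 0.8 mark
```

Audits of this kind require group labels at evaluation time even if the model never sees them, which is one reason purely technical fixes interact with the data-governance questions raised in the conclusions.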
It is necessary to create special rules that would restrain algorithmic bias. Regardless of the areas where such violations are revealed, they share the standard features of unfair behavior by participants in social relations and can be qualified as violations of human rights or of fair competition. Algorithmic bias can be minimized by requiring that data be introduced into circulation in a form that does not allow explicit or implicit segregation of societal groups: it should become possible to analyze only data stripped of explicit group attributes, yet representing society in its full diversity. As a result, the AI model would be built on the analysis of data from all socio-legal groups of society.
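The data-circulation rule proposed above can be sketched as a preprocessing step that removes explicit group attributes before records enter an AI pipeline. The column names are illustrative, and, as the earlier discussion of recommendation systems notes, this handles only explicit attributes; correlated proxy features (e.g. a postcode) can still leak group membership implicitly.

```python
# Sketch: stripping explicit group attributes from records before analysis.
# Attribute names are hypothetical; real rules would define the list legally.

PROTECTED = {"gender", "ethnicity", "age", "religion"}

def strip_protected(records: list[dict]) -> list[dict]:
    """Return copies of the records without explicit protected attributes."""
    return [{k: v for k, v in r.items() if k not in PROTECTED}
            for r in records]

raw = [{"income": 52000, "postcode": "10115", "gender": "f"},
       {"income": 48000, "postcode": "10117", "gender": "m"}]

clean = strip_protected(raw)
print(clean[0])   # {'income': 52000, 'postcode': '10115'}
```

The function returns new dictionaries rather than mutating the originals, so the full-fidelity data can still be retained under access controls where the law permits.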
The paper examines the concept of "digital inheritance", a new legal term that has become widespread in many legal systems and refers to the transfer of rights to digital assets in a broad sense. It is established that only transferable digital assets are subject to transfer by way of universal succession. It is shown that the possibility of digital inheritance, whether by law or by will, is limited, depending on the object, by the terms of the contract (license, services, confidentiality) and/or the constitutional right to privacy.
Introduction: the article deals with the protection of rights to digital content created with the use of artificial intelligence technology and neural networks. This topic is becoming increasingly important with the development of these technologies and the expansion of their application in various areas of life. The problems of protecting the rights and legitimate interests of developers have come to the fore in intellectual property law. With the help of intelligent systems, it is possible to create not only legally protectable content but also other data, relations concerning which are likewise subject to protection. In this regard, of particular importance are the standardization of requirements for the procedures and means of storing big data used in the development, testing, and operation of artificial intelligence systems, as well as the use of blockchain technology. Purpose: based on an analysis of Russian and foreign scientific sources, to form an idea of the areas of legal regulation and the prospects for the application of artificial intelligence technology from a legal perspective. Methods: empirical methods of comparison, description, and interpretation; theoretical methods of formal and dialectical logic; special scientific methods (the legal-dogmatic method and the method of interpretation of legal norms). Results: analysis of the practice of using artificial intelligence systems has shown that today's intelligent algorithms encompass a variety of technologies that are based on or related to intelligent systems but do not always fall under the concept of classical artificial intelligence. Strictly speaking, classical artificial intelligence is only one of the intelligent system technologies. The results created by autonomous artificial intelligence have the features of works.
At the same time, there are some issues of a public law nature that require resolution: obtaining consent to data processing from the data subjects, determining the legal personality of these persons, and establishing legal liability in connection with the unfair use of data obtained for decision-making. Standardization in this sphere and the application of blockchain technology could help resolve these issues. Conclusions: given the identified and constantly changing range of high technologies that fall under the definition of artificial intelligence, various issues arise, which can be divided into several groups. A number of issues of legal regulation in this area have already been resolved and are no longer relevant for advanced legal science (the legal personality of artificial intelligence technology); some issues can be resolved using existing legal mechanisms (the analysis of personal data and other information in the course of applying computational intelligence technology for decision-making); other issues require new approaches from legal science (the development of a sui generis legal regime for the results of artificial intelligence technology, provided that an original result is obtained).