Over the last few years, the interpretability of classification models has been a very active area of research. Recently, the concept of interpretability was given a more specific legal context. In 2016, the EU adopted the General Data Protection Regulation (GDPR), which contains a right to explanation for people subjected to automated decision-making (ADM). The regulation itself says little about what such a right might imply. As a result, since the introduction of the GDPR there has been an ongoing discussion not only about the need for such a right, but also about its scope and practical consequences in the digital world. While there is no doubt that the right to explanation may be very difficult to implement due to technical challenges, the difficulty of explaining how algorithms work cannot be considered sufficient reason to abandon this legal safeguard entirely. The aim of this article is twofold. First, to demonstrate that the interpretability of "black box" machine learning algorithms is a challenging technical problem for which no complete solution has yet been found. Second, to demonstrate how the explanation task can instead be accomplished using well-known and well-trialled IT solutions, such as event logging or statistical analysis of the algorithm's behaviour. Based on the evidence presented in this paper, the authors find that the most effective solution would be to benchmark automated decision-making algorithms using certification frameworks, thus balancing the need to ensure adequate protection of individuals' rights against the understandable expectation of AI technology providers that their intellectual property rights be protected.
For several years there has been debate among EU Member States on the need to regulate cross‐border access to electronic data used as evidence in criminal proceedings and how best to do this. The existing model of cooperation, based mainly on bilateral agreements, appears dysfunctional and is perceived by many as a barrier to effectively combatting rising cross‐border crime. In response, work has begun on several new legal mechanisms, most importantly the draft e‐Evidence Regulation from the European Commission and a proposal to extend the Convention on Cybercrime – already in operation for almost 20 years – with an additional new protocol. At the same time, the United States has proposed its own model of cooperation, arising from the CLOUD Act. This article discusses the current state of play and the expected shape of future regulations – in terms of both facilitating law enforcement cooperation and clarifying obligations imposed on digital service providers.
The aim of this article is to verify whether existing international legal mechanisms provide effective protection of privacy in cyberspace in supra-regional terms. For years, human rights systems have been perceived as effective mechanisms for strengthening the area of fundamental rights. Nevertheless, in the case of activities taking place in cyberspace, the protective standards arising from international treaties seem insufficient. Despite the dynamic expansion of legislation in the area of data protection, the scope of the standards in use remains local (national or regional) rather than global. Hence, it is necessary to consider whether attaining a level of privacy protection in cyberspace equal to that in physical space requires new legal mechanisms that not only overcome the limitations of existing international agreements, but also enhance trust in, and the credibility of, the global data market, given that this market is essential to the development of modern society.
In the debate on the limits of permissible surveillance in democratic states, now two decades old, no single, generally accepted position has yet been reached. The problem occupies researchers from various disciplines: lawyers as well as sociologists and philosophers. In essence, the question of the limits of surveillance is a question about the definition of the state, its system of government and the form of power it exercises. In recent years, particular importance has attached to the discussion of mass electronic surveillance, that is, a form of oversight in which data are collected in bulk for further analysis. In the assessment of the Venice Commission, mass surveillance must evoke justified associations with non-democratic forms of government. At the same time, however, proponents of this measure point to the inevitability of its use, given the new challenges of ensuring security in the digital world. The aim of this article is to take up this subject from a different research perspective, in particular to attempt to answer the question of whether, regardless of the legal safeguards implemented, the very concept of mass electronic surveillance can be reconciled with the functioning of a democratic state. The starting point for these considerations is the observation that surveillance of this kind can naturally be used as a mechanism of social control, of restricting freedom of speech, or of influencing electoral preferences, that is, for purposes which, though familiar to authoritarian states, are alien to democracy.