Tax authorities worldwide make extensive use of artificial intelligence (AI) technologies to automate various aspects of their work, such as answering taxpayer questions, assessing fraud risk, risk profiling, and audit selection (choosing which taxpayers to inspect). Since this automation has raised concerns about the impact of non-explainable AI systems on taxpayers' rights, explainable AI (XAI) technologies appear to be fundamental for the lawful use of AI in the tax domain. This paper provides an initial map of the explainability requirements that AI systems must meet for tax applications. To this end, the paper examines the constitutional principles that guide taxation in democracies and the specific human rights system of the European Convention on Human Rights (ECHR), as interpreted by the European Court of Human Rights (ECtHR). Based on these requirements, the paper suggests how XAI approaches might be deployed to address the specific needs of the various stakeholders in the tax domain.
• Ongoing research projects, among them the Brazilian SPIRA and SoundCov initiatives, seek to diagnose Covid-19 and severe respiratory insufficiency through the analysis of voice recordings.
• Those voice recordings may also be used to infer information about various personal traits, including some considered sensitive under data protection law.
• Deploying such apps may bring significant benefits in the context of a pandemic, since telemedicine avoids the risks of infection to both potential patients and health-care professionals, compared with in-person consultation; those benefits, however, must be weighed against the ethical and data protection concerns mapped in this paper.
• The operation of voice-based medical apps involves various kinds of personal data, so those apps must comply with the requirements imposed by Brazil's General Data Protection Law (LGPD), such as the need for a legal basis for data processing and the purpose limitation of processing.
• The LGPD also grants users and other data subjects a series of rights, such as the right to erasure and the right to information about the processing, which any diagnosis app must implement.
Transparency is widely acknowledged as a core value in the governance of artificial intelligence (AI) technologies. However, scholarship on AI technologies and their regulation often casts this need for transparency in terms of requirements for the explanation of algorithmic outputs and/or decisions produced with the involvement of opaque black-box AI systems. Our article argues that this discourse has re-interpreted and reshaped transparency in fundamental ways, moving it away from its original meaning. The target of transparency (in most cases, the provider of AI software) determines and shapes what is made visible to the outside world, and there is no external check on the validity and accuracy of such mediated accounts and explanations, leaving transparency open to manipulation. Through a theoretically informed and critical analysis of the transparency provisions in the European Union's AI Act proposal, the article shows that the substitution of transparency with mediated explanations faces important technical constraints, creates opportunities and incentives for both providers and public-sector users of AI systems to adopt opaque practices, and reinforces secrecy requirements that gag accountability in practice. An approach to transparency as disclosure thus becomes necessary, even if not sufficient in and of itself, to ensure the accountable development and use of AI technologies in the European Union. Transparency needs to be reclaimed as a core concept, accountability tailored and reinforced, and the necessity for secrecy re-examined and cordoned off.