The autonomy of software agents and the consideration of their intentional states have led us to examine divergences and defects in the declaration of will of the software agent. Nevertheless, one topic still requires a combined legal and artificial intelligence analysis, since the actions of autonomous software agents raise the issues of guilt and negligence. In this paper we identify the concepts of guilt and negligence and their various levels, in both the civil and criminal domains, and from these we enquire whether it is possible to develop a system of knowledge representation and reasoning, under a formal framework based on Logic Programming, that allows the evaluation and representation of the possible levels of guilt in the actions of autonomous software agents. To accomplish this representation, we must look at the general civil and criminal legal theory of guilt and distinguish the different levels of guilt that the actions of software may encompass. We are aware, of course, that it will often not be an easy task to distinguish different behaviours, to relate them to different environmental conditions and, from these, to try to understand the reasons that led the software to adopt a certain behaviour.

1. The complexity of recognizing intentions in software behaviour must be acknowledged (Pereira & Saptawijaya, 2016: 143-144).
2. The Portuguese legal framework on information society services, particularly electronic commerce (DL 7/2004), already recognizes, in article 33, that electronic contracting may occur without human intervention.

3. See the well-known "trolley problem".

References

Dias, Figueiredo. 2012. Direito penal, Parte Geral, Tomo I, Questões fundamentais, A doutrina geral do crime. Coimbra: Coimbra Editora.
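As an illustration of the kind of representation envisaged, the levels of guilt distinguished by general legal doctrine (direct intent, dolus eventualis, conscious and unconscious negligence) could be encoded over a few mental-state predicates. The following is a minimal sketch in Python rather than in the Logic Programming framework the paper proposes, and the predicate names (`intended`, `foreseen`, `accepted`) are our own illustrative assumptions, not an established formalization:

```python
from enum import Enum

class GuiltLevel(Enum):
    DOLUS_DIRECTUS = "intentional: the harmful result was the agent's goal"
    DOLUS_EVENTUALIS = "intentional: the result was foreseen and accepted"
    CONSCIOUS_NEGLIGENCE = "negligent: the result was foreseen but not accepted"
    UNCONSCIOUS_NEGLIGENCE = "negligent: the result was foreseeable but not foreseen"

def classify(intended: bool, foreseen: bool, accepted: bool) -> GuiltLevel:
    """Map three hypothetical mental-state predicates about an agent's run
    onto a level of guilt, following the ordering of general legal doctrine:
    direct intent, then dolus eventualis, then the two forms of negligence."""
    if intended:
        return GuiltLevel.DOLUS_DIRECTUS
    if foreseen and accepted:
        return GuiltLevel.DOLUS_EVENTUALIS
    if foreseen:
        return GuiltLevel.CONSCIOUS_NEGLIGENCE
    return GuiltLevel.UNCONSCIOUS_NEGLIGENCE
```

In a Logic Programming setting these predicates would be established by rules over the agent's logged behaviour and environmental conditions, rather than passed in as booleans; the sketch only fixes the decision structure among the levels.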