Among the discussions around the ethical implications of using artificial intelligence (AI) systems, one of the most prominent topics is the accountability of AI agents. Learning algorithms inherently possess discriminatory potential, which directly impacts fundamental rights. Consequently, many stakeholders are mobilising to delineate the minimum standards inherent to building responsible AI. Embracing this reality, this paper aims to bolster the accountability principle, elevating it beyond a mere conceptual iteration and endowing it with dialogical and intrinsic significance within the framework of technology regulation. This is achieved through the application of the algorithmic impact assessment (AIA) instrument. The research delves into the functional and conceptual intricacies of the accountability principle within regulatory dynamics. It scrutinises the scope, objectives, and components of the AIA as an accountability instrument, positioned to underpin the structuring of the regulatory landscape for AI. The main outcome of the research is the recognition of the AIA as a tool for accountability and the allocation of responsibility. This recognition is grounded in an encompassing structure for understanding, managing, and evaluating the consequences stemming from the use of AI systems. The primary objective is to facilitate responsive regulatory behaviour aligned with the risk load demonstrated in the assessment, ranging from protection against bias to the safeguarding of fundamental rights.