Ana Chubinidze

#3: AI & Accountability



Accountable: “Someone who is accountable is completely responsible for what they do and must be able to give a satisfactory reason for it”


Responsible: “to have control and authority over something or someone and the duty of taking care of it, him, or her” *



Therefore, when you are accountable, you are responsible both for the “what”** (actions) and for the “why” (explanations).




Explainability as a technical feature


The black box is a long-standing problem for AI developers: it is hard, and sometimes impossible, to explain an algorithm’s decision-making process and the justification for its decisions in human terms. Being able to observe cause and effect in the system is often referred to as interpretability. It goes without saying why it is important for a human developer to be able to explain and interpret machine decisions.
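To make the contrast concrete, here is a minimal sketch: a toy linear scoring rule (the feature names and weights are purely illustrative) whose every decision decomposes into per-feature contributions. That traceable link between input and outcome is exactly what a black-box model does not offer out of the box.

```python
# Illustrative only: a tiny interpretable scoring rule.
# Feature names and weights are invented for the example.
FEATURE_WEIGHTS = {"income": 0.4, "debt": -0.7, "years_employed": 0.2}

def score(applicant: dict) -> float:
    # The decision: a single number ("what").
    return sum(FEATURE_WEIGHTS[name] * applicant[name] for name in FEATURE_WEIGHTS)

def explain(applicant: dict) -> dict:
    # The justification: each feature's contribution to that number ("why").
    return {name: FEATURE_WEIGHTS[name] * applicant[name] for name in FEATURE_WEIGHTS}

applicant = {"income": 3.0, "debt": 2.0, "years_employed": 5.0}
print("decision score:", score(applicant))
print("per-feature contributions:", explain(applicant))
```

With an opaque model, the first print is all you get; the second, the “why”, is what explainability research tries to recover.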




Explainability as an organizational feature


Good corporate governance practices and laws have always been challenging for the corporate world and policy-makers alike. Artificial Intelligence has thrown these challenges into even sharper relief and added new colours to the palette. For instance, the existing legal system applies only to human actors: you cannot ask an algorithm for justice or compensation. But if a machine makes decisions independently of a human, and those decisions are technically unexplainable, should we attribute the resulting harm solely to the developer? Or to the managers?


How can organizations ensure and check for ethical standards internally? How do they hold each other accountable? That is where what I would call a purpose-transparency-accountability*** chain comes in: identify the purpose of the AI product at the management level, communicate it transparently and precisely within the organization, and hold human actors accountable by ensuring that each actor has a clearly defined function. And do not forget to institute policies for whistleblowing.




Explainability as a social feature


Just as explainability is needed internally, external actors also require it. Society usually has prescribed factors and expectations for the decision-making process and the reasoning behind it. For example, hiring decisions should not be based on race, and a user’s privacy should be respected in relation to third parties.

For companies, it would be highly costly to make such society-facing explanations a default feature of every AI product. If explanation is instead an on-demand function, another important question arises: when is it reasonable to demand one? When harm has been done? When there is reason to suspect that harm might be done?


Existing legal instruments (hard law) are not clearly applicable to such demands. Other codes or best practices (soft law) are practically non-existent so far.




And finally

At this moment, explanations are not available at the technical, organizational, or social level. Nor is prioritizing them especially noticeable in government or corporate ranks. Meanwhile, clash follows clash, and we do not seem to find human actors responsible and accountable when we ask for it. And the fact that no one wants to hold themselves accountable suggests that socially undesirable intentions have been planted in the algorithms, or negligently scattered around them.


We should build reliable frameworks for accountability not only legally but also culturally.








* Definitions from the Cambridge Dictionary

** I will discuss the “what” in an upcoming blog post on responsibility.

*** See blog #1 AI & Purpose and blog #2 AI & Transparency.

