Discussions of transparency always revolve around the technical side - explainability, interpretability, understandability - and end up in a paradox. Unveiling algorithms exposes them to the risk of attack and theft. At the same time, it gives tech companies the satisfaction that the job of building ethical algorithms is done once "transparency" is achieved.
Moreover, explainability and interpretability mean nothing to the user. If harm is imposed, they do not look for justice in the code.
Explainability and interpretability are for auditors. Transparency is for people, for the public.
They say information is power.
And wherever power concentrates, it needs separation and oversight.
Powerful tech companies
We are witnessing how much information tech companies have been acquiring about people, about markets, and even about governments. That information spans a broad spectrum, sometimes all-encompassing, and can be crunched and turned into insights that alter our behavior, opinions, financial circumstances, health, opportunities, physical security, and life at large.
The more intelligent these technologies grow, the more power their owners have and the less empowered their consumers find themselves.
So, high concentrations of power flow to... tech companies! Does this remind you of a familiar order?
Today we take it for granted, but not so long ago it was a groundbreaking thought: "Separation of powers refers to the division of powers into distinct branches of government. The intent is to prevent the concentration of unchecked power and to provide for checks and balances, in which the powers of one branch of government are limited by the powers of another branch—to prevent abuses of power and avoid autocracy."
Then how do we start distributing power to users in an ever more technological world?
The following are the questions we should ask tech companies:
Who: Do you have an internal board that is independent of the company's money-making machine? How much power does it have, or is it merely decorative? What are the values of the decision-makers?
How: How does the decision-making process work in the company? Who reports to whom?
When: Do you ensure proactive remediation of risk scenarios? How often do you assess and measure impact?
What: Do you have tools or frameworks for public scrutiny? How do you talk to your users and explore their experiences, good or bad? If bad, how do you respond to the feedback or remedy the harm imposed?
Why: Is the purpose of your technology clearly identified and communicated internally as well as externally?
Users do not need public algorithms; they need public answers to these questions.
Because governments have been left behind by, and are even vulnerable to, recent dramatic technological change, we need to ask them questions as well:
Will you reclaim power (strict regulation), or will you share it with big tech companies (flexible regulation)?
Finally, transparency alone in no way finishes the job of ethical AI. It is just one link in a long chain, connected to the next one: accountability.