
#5: AI & Responsibility



Why be responsible when you can still get away with being irresponsible?


  1. You cannot always get away with it

  2. There are actually benefits to being responsible



But what does it really mean to be responsible? Do no harm, do not discriminate, do not exploit, do not manipulate - but is everything that isn't forbidden therefore allowed? Is what counts as (un)acceptable the same for everyone? Harm, fairness, justice, benefit, and goodness are all notions from a realm of subjective abstractions.


***


After the A-level algorithm scandal in the UK, Ofqual head Sally Collier resigned. The Guardian writes: “The government was forced to apologise for the fiasco that resulted in disadvantaged students being worst hit by downgrades, while private school pupils’ results were boosted under the algorithm. University admissions were thrown into chaos and public confidence in the exams system plummeted.”



What happened?


Impact: One team with several decision-makers affected the future of thousands of students.

Reputation: Public trust in government, in the institution, and in the technology diminished.

Accountability: The head of Ofqual resigned.



Why did it happen?


Incompetence: The purpose and context of the technology were not well thought through.

Negligence: Affected stakeholders were not consulted, and no adequate impact assessment was conducted.

Emerging field: Lack of best-practice examples and standards.



***


Responsible Companies

If you are not doing harm, does it necessarily mean you are doing good? - No, you are just doing your business. But if you are not doing harm, does it mean you are doing responsible business? - Yes. Here is the catch, though: all too often you are lulled into believing you do no harm until the harm escalates. What distance do you have to walk towards responsible AI development and deployment? - Competence and care (the antonyms of incompetence and negligence), questioning all the “hows” and “whys” of the AI you are about to deploy in the real world.


Furthermore, to walk beyond responsible towards good, you may consider a recent McKinsey study showing that US millennials increasingly value brand purpose. Employees, too, often seek value alignment with their company.


Benefits: user loyalty, public trust, positive impact, sustainability, and long-term gains, including financial ones.



Responsible Policy Makers

We do not want to nip technological development in the bud; at the same time, we do not want to allow damage. To strike the right balance, policy makers need to be equipped with a comprehensive understanding of AI technologies, business models, risks, and broader impacts.


Benefits: public trust, unlocking technologies’ full potential, economic growth and prosperity, empowered citizens.



Responsible End Users

Awareness is key. End users also bear a responsibility to better understand how the technologies they use every day actually work.


Benefits: better control, less risk, harnessing technology.




Everyone bears their portion of responsibility - companies, policy makers, end users, watchdogs. You are creating the environment of which you become a part, and sooner or later it reflects back on you - as an employee, as a company, as a government, and as a user. Do what you want to be part of.


Awareness, competence, and care are what make responsible development of future technologies possible.


