You do not write software because you can, or... do you?
Ideally, a specific purpose has been identified and clarified for you, you have clear metrics for success, and you know how to spot the signs of failure. But we do not live in an ideal world, right?
Why knowing why is important
Let’s look at this through two perspectives:
Short-term or project-based
Machine learning developers are often given a specific task: to write an algorithm that determines a credit score, for example. The developer builds an algorithm that performs reasonably well technically and looks no further. This I would call task-driven innovation.
In a purpose-driven scenario, the purpose of the project is identified at the management level and then communicated precisely, for example: “We want to build the algorithm because we want to make the process faster.” “We also want to make the process fairer,” and that is when the developer will take heed of bias. “We additionally want to continually ensure the robustness of the model through yearly algorithmic assessments,” and that is when the developer will attend to safety and reliability.
Now the AI ethics principles you have set out will make sense - a guide the developer can refer to when working toward a specific purpose. The developer also gains meaning in their work, context, responsibility to complete it, and a little more headache - a position where they can make a difference, because an algorithm deployed in the real world will make a difference to people’s lives, for better or for worse. Developers decide, not the black box, and not only the user.
When you have a precisely identified purpose and metrics for what it means for the model to be successful, it is also easier to notice deviations at an early stage. A deviation threshold should be determined as well: it gives you an additional opportunity to catch a risk scenario proactively and mitigate it before it escalates.
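To make this concrete, a deviation check can be as simple as comparing a monitored metric against the baseline agreed on when the purpose was defined. This is a minimal sketch; the metric names, baselines, and thresholds below are illustrative assumptions, not values from any real system:

```python
# Minimal sketch of a deviation check: compare current model metrics
# against baselines agreed on when the project's purpose was defined.
# Metric names, baselines, and thresholds are illustrative assumptions.

BASELINE = {"auc": 0.82, "approval_rate_gap": 0.03}    # agreed at launch
THRESHOLDS = {"auc": 0.05, "approval_rate_gap": 0.02}  # allowed drift per metric

def check_deviation(current: dict) -> list:
    """Return (metric, drift) pairs for metrics beyond their allowed threshold."""
    alerts = []
    for name, baseline in BASELINE.items():
        drift = abs(current[name] - baseline)
        if drift > THRESHOLDS[name]:
            alerts.append((name, round(drift, 4)))
    return alerts

# Example: performance dropped and the fairness gap widened.
print(check_deviation({"auc": 0.74, "approval_rate_gap": 0.06}))
# -> [('auc', 0.08), ('approval_rate_gap', 0.03)]
```

In practice such a check would run on a schedule against fresh data, but the principle is the same: the purpose dictates which metrics are watched, and the threshold decides when a deviation becomes a risk worth acting on.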
Long-term or value-based
The long-term perspective is far more complex. What do you seek to achieve with your AI solution once it becomes technically far more capable? More generally, what do we really want to do with such powerful intelligence in our hands?
To replace human workers? To co-create with us?
To completely eliminate bias from decision-making? Then we should probably build an intelligence unlike that of a human.
To make friends with machines? Then we had probably better build an intelligence similar to our own.
To fight better wars? To explore the universe? To cure all diseases? Notice that some of these goals are mutually exclusive. Most generally, we want machines to understand our goals, to adopt them, and to retain them. That is when alignment becomes a real challenge.
All in all
Purpose is the destination we are heading toward. Roaming without a destination also has its benefits - exploration, inspiration, experimentation. But in the real world we must be careful with experimentation. At the end of the day, what matters are outcomes - whether arrived at coincidentally or intentionally - and those become the destination.