AI Governance Approach
Publication date
18.09.2023
We live in an unprecedented age: never before have we seen such a global and concerted effort directed at a single technology in so brief a span of time. While AI technologies have been around for decades, the developments of the last several years have shaken the world in both good and bad ways. They have led governments, companies, civil society, and everyone in between to focus on one major question: how to harness the immense power of AI without causing harm to life on Earth (and perhaps beyond). The last three years alone have seen an unprecedented number of policy papers, frameworks, and guidelines, along with the emergence of new communities, new organizations, new departments within existing organizations, and new draft regulations.
​
But at the same time, most of these efforts appear to repeat mistakes humans have made throughout history in other areas. We do not seem to learn the lesson. The biggest misconception, repeated over and over, is the assumption that a rigid framework or model will solve the entire problem of AI governance.
​
Let us pause here. Models and frameworks have long been essential tools in fields such as economics, the natural sciences, and the social sciences. They serve as simplified representations of complex systems, helping us understand, predict, and make decisions about the real world. However, they have limitations that become apparent when faced with the ever-changing and increasingly interconnected nature of real-world scenarios.
​
Simplification of Reality: Models and frameworks necessarily simplify reality by making assumptions and ignoring certain variables to make the problem manageable. This simplification can lead to inaccuracies and a lack of nuance, which becomes problematic when dealing with complex, dynamic systems.
Inflexibility: Many models and frameworks are rigid and have difficulty adapting to unexpected changes or emerging phenomena. Real-world scenarios often involve unforeseen factors and non-linear interactions, rendering traditional models less effective in such situations.
​
Context Dependency: Models and frameworks are often context-dependent, meaning they work well under specific conditions but may fail when applied to different contexts. Economic models, for example, may work in one country but not in another with a different economic structure.
​
Data Limitations: Models heavily rely on historical data and assumptions, but real-world situations can change rapidly, making it challenging to gather accurate data and assumptions for decision-making.
​
Human Behavior Complexity: Human behavior, a critical element in many models, is notoriously difficult to predict and model accurately due to its inherent complexity, cultural variations, and emotional factors.
​
These limitations have especially far-reaching effects in the context of AI. AI technologies are advancing at an unprecedented pace, outstripping the ability of static governance models and frameworks to keep up. AI systems can exhibit unpredictable behaviors due to their complexity and reliance on vast amounts of data. Ethical concerns about bias, fairness, and discrimination in AI systems are widespread, yet existing frameworks neither adequately address these issues nor provide practical solutions for ensuring fairness in AI applications. AI governance and ethics are global issues, but there is often a lack of consensus on standards and regulations across countries and regions, and overly rigid frameworks will struggle to accommodate the diversity of approaches and values worldwide.
​
To address these limitations and develop a more adaptable, practical way of understanding and responding to real-world challenges, we are building an AI Governance Approach (rather than a Framework). The advantage of an approach is that it shows how to choose the right path, unlike a framework, which prescribes which path to take. Our AI Governance Approach rests on six cornerstones:
​
- Embracing Human-AI Co-Evolution
- Implementing System Theoretic Risk Management
- Documenting Governance Cards for Transparency and Compliance
- Asking Good Questions
- Sourcing Collective Intelligence
- Leveraging AI Tools and Other Emerging Technologies
​
At the heart of the Approach lies what we call a 'metatheory' of human-AI co-evolution, which observes how humans and AI evolve interdependently: a process in which nothing is static and every interaction is characterized by fluidity and contextual interdependence. We will continually ask how humans are affected by the growing prevalence of AI applications, and how AI in turn changes in response to human behavior, along the continuum of human-AI interaction. This prompts us to take a systemic view of the risks of AI, existential and otherwise, which has led us to develop risk management methods rooted in systems theory. Systems theory views organizations as complex, interrelated systems composed of elements working together toward common goals; it provides a holistic way of understanding organizations by considering the interactions between their components and their environment. This holistic, systemic perspective also runs through our effort to advance AI transparency through comprehensive documentation practices. In this vein, we are introducing "Governance Cards," a unified approach to AI documentation that goes beyond traditional practices such as model cards and AI factsheets, offering a more transparent, interactive, user-friendly, and regulation-oriented method guided by comprehensive questions.
​
However, good questions are not only relevant for documentation. In our AI Governance Approach, questioning practices underlie the entire AI development life-cycle, embedded in our products, services, and educational materials. The Greek philosopher Socrates famously employed a method now known as the Socratic Method, a powerful approach to inquiry based on disciplined, rigorously thoughtful dialogue; while rooted in philosophy, it has been widely used in psychology and education and can be applied just as effectively in a business context. At the same time, asking good questions is only halfway to the truth. To achieve a higher level of inclusivity and fairness, what we really need is deliberation. Solving society's most pressing issues, especially those posed by AI, demands deliberation by and with those directly impacted by AI, crystallized as a form of collective intelligence. At AdalanAI, we champion this notion and are developing a platform that facilitates democratic deliberation and sources collective intelligence to inform the most critical decisions concerning AI. Finally, we are not in the business of painting a doom-and-gloom future for AI; on the contrary, we sincerely hope and expect that this technology can be life-changing for everyone and everything on this planet. That is why we aim to leverage this enormous potential and are working to carefully incorporate AI technologies into our own products: from ensuring data quality and security with the latest developments in AI to agent-based simulations for scenario-based policy planning, these tools let us move further, faster, and better toward the safest and most beneficial AI.
​
Embracing Human-AI Co-Evolution
In the dynamic landscape of AI, the interaction between humans and machines has evolved beyond mere technological determinism. Computers have reached a level of intelligence that continues to grow, profoundly affecting our lives. Yet the impact of AI is not a one-way street. Rather, it is a dynamic, interdependent process, akin to a living, evolving system where nothing remains static. We describe this phenomenon as human-AI co-evolution and introduce it as a 'metatheory' that invites us to explore the intricate dynamics at play: dynamics that transcend conventional theories and delve into the heart of human existence.
To understand the nature of this co-evolution, we must first dispel the notion of AI as a neutral force. AI's impact is not solely about the technology itself but extends to how humans use, interact with, and adapt to it. It involves norms, cultural shifts, perceptions, and the intentions of system developers. Furthermore, self-learning AI systems absorb knowledge and behaviors from their human interactions, shaping their own development and influencing the humans they engage with. In this intricate process of evolution, both humans and AI are constantly changing, learning, and adapting.
​
Implementing System Theoretic Risk Management
In the pursuit of business excellence, it is no longer sufficient to merely react to challenges as they arise; the modern landscape demands preemptive measures against potential disruptions. AI, a disruptive force in its own right, embodies both unparalleled opportunities and unprecedented risks. AI risk management informed by systems theory is the proactive stance that forward-looking organizations adopt to identify, analyze, and mitigate risks before they escalate into crises. Given the interconnected and intricate nature of AI systems, organizations cannot afford a narrow perspective focused only on immediate functionality. Rather, a holistic view, rooted in systems thinking, becomes imperative to comprehend the interdependencies and potential ripple effects that AI implementations can trigger across the business ecosystem.
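To make the ripple-effect idea concrete, here is a minimal sketch that models an AI deployment as a dependency graph and asks which business functions a single upstream failure can reach. It assumes the `networkx` Python library, and all component names are hypothetical; this is an illustration of systems-style analysis under those assumptions, not AdalanAI's actual method.

```python
# A sketch of systems-style ripple-effect analysis, assuming the
# `networkx` library; component names are hypothetical.
import networkx as nx

# Model the AI deployment as a directed graph: an edge A -> B means
# "a failure in A can propagate to B".
system = nx.DiGraph()
system.add_edges_from([
    ("third_party_data_api", "data_pipeline"),
    ("data_pipeline", "model_training"),
    ("model_training", "deployed_model"),
    ("deployed_model", "decision_service"),
    ("decision_service", "customer_support"),
    ("decision_service", "compliance_reporting"),
])

# The ripple effect of a single failure is every component reachable
# downstream of it.
failed = "data_pipeline"
impacted = nx.descendants(system, failed)
print(f"A failure in '{failed}' can ripple into: {sorted(impacted)}")
```

Even a toy graph like this makes the systems-theory point: a fault in one upstream component can surface in business functions, such as compliance reporting, that never touch the model directly.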
​
Documenting Governance Cards for Transparency and Compliance
As AI systems continue to proliferate across a wide range of industries and fields, the need for standardized procedures to document these systems has become increasingly apparent. However, the establishment of such procedures has been inconsistent and fragmented. To address this gap, several approaches have emerged, ranging from traditional model cards to AI factsheets. Big tech companies such as Google, Microsoft, Meta, and IBM have also introduced their own frameworks for AI documentation. While these approaches represent progress towards responsible AI, they do not yet fully address today's regulatory and ethical requirements for AI transparency.
With this in mind, we are introducing "Governance Cards": our unified approach to AI documentation that transcends conventional methods. It offers a more transparent, interactive, comprehensive, user-friendly, and regulation-oriented way to document AI systems. This all-encompassing tool for AI transparency equips organizations with effective means for self-assessment and prepares them for existing and emerging AI regulations.
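To give a feel for the idea, below is a hypothetical Python sketch of what a Governance Card record might capture. The actual Governance Card schema is not published in this post, so every field here is an illustrative assumption.

```python
# A hypothetical sketch of a Governance Card record; the real schema is
# not published in this post, so all fields are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class GovernanceCard:
    system_name: str
    intended_use: str
    training_data_sources: list[str]
    known_limitations: list[str]
    fairness_assessments: dict[str, str] = field(default_factory=dict)  # metric -> result
    applicable_regulations: list[str] = field(default_factory=list)     # e.g. "EU AI Act"
    open_questions: list[str] = field(default_factory=list)             # guiding questions still unanswered

card = GovernanceCard(
    system_name="loan-approval-v2",
    intended_use="Pre-screening of consumer loan applications",
    training_data_sources=["internal_applications_2019_2022"],
    known_limitations=["Not validated for applicants under 21"],
    applicable_regulations=["EU AI Act (high-risk category)"],
    open_questions=["How are rejected applicants informed of the AI's role?"],
)
print(card.system_name, "-", card.intended_use)
```

The point of the structure, whatever the final schema, is that regulation-oriented fields and unanswered guiding questions live alongside the technical description rather than in a separate document.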
​
Asking Good Questions
In the ever-changing landscape of human advancement, one principle remains unwavering: the immense power of inquiry. From the deep annals of history to the transformative digital age, the act of asking questions has consistently served as a catalyst for change, exploration, and enlightenment. The ability to ask good questions is profoundly important in nearly every facet of life and helps us navigate the complexities of our world. Curiosity is basic human nature, and asking good questions is, fortunately, a skill that can be learned and trained.
The Greek philosopher Socrates recognized the importance of questioning, famously employing a method now known as the Socratic Method. Based on the practice of disciplined, rigorously thoughtful dialogue, it is a powerful approach to inquiry that stimulates critical thinking, elicits deeper insights, and uncovers underlying assumptions. While it has its roots in philosophy, the Socratic Method has been widely used in psychology and education, and it can be applied effectively in a business context.
An inquisitive mindset and Socratic methods are embedded at the heart of our organizational culture, as well as in all the products and services we offer our partners and clients. We will be incorporating templates of 'good questions' into our software products, sharing our questioning skills through our educational services, and encouraging a curious mindset in the community from which we source collective intelligence.
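As an illustration only, here is what stage-keyed question templates might look like in code. The post does not publish the actual templates, so the life-cycle stages and the questions themselves are assumptions in the Socratic spirit described above.

```python
# Illustrative sketch: Socratic-style question templates keyed to AI
# life-cycle stages. Stages and questions are assumptions, not the
# actual templates shipped in any product.
QUESTION_TEMPLATES = {
    "problem_framing": [
        "What assumption are we making about who benefits from this system?",
        "What evidence would change our mind about deploying it?",
    ],
    "data_collection": [
        "Whose data is missing, and what does that absence imply?",
    ],
    "deployment": [
        "If this prediction is wrong, who bears the cost?",
    ],
}

def questions_for(stage: str) -> list[str]:
    """Return the guiding questions for a given life-cycle stage."""
    return QUESTION_TEMPLATES.get(stage, [])

print(questions_for("problem_framing"))
```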
​
Sourcing Collective Intelligence
The emergence of AI technologies such as ChatGPT has sparked a global competition, not only to develop innovative AI tools but also to draft regulations that ensure their responsible use. In today's rapidly evolving AI landscape, it is imperative to harness the collective intelligence of diverse communities to shape AI policy. However, the discourse on AI is often confined to a select group of experts, academics, and policymakers, limiting the participation of the general public in this transformative era. This discrepancy raises concerns about potential risks and a monopolisation of decision-making power by technocratic elites.
Collective intelligence, a force that has driven human achievements throughout history, offers a solution. Just as democracy thrives on the informed participation of the masses, AI policymaking should also involve a broad spectrum of society. We believe that collective intelligence platforms that mimic the model of consensus democracy can bridge this gap by enabling citizens to actively participate, collaborate, and deliberate on AI-related challenges. These platforms should focus on exchanging knowledge, fostering cooperation, and building consensus about pressing issues in the AI domain. By promoting inclusive and informed discussions through accessible platforms, we can legitimize AI policymaking processes and adequately address society's diverse needs and concerns. In this way, we democratize the shaping of AI ethics, governance, and policy and make the future of AI a collective endeavour that benefits everyone.
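For flavor, here is a toy sketch of one way consensus-building can be made computational, in the spirit of open deliberation tools such as Polis. The vote data and the scoring rule are illustrative assumptions, not a description of AdalanAI's platform.

```python
# Toy group-informed consensus scoring: a statement ranks highly only if
# EVERY participant group tends to agree with it. Data is invented.
# votes[statement][group] = (agree_count, total_count)
votes = {
    "AI audits should be mandatory": {"citizens": (80, 100), "developers": (60, 100)},
    "Ban all facial recognition":    {"citizens": (70, 100), "developers": (20, 100)},
}

def consensus_score(per_group: dict[str, tuple[int, int]]) -> float:
    # Taking the minimum agreement rate across groups rewards statements
    # that bridge groups rather than those backed by a single majority.
    return min(agree / total for agree, total in per_group.values())

for statement, per_group in sorted(votes.items(), key=lambda kv: -consensus_score(kv[1])):
    print(f"{consensus_score(per_group):.2f}  {statement}")
```

The design choice here, scoring by the least-agreeing group, is what separates consensus-seeking deliberation from simple majority polling.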
​
Leveraging AI Tools and Other Emerging Technologies
Paradoxically, AI can be harnessed to address the very challenges it presents. While these technologies are evolving dynamically, we can already glimpse some of the opportunities they create in this context. For example, AI algorithms can automatically clean and preprocess data, identifying and rectifying errors and inconsistencies to ensure the integrity of training datasets. AI can also generate synthetic data, supplementing limited datasets and improving model generalization. Explainable AI (XAI) is another emerging area, in which techniques such as SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations) can provide explanations for model predictions. Adversarial defense and anomaly detection with AI help improve the security of models as well. Agent-based simulations are also promising, providing a controlled environment for testing AI algorithms and models: they allow developers to create virtual agents that interact with AI systems, simulating real-world scenarios and enabling the comparison of AI algorithms and models under various conditions to identify strengths and weaknesses.
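As a concrete illustration of the XAI point, here is a minimal sketch using the open-source `shap` library with a scikit-learn tree model; the dataset and model are arbitrary choices for demonstration, not a recommendation.

```python
# Minimal SHAP sketch: explain a tree model's predictions and rank
# features by average contribution. Dataset and model are illustrative.
import numpy as np
import shap
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import RandomForestRegressor

X, y = fetch_california_housing(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:200])  # rows: samples, cols: features

# Rank features by mean absolute contribution to the predictions.
importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(X.columns, importance), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```

The same per-prediction attributions can also feed documentation artifacts such as the Governance Cards described above, linking the transparency and tooling cornerstones.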
​
On this journey, we aim for a world where humans and AI exist harmoniously, creating a future that's not only intelligent but also ethically grounded and fair. Welcome to the future of human-AI co-evolution, where our collective efforts come together to pioneer a brighter, more beneficial era for AI!
Have Questions?
Talk to us. AdalanAI is a management consulting & SaaS platform for AI Governance, Policy and Ethics.
