AI Governance Approach
We have moved into an unprecedented age: never before has one technology drawn such a global and concerted effort in so brief a time. While AI technologies have been around for decades, the developments of the last several years have shaken the world in both good and bad ways. They have led governments, companies, civil society, and everyone in between to focus on one major question: how to harness the immense power of AI without causing harm to life on Earth (and perhaps beyond). The last three years have seen an unprecedented volume of policy papers, frameworks, and guidelines, along with the emergence of new communities, new organizations, new departments within existing organizations, and new draft regulations.
At the same time, most of these efforts appear to repeat mistakes humans have made throughout history in other areas; we do not seem to learn the lesson. The biggest misconception, repeated over and over, is the assumption that a rigid framework or model will solve the entire AI governance problem.
Here is the problem. Models and frameworks have long been essential tools in fields such as economics, the natural sciences, and the social sciences. They serve as simplified representations of complex systems, helping us understand, predict, and make decisions about the real world. However, their limitations become apparent when they face the ever-changing and increasingly interconnected nature of real-world scenarios. Economic models, for example, have long informed policy decisions, yet they have often been theoretical rather than evidence-based and have repeatedly failed to serve society's needs. Additionally, traditional policy-making methods rely heavily on political ideology, intuition, anecdotal evidence, and the influence of interest groups or political parties. Decisions may be based on tradition, precedent, or the preferences of influential individuals rather than a systematic examination of evidence. This can lead to policies that are less effective, less efficient, or even counterproductive.
The biggest limitations of such models have been:
Simplification of Reality: Models and frameworks necessarily simplify reality by making assumptions and ignoring certain variables to make the problem manageable. This simplification can lead to inaccuracies and a lack of nuance, which becomes problematic when dealing with complex, dynamic systems.
Inflexibility: Many models and frameworks are rigid and have difficulty adapting to unexpected changes or emerging phenomena. Real-world scenarios often involve unforeseen factors and non-linear interactions, rendering traditional models less effective in such situations.
Context Dependency: Models and frameworks are often context-dependent, meaning they work well under specific conditions but may fail when applied to different contexts. Economic models, for example, may work in one country but not in another with a different economic structure.
Data Limitations: Models heavily rely on historical data and assumptions, but real-world situations can change rapidly, making it challenging to gather accurate data and assumptions for decision-making.
Human Behavior Complexity: Human behavior, a critical element in many models, is notoriously difficult to predict and model accurately due to its inherent complexity, cultural variations, and emotional factors.
These limitations have especially far-reaching effects in the context of AI. AI technologies are advancing at an unprecedented pace, outstripping the ability of static governance models and frameworks to keep up. AI systems, because of their complexity and reliance on vast amounts of data, raise a multitude of ethical and regulatory concerns, including critical questions such as: How can we ensure fairness and eliminate bias in AI systems? How can we make the decision-making processes of AI systems more transparent and interpretable? How should the use of copyrighted material in generative AI training datasets be regulated? What measures should be taken to prevent harm and misuse of AI technology? And, more broadly, what steps can be taken to avoid existential risks, and how can we protect both humans and the natural environment from AI's potential harms?
On top of that, we do not yet have dedicated processes and institutions to tackle this challenge amid geopolitical and corporate power play; we live, in fact, in an anarchic state of AI governance. Despite numerous independent and localized endeavors focused on advancing integrity and equity in AI, none of these initiatives has garnered comprehensive, worldwide endorsement from either governmental bodies or multinational corporations. This absence of widespread support has allowed the AI landscape to persist in a state of anarchy.
To address these limitations and develop a more adaptable, practical way of understanding and responding to real-world challenges, we are building an AI Governance Approach (rather than a Framework). The advantage of an approach is that it shows how to choose the right path, unlike a framework, which prescribes which path to take. We are also building technological solutions that will augment our efforts and help organizations operationalize all relevant processes. Emerging technologies, including AI itself, offer tremendous opportunities to improve these processes, including a shift from traditional to evidence-based policymaking at both the governmental and corporate levels. Evidence-based policymaking differs from existing methods in its emphasis on empirical research, transparency, and outcomes rather than solely on political or ideological considerations: it is a data-driven, systematic approach that aims to improve the quality and effectiveness of policies by basing decisions on rigorous evidence and evaluation.
Our AI Governance Approach includes six cornerstones:
At the heart of our AI Governance Approach lies our philosophy of human-AI co-evolution, which holds that humans and AI will evolve interdependently: nothing in this process is static; every interaction is characterized by fluidity and contextual interdependence, so policy-making and governance processes must keep pace with that fluidity. Our philosophy touches on the fundamental question of individual autonomy and freedom of decision-making, as AI technologies are designed to become autonomous agents and take over certain decisions from us. With the growing influence of AI in our lives, through recommender systems, AI assistants, and healthcare chatbots, we find ourselves in a unique position where we must decide which decisions to delegate to machines and how much autonomy we are willing to relinquish. Delegating decision-making to algorithms is not necessarily a bad thing; it can be something we take advantage of. But we must engage in thoughtful deliberation and establish clear guidelines and protocols for AI development.
This also prompts us to take a systemic view of AI's risks, existential and otherwise, which has led us to develop risk management methods rooted in systems theory. Systems theory views organizations as complex, interrelated systems composed of various elements working together toward common goals. It provides a holistic approach to understanding organizations by considering the interactions between their components and their environment. The same holistic, systemic thinking runs through our effort to transform AI transparency with comprehensive documentation practices. In this vein, we are introducing "Governance Cards," a unified approach to AI documentation that goes beyond traditional practices such as model cards and AI factsheets. It provides a more transparent, interactive, user-friendly, and regulation-oriented method, guided by comprehensive questions.
Comprehensive, well-crafted questions are not relevant only to documentation, however. In our AI Governance Approach, questioning practices underlie the entire AI development life-cycle and are embedded in our products, services, and educational materials. The Greek philosopher Socrates famously employed what is now known as the Socratic Method, a powerful approach to inquiry based on disciplined, rigorously thoughtful dialogue. While it has its roots in philosophy, the Socratic method has been widely used in psychology and education, and it can be effectively applied in a business context. Still, asking good questions is only halfway to the truth. To achieve a higher level of inclusivity and fairness, what we really need is deliberation. Solving society's most pressing issues, especially those posed by AI, demands deliberation by and with those directly impacted by AI. At AdalanAI, we champion this notion and are developing a platform that facilitates democratic deliberation and draws on collective intelligence to inform the most critical decisions concerning AI.
Finally, we are not in the business of picturing a doom-and-gloom future for AI; on the contrary, we sincerely hope and expect that this technology can be life-changing for everyone and everything on this planet. That is why we aim to leverage this huge potential and are working to carefully incorporate AI technologies into our own products. From ensuring data quality and security with the latest developments in AI to agent-based simulations for scenario-based policy planning, we can move further, faster, and better toward the safest and most beneficial AI.
On this journey, we aim for a world where humans and AI exist harmoniously, creating a future that's not only intelligent but also ethically grounded and fair. Welcome to the future of human-AI co-evolution, where our collective efforts come together to pioneer a brighter, more beneficial era for AI!
Six Elements of AI Governance Approach
As AI systems like recommender algorithms, virtual assistants, and healthcare chatbots become increasingly integrated into our lives, we are confronted with the pivotal task of determining which decisions to entrust to machines and how much autonomy we are willing to cede. While delegating decision-making to AI can enhance efficiency and reduce errors in tasks such as data analysis and logistics planning, it becomes ethically complex when applied to matters of morality, ethics, and deeply personal choices. Consequently, society must engage in thoughtful deliberation and establish clear guidelines for AI development to ensure that the technology augments rather than supplants human judgment.
While we contemplate the role of AI in our lives, it is equally important to consider the reciprocal relationship: how humans affect AI, and how end-users interact with and influence AI systems at both the individual and group/societal levels. These interactions will be dynamic and highly context-dependent, which necessitates a different kind of policy-making practice, one that is more fluid and responsive to change.
In the pursuit of business excellence, it is no longer sufficient to merely react to challenges as they arise; the modern landscape demands proactive measures to preempt potential disruptions. AI, a disruptive force in its own right, embodies both unparalleled opportunities and unprecedented risks. AI risk management informed by systems theory is the proactive stance that forward-looking organizations adopt to identify, analyze, and mitigate risks before they escalate into crises. Given the interconnected and intricate nature of AI systems, organizations cannot afford a narrow perspective focused only on immediate functionality. Rather, a holistic view, rooted in systems thinking, becomes imperative to comprehend the interdependencies and potential ripple effects that AI implementations can trigger across the business ecosystem.
As AI systems continue to proliferate across a wide range of industries and fields, the need for standardized procedures to document these systems has become increasingly apparent. However, the establishment of such procedures has been inconsistent and fragmented. To address this gap, several approaches have emerged, ranging from traditional model cards to AI factsheets. Big tech giants, such as Google, Microsoft, Meta, and IBM also introduced their frameworks for AI documentation. While these approaches represent progress towards responsible AI, they do not yet fully address today's regulatory and ethical requirements for AI transparency.
With this in mind, we are introducing "Governance Cards," our innovative and unified approach to AI documentation that transcends conventional methods. It offers a more transparent, interactive, comprehensive, user-friendly, and regulation-oriented way to document AI systems. This all-encompassing tool for AI transparency equips organizations with effective means of self-assessment and prepares them for existing and emerging AI regulations.
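The document does not specify what fields a Governance Card contains, so the following is a purely hypothetical sketch of how a question-guided card might look in code: each field corresponds to an illustrative documentation question, and a small helper flags unanswered fields for the kind of self-assessment described above.

```python
from dataclasses import dataclass

# Hypothetical sketch only: the actual Governance Card format is not
# specified in this document; these fields and questions are illustrative.
@dataclass
class GovernanceCard:
    system_name: str
    intended_use: str          # "What is the system for, and for whom?"
    data_provenance: str       # "Where does the training data come from?"
    fairness_assessment: str   # "How was bias measured and mitigated?"
    regulatory_scope: str      # "Which regulations apply to this system?"

    def unanswered(self) -> list:
        """Names of fields left blank -- documentation gaps to close."""
        return [name for name, value in vars(self).items() if value == ""]

card = GovernanceCard(
    system_name="loan-screening-model",
    intended_use="Pre-screening consumer loan applications",
    data_provenance="",   # not yet documented
    fairness_assessment="Demographic parity checked quarterly",
    regulatory_scope="",  # not yet documented
)
print(card.unanswered())  # lists the fields still needing documentation
```

The design choice here mirrors the text's emphasis on comprehensive questions: the card is driven by what remains unanswered, not just by what is filled in.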
In the ever-evolving landscape of human progress, one timeless principle stands firm: the profound impact of inquiry. Throughout the annals of history and into the transformative digital era, the act of posing questions has consistently acted as a catalyst for change, exploration, and enlightenment. The skill of asking insightful questions holds significant importance across myriad aspects of life, enabling us to navigate the intricate complexities of our world. Fortunately, curiosity is a fundamental aspect of human nature, and the ability to ask discerning questions is a skill that can be cultivated and honed.
The venerable Greek philosopher Socrates astutely recognized the centrality of questioning, famously employing a method now renowned as the Socratic Method. Rooted in disciplined and rigorously thoughtful dialogue, this method serves as a potent approach to inquiry, designed to stimulate critical thinking, unearth profound insights, and reveal underlying assumptions. While originating in philosophy, the Socratic method has found widespread application in fields such as psychology and education, and in business contexts.
At the core of our organizational culture and embedded within all our products and services is an inquisitive mindset and a commitment to the Socratic approach. We are dedicated to infusing templates of 'thoughtful questioning' into our software products, disseminating our expertise in questioning through our educational offerings, and fostering a culture of curiosity within our community, where we draw upon collective intelligence to serve our partners and clients effectively.
The emergence of AI technologies such as ChatGPT has sparked a global competition, not only to develop innovative AI tools but also to draft regulations that ensure their responsible use. In today's rapidly evolving AI landscape, it is imperative to harness the collective intelligence of diverse communities to shape AI policy. Yet the discourse on AI is often confined to a select group of experts, academics, and policymakers, limiting the participation of the general public in this transformative era. This discrepancy raises concerns about potential risks and the monopolization of decision-making power by technocratic elites.
Collective intelligence, a force that has driven human achievements throughout history, offers a solution. Just as democracy thrives on the informed participation of the masses, AI policymaking should involve a broad spectrum of society. We believe that collective intelligence platforms modeled on consensus democracy can bridge this gap by enabling citizens to actively participate, collaborate, and deliberate on AI-related challenges. These platforms should focus on exchanging knowledge, fostering cooperation, and building consensus on pressing issues in the AI domain. By promoting inclusive and informed discussions through accessible platforms, we can legitimize AI policymaking processes and adequately address society's diverse needs and concerns. In this way, we democratize the shaping of AI ethics, governance, and policy, and we make the future of AI a collective endeavor that benefits everyone. We are launching a Discord community to pilot these processes - join us!
Paradoxically, AI can be harnessed to address the very challenges it presents. While these technologies evolve dynamically, we can already glimpse some of the opportunities they create in this context. For example, AI algorithms can automatically clean and preprocess data, identifying and rectifying errors and inconsistencies to ensure the integrity of training datasets. AI can also generate synthetic data, supplementing limited datasets and improving model generalization. Explainable AI (XAI) is another emerging area, where techniques such as SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations) can provide explanations for model predictions. AI-based adversarial defense and anomaly detection help improve model security as well. Agent-based simulations are also promising, providing a controlled environment for testing AI algorithms and models. They allow developers to create virtual agents that interact with AI systems, simulating real-world scenarios, and enable comparison of AI algorithms and models under various conditions to identify strengths and weaknesses.
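The agent-based simulation idea above can be illustrated with a minimal sketch. Everything here is invented for illustration (the two recommendation policies, the agents' preferences, and the "satisfaction" metric are not taken from any specific product): virtual agents with random preferences interact with two policies, and the simulation compares how often each policy serves an agent its preferred option.

```python
import random

# Hypothetical sketch of an agent-based simulation for comparing two
# AI recommendation policies under controlled, repeatable conditions.
random.seed(42)

# option -> global popularity score (illustrative values)
OPTIONS = {1: 0.5, 2: 0.3, 3: 0.2}

def greedy_policy(agent_pref, options):
    """Always recommend the globally most popular option (ignores the agent)."""
    return max(options, key=options.get)

def personalized_policy(agent_pref, options):
    """Recommend the option closest to the agent's own preference."""
    return min(options, key=lambda o: abs(o - agent_pref))

def simulate(policy, n_agents=1000):
    """Run n_agents virtual agents against a policy; return the fraction
    of agents that received their preferred option."""
    satisfied = 0
    for _ in range(n_agents):
        pref = random.choice(list(OPTIONS))  # each agent has one preferred option
        satisfied += (policy(pref, OPTIONS) == pref)
    return satisfied / n_agents

for name, policy in [("greedy", greedy_policy),
                     ("personalized", personalized_policy)]:
    print(f"{name}: {simulate(policy):.2f} of agents got their preference")
```

Even this toy comparison shows the point of the technique: the weakness of the greedy policy (it satisfies only the roughly one-third of agents who happen to prefer the most popular option) surfaces in simulation before any real users are affected.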
Talk to us. AdalanAI is building an end-to-end solution for AI Governance: a SaaS platform and an AI Governance Approach - novel ways to govern the entire AI development life-cycle.