
Manifesto 1.0

Cornerstone: Systems Theoretic Risk Management for AI


Publication date

18.09.2023

This is one of the 6 cornerstones of our Manifesto. Read the Manifesto here

In today's rapidly evolving business landscape, the integration of artificial intelligence (AI) has ushered in a new era of efficiency, innovation, and competitiveness. However, this transformative power of AI also comes with inherent risks that can significantly impact organizations if left unaddressed. The realm of AI risk management has emerged as a critical discipline to ensure the sustained success of businesses harnessing the potential of AI technologies. By systematically applying the principles of systems theory to AI risk management, organizations can navigate this complex terrain with foresight and vigilance, mitigating potential pitfalls and securing a future marked by both technological advancement and resilience.

In the pursuit of business excellence, it is no longer sufficient to merely react to challenges as they arise; the modern landscape demands proactive measures to preemptively address potential disruptions. AI, being a disruptive force in its own right, embodies both unparalleled opportunities and unprecedented risks. Proactive AI risk management, informed by systems theory, is the stance that forward-looking organizations adopt to identify, analyze, and mitigate risks before they escalate into crises. Given the interconnected and intricate nature of AI systems, organizations cannot afford a narrow perspective that focuses only on immediate functionalities. Rather, a holistic view, rooted in systems thinking, becomes imperative to comprehend the interdependencies and potential ripple effects that AI implementations can trigger across the business ecosystem. Those who overlook this proactive approach do so at their peril, exposing themselves to vulnerabilities that could undermine operations, reputation, and shareholder value. In essence, the ability to manage AI risks has shifted from a discretionary concern to a strategic necessity, shaping the very foundation of sustainable business growth in today's AI-driven landscape.

Briefly about Systems Theory

Systems theory is a comprehensive framework that views organizations as complex and interrelated systems composed of various elements working together to achieve common goals. This theory provides a holistic approach to understanding organizations by considering the interactions between their components and their environment.

Organizations heavily depend on their external environment to secure a variety of crucial resources. These resources encompass customers who make purchases, suppliers who furnish necessary materials, employees contributing labor and management skills, shareholders who invest capital, and governments responsible for regulatory oversight. It's important to recognize that any changes occurring within one facet of the organization can trigger far-reaching consequences that permeate the entire system.

The systems approach offers an external benchmark to gauge an organization's effectiveness, evaluating it in terms of its capacity for sustained growth and long-term viability. In this context, effective systems are characterized by achieving a state of equilibrium referred to as "homeostasis" by systems theorists. The term "homeostasis" is employed to emphasize the dynamic, ongoing nature of this equilibrium, as opposed to the static connotations of traditional equilibrium. As stated by Buckley (1967), maintaining homeostasis involves not just survival but also fostering growth. Thus, an organization can be deemed effective when it manages to sustain homeostasis over time.

This perspective on organizational effectiveness extends beyond the conventional goal-attainment approach, which primarily measures success based on the achievement of goals set by influential internal factions, often overlooking the overall well-being of the organization. According to systems theory, the most effective organizations are those that exhibit adaptability to their external surroundings, ensuring they remain responsive and resilient in the face of changing environmental conditions.

Organizations that exist in dynamic environments must be open systems to maintain homeostasis. Because dynamic environments are constantly changing, they create a lot of uncertainty about what an organization must do to survive and grow. The key to dealing with uncertainty is information. An open organization monitors its environment and collects information about environmental deviations that is labeled as input. Input can also be thought of as a form of feedback. The most important information is negative input, according to systems theorists, because this information alerts the organization to problems that need to be corrected. Negative input tells the organization that it is doing something wrong and that it must make adjustments to correct the problem; positive input tells the organization that it is doing something right and that it should continue or increase that activity.

Organizations then process this information to formulate solutions or responses to these changes. Processing positive and negative input to adjust to environmental change is called throughput. During throughput, the organization analyzes the information and tailors it strategically to fit its goals and values and the relationship context it holds with its publics. After an organization adapts to environmental changes, its actions and messages represent its output. Messages alone are not helpful unless they are coupled with action.
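
To make the input-throughput-output cycle concrete, here is a minimal, illustrative Python sketch. The class and field names (OpenOrganization, EnvironmentSignal, is_negative) are our own shorthand for this example, not terms from the systems theory literature.

```python
from dataclasses import dataclass


@dataclass
class EnvironmentSignal:
    """A piece of input collected by monitoring the environment."""
    description: str
    is_negative: bool  # negative input flags a deviation the organization must correct


class OpenOrganization:
    """Toy model of an open system: monitor input, process it (throughput), act (output)."""

    def __init__(self):
        self.adjustments = []

    def throughput(self, signal: EnvironmentSignal) -> str:
        # Negative input triggers corrective action; positive input reinforces current behavior.
        if signal.is_negative:
            action = f"correct: {signal.description}"
        else:
            action = f"continue or increase: {signal.description}"
        self.adjustments.append(action)
        return action  # output = action plus message, never a message alone


org = OpenOrganization()
print(org.throughput(EnvironmentSignal("customer churn rising", is_negative=True)))
print(org.throughput(EnvironmentSignal("new product adoption strong", is_negative=False)))
```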

To summarize, systems theory has the following main characteristics:

Systems and Subsystems: An organization is seen as a system made up of smaller subsystems. These subsystems can be departments, teams, or even individual employees. Just like a machine has different parts working together, an organization's departments and teams must collaborate for the organization to function effectively.

Interconnectedness: Systems theory highlights that the components of an organization are interconnected and interdependent. Changes in one part of the organization can impact other parts and the organization as a whole. For instance, altering a production process might affect supply chain logistics and customer satisfaction.

Boundaries: Organizations have boundaries that distinguish them from their environment. These boundaries help define what's inside the organization (internal processes, structures) and what's outside (competitors, market trends). Understanding these boundaries helps organizations adapt to external changes while maintaining internal stability.

Input, Process, Output: Organizations receive inputs from the environment (resources, information), process these inputs (through workflows, decision-making), and produce outputs (products, services). Systems theory encourages optimizing these processes to enhance overall productivity and quality.

Feedback Loops: Feedback is a critical concept in systems theory. Organizations should gather feedback from both internal and external sources to continuously evaluate and improve their performance. This can involve customer feedback, employee surveys, and financial performance analysis.

Emergent Properties: Systems theory acknowledges that the organization's behavior can exhibit emergent properties—traits that are not directly evident from examining individual components. For example, team collaboration and innovation might emerge from a combination of individual skills and a supportive culture.

To achieve effectiveness, an organization must exhibit a keen awareness of various external groups referred to as "environmental publics," which include customers, suppliers, governmental agencies, and local communities. Successful interaction with these external stakeholders is vital. Additionally, organizations should also be mindful of their internal publics, such as employees and labor unions, who can both influence and be impacted by the organization's actions. In systems theory literature, the connection between an organization and its stakeholders is termed "interdependence." While these interdependent relationships do impose constraints on an organization's autonomy, it's worth noting that maintaining positive relationships with stakeholders tends to impose fewer limitations than negative relationships. Collaborative efforts with key stakeholders often lead to an increase in the organization's autonomy. Establishing favorable relationships involves the organization proactively engaging with its stakeholders to identify mutually advantageous solutions.

However, it's important to acknowledge that systems theory has certain limitations. The first limitation pertains to the challenge of measurement, while the second revolves around the question of whether the methods an organization employs for survival are truly significant. As Robbins pointed out, a criticism of this approach is its emphasis on "the means necessary to achieve effectiveness," rather than directly measuring organizational effectiveness itself. Assessing the processes or means within an organization can be considerably more complex compared to measuring the specific end objectives typically emphasized in the goal-attainment approach.

Risk Management Frameworks Inspired by Systems Theory

The STAMP (System-Theoretic Accident Model and Processes) framework is a relatively new accident causality model based on systems theory. Developed by Professor Nancy Leveson at MIT, it has been gaining popularity across industries. Rather than relying on traditional linear causality models, STAMP is grounded in the view that accidents are not the result of isolated events but arise from the intricate interplay of complex systems.

Central to the STAMP framework is the concept of an abstraction hierarchy, a notion originally put forth by Jens Rasmussen. This hierarchy encapsulates the various levels of system components, ranging from high-level goals and objectives down to concrete physical processes. By delineating these hierarchical layers, Rasmussen aimed to provide a structured understanding of how accidents originate within the intricate fabric of a system.

Leveson's STAMP framework builds on Rasmussen's abstraction hierarchy, using it to delve deeper into the layers of complexity that underlie accidents. Instead of stopping at immediate causes, STAMP examines the system's entire hierarchy, recognizing that accidents emerge from systemic interactions in which failures at various levels cascade and amplify until they produce an adverse event. STAMP emphasizes that accidents are often the outcome of inadequacies in system design, communication, control mechanisms, and organizational culture, and by examining how these factors interact across the abstraction hierarchy it provides a holistic perspective on accident causality.

One of the most significant contributions of the STAMP framework lies in its ability to enable proactive risk management. By recognizing that accidents are not isolated incidents but manifestations of systemic flaws, STAMP offers a unique opportunity to identify vulnerabilities before they lead to catastrophe. The comprehensive analysis of the abstraction hierarchy allows for the identification of potential weaknesses at multiple levels, leading to the implementation of targeted safeguards and mitigation strategies.
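
As a rough illustration of how STAMP-style thinking differs from component-failure analysis, the Python sketch below models a toy control structure and generates, for each control action, the four standard prompts used in STPA (the hazard analysis method built on STAMP). This is our own simplified rendering, not Leveson's methodology or tooling, and all class and function names are assumptions made for the example.

```python
from dataclasses import dataclass, field
from typing import Iterator, List


@dataclass
class ControlAction:
    name: str
    controller: str          # higher-level element issuing the control action
    controlled_process: str  # lower-level element receiving it


@dataclass
class ControlStructure:
    actions: List[ControlAction] = field(default_factory=list)

    def add(self, action: ControlAction) -> None:
        self.actions.append(action)

    def unsafe_control_action_prompts(self) -> Iterator[str]:
        # The four standard STPA questions asked of every control action.
        modes = [
            "not provided when needed",
            "provided in a context where it is unsafe",
            "provided too early, too late, or out of order",
            "stopped too soon or applied too long",
        ]
        for a in self.actions:
            for m in modes:
                yield f"{a.controller} -> {a.controlled_process}: what if '{a.name}' is {m}?"


structure = ControlStructure()
structure.add(ControlAction("approve model deployment", "AI governance board", "ML engineering team"))
for prompt in structure.unsafe_control_action_prompts():
    print(prompt)
```

The point of the exercise is that risk is examined per control relationship across the hierarchy, not per failing component.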

In her recent paper titled "Rasmussen’s Legacy: A Paradigm Change in Engineering for Safety," Professor Nancy Leveson delves into the profound contributions of Jens Rasmussen to the field of safety engineering and explores how his work has paved the way for revolutionary approaches like the System-Theoretic Accident Model and Processes (STAMP) framework. The paper not only highlights Rasmussen's foundational concepts but also contextualizes them within the broader landscape of Systems Theory literature that has played a pivotal role in shaping the development of STAMP.

"I believe his greatest achievements are in integrating human factors and engineering and applying the resulting ideas to safety. However, his ideas have had the most impact on human factors and cognitive systems engineering and not on the engineering of the hardware and software beyond the interface design," writes Leveson.

However, while the STAMP framework is a powerful tool for analyzing and mitigating complex system failures, it has limitations: it is complex to perform, data- and resource-intensive, demands specialized expertise, and involves subjective judgment. It is also relatively new, and there are no standardized procedures for implementing it, especially when it comes to applying it to AI.

Systems Theoretic Risk Management for AI

We are building a robust risk management system that is also inspired by systems theory but is not constrained by its limitations. We strongly believe that a combination of human insight and new technologies can overcome the challenges that conventional risk management frameworks have faced.

In essence, effective business preparedness involves careful resource allocation, adept change management, and vigilant risk management, all orchestrated to ensure the organization's resilience and success in a dynamic business environment. In the context of AI risk management, the principles of systems theory offer invaluable insights for comprehensive and effective process management. Here is a glimpse of how they can guide that work:

Component Interplay and Vulnerabilities: Similar to a system's composition of parts, AI ecosystems consist of interconnected components. Vulnerabilities within these components introduce uncertainties that can magnify the risk not only to the system as a whole but also to individual parts. Any alteration in these AI components may shift the risk landscape, potentially impacting other components. Therefore, a holistic AI risk management approach necessitates examining both the system as a unified entity and its constituent parts, ensuring that vulnerabilities are identified and addressed at various levels.

Interdependencies Among AI Systems: Enterprises employ a network of AI systems that often rely on each other. Although individual AI systems might undergo separate risk assessments, their interdependencies can produce a distinct risk profile for the organization when their collective impact (risk aggregation) is considered. Recognizing this intricate web of AI interconnections, organizations must evaluate the combined risk landscape to ascertain how changes in one AI system might reverberate through others, potentially influencing the overall risk posture (see the sketch after this list).

System Boundaries and Resource Management: Just as a system is defined by its boundaries, AI systems are confined by their operational limits. In the AI context, risk exposure emerges from shifts in resource states, particularly data, due to user interactions within the system. Understanding the implications of users' actions and their influence on resource status is crucial for gauging potential risks and implementing safeguards to maintain resource integrity.

Nested Systems and Nonlinear Impact: Within a broader AI ecosystem, there can be nested AI components or subsystems with their own complexities. These nested components might include specific AI algorithms, data pipelines, or applications. If one of these nested components experiences an issue, the impact might not be linear; it could have broader, potentially unforeseen repercussions on the entire AI system. For instance, a data quality problem within a nested AI component might lead to inaccurate recommendations across the entire AI system, impacting user trust and regulatory compliance.

Overlapping AI Systems: In the AI realm, overlap between multiple AI systems or components is common: they may share data, processes, or objectives. These systems may operate independently but often have some degree of interaction with or influence over each other. The interaction between overlapping AI systems can generate unique risk dynamics, which underscores the need for a nuanced approach that recognizes how overlapping systems can amplify or mitigate risks through their interplay.

AI System Lifecycle: Just as systems undergo lifecycles, AI systems evolve through initiation, operation, and retirement phases. Risk assessments at these junctures can inform decisions regarding AI system continuation or termination. By incorporating risk assessments into each phase of the AI system's lifecycle, organizations can ensure that potential risks are adequately managed.

Regional and Geographic Factors: AI systems might operate across diverse geographical locations, exposing them to regional influences that can impact risk assessments. Recognizing and accommodating these regional and geographic factors is essential for a comprehensive AI risk management strategy.

Propagation of Risk in AI Ecosystems: AI systems interact with organizational environments, contributing to the propagation of risks that might lead to unexpected systemic impacts. The processes that AI systems engage in, transforming inputs into outputs and interacting with other systems, necessitate vigilant risk analysis. This is achieved by continuously assessing risks tied to inputs and outputs and fostering a feedback loop to ensure ongoing risk management.

Hierarchy and Feedback Loops: Hierarchy refers to the organizational structure of AI systems, where components may operate at different levels of abstraction or importance. Feedback loops involve information loops where outputs from one part of the system cyclically influence inputs. Understanding the hierarchy and feedback loops within AI systems is crucial for risk management. Risks can emerge if feedback loops amplify errors or if hierarchical misalignments hinder efficient decision-making. Organizations should assess these structural aspects to prevent risks associated with suboptimal system behavior.

Multidisciplinary Collaboration: Multidisciplinary collaboration involves engaging experts from various fields such as data science, ethics, law, cybersecurity, and domain-specific knowledge to address AI risks comprehensively. Risks in AI often transcend technical boundaries and require expertise from multiple domains. Effective risk management involves cross-functional teams working collaboratively to identify, assess, and mitigate risks from various angles.
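
The sketch below, referenced earlier in this list, is a deliberately simplified illustration of several of these ideas together: interdependencies, risk aggregation, nonlinear propagation, and feedback-loop detection. The graph structure, the 0-to-1 risk scores, and the damped propagation formula are all assumptions made for illustration, not a prescribed scoring method.

```python
from collections import defaultdict


class AIRiskGraph:
    """Toy dependency graph of AI systems with per-system risk scores."""

    def __init__(self):
        self.local_risk = {}                 # system name -> standalone risk score in [0, 1]
        self.depends_on = defaultdict(list)  # system -> systems whose outputs it consumes

    def add_system(self, name, risk):
        self.local_risk[name] = risk

    def add_dependency(self, downstream, upstream):
        self.depends_on[downstream].append(upstream)

    def aggregated_risk(self, name, _seen=None):
        """Combine a system's own risk with risk inherited from upstream systems.
        A feedback loop (cycle) is flagged rather than traversed endlessly."""
        _seen = _seen or set()
        if name in _seen:
            raise ValueError(f"feedback loop detected at {name}; review hierarchy and loops")
        _seen = _seen | {name}
        risk = self.local_risk[name]
        for upstream in self.depends_on[name]:
            inherited = self.aggregated_risk(upstream, _seen)
            # Damped propagation: upstream risk raises downstream risk, but not one-for-one.
            risk = 1 - (1 - risk) * (1 - 0.5 * inherited)
        return risk


graph = AIRiskGraph()
graph.add_system("data pipeline", 0.2)
graph.add_system("recommendation model", 0.3)
graph.add_dependency("recommendation model", "data pipeline")
print(round(graph.aggregated_risk("recommendation model"), 2))  # higher than 0.3 alone
```

In practice, the propagation weighting and the treatment of cycles would need to reflect the organization's own risk model; the point is simply that assessing each AI system in isolation misses the aggregated profile that the interdependencies create.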

By embracing these principles of systems theory in AI risk management, organizations can adopt a proactive and holistic approach to identifying, assessing, and mitigating risks inherent in the complex AI landscape. At the same time, systems theory should not be taken as an ultimate solution; we will also guide risk management processes with the other cornerstones of our AI Governance Approach.

Have Questions?

Talk to us. AdalanAI is building an end-to-end solution for AI Governance: a SaaS platform and an AI Governance Approach, novel ways to govern the entire AI development life-cycle.



Your contact person: Ana Chubinidze, CEO 

ana.chubinidze@adalanai.com
