Cornerstone: Documenting Governance Cards for Transparency and Compliance
This is one of the six cornerstones of our Manifesto. Read the Manifesto here.
Established Practices in AI Documentation
While AI systems proliferate across diverse industries and domains, standardized procedures for documenting them remain sporadic. In response, a growing body of research has emerged, often drawing inspiration from well-established documentation standards in other complex, risk-laden industries, such as the electronic hardware sector.
Notably, Gebru et al.'s seminal work, "Datasheets for Datasets," which draws parallels with relevant practices in the electronics industry, stands as a pioneering effort to bridge this documentation gap. However, it focuses primarily on standardized documentation for data governance, specifically the datasets on which machine learning models rely, rather than on AI systems as a whole.
While the significance of data governance within AI governance is undeniable, the need for a standardized approach to documenting AI systems as a whole persists. In a similar vein, Mitchell et al. introduced a framework called "model cards" in their paper "Model Cards for Model Reporting." Model cards inform users about various aspects of an AI system, including details of the model, its intended applications, performance metrics, evaluation data, quantitative analyses, and ethical considerations. Although Mitchell et al.'s model cards are predominantly fairness-oriented, focusing on detecting bias in AI models while somewhat sidelining other risk categories, they remain a notable reference point in the discourse on AI system documentation.
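To make the sections listed above concrete, the sketch below models a model card as a small data structure. Mitchell et al. define their sections in prose rather than as a machine-readable schema, so every field name and example value here is an illustrative assumption, not a published format.

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Minimal sketch of the sections described in "Model Cards for Model
    Reporting". Field names are illustrative assumptions, not a standard."""
    model_details: dict           # architecture, version, training date, license
    intended_use: list            # primary intended uses and users
    out_of_scope_use: list        # applications the model was not designed for
    evaluation_data: dict         # datasets and preprocessing used for evaluation
    metrics: dict                 # performance measures, incl. disaggregated results
    ethical_considerations: list = field(default_factory=list)

# Hypothetical example: a comment-toxicity classifier (invented for illustration)
card = ModelCard(
    model_details={"name": "toxicity-classifier", "version": "1.0"},
    intended_use=["flagging abusive comments for human review"],
    out_of_scope_use=["fully automated moderation decisions"],
    evaluation_data={"dataset": "held-out forum comments"},
    # Disaggregated metrics surface the fairness concerns model cards emphasize
    metrics={"f1": 0.87, "f1_by_dialect": {"dialect_a": 0.91, "dialect_b": 0.78}},
)
```

The disaggregated `f1_by_dialect` entry illustrates the paper's central idea: reporting performance across subgroups rather than as a single aggregate number.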
Furthermore, several diverse approaches to AI documentation have emerged, each highlighting distinct aspects of documenting AI systems. On one end of the spectrum are regulation-focused viewpoints such as Hupont et al.'s Use Case Cards, which place a strong emphasis on the intended purpose and operational use of AI systems, aligning closely with the stipulations of the EU AI Act. Similarly, Brajovic et al.'s Use Case and Operation Card method is tailored to the requirements of the EU AI Act.
In contrast, there are domain-specific lenses through which AI documentation is viewed, as demonstrated by Wahle et al.'s "AI Usage Cards," which delve into AI's application within scientific research contexts. Beyond these, more critical perspectives also come into play, exemplified by Mehta, Rogers, and Gilbert's Dynamic Documentation approach, which critiques static documentation practices and proposes a more adaptable approach suited to the dynamic and complex nature of ML models.
Nonetheless, a comprehensive model that unifies these diverse approaches, extends their core principles, and integrates the technical components of documentation with regulatory, risk, and ethical considerations remains absent. Against this backdrop, our approach takes shape, drawing inspiration from the most prominent theoretical and practical paradigms. We expand upon and synthesize these concepts to introduce what we term a "Comprehensive System Description Card (CSDC)." This multifaceted card encompasses not only technical documentation and data governance specifics but also regulatory, risk, and ethical prerequisites.
Our approach aims to influence not only academic discourse but also industry practice, which is currently shaped by Big Tech's preferences. Companies such as Meta, IBM, Microsoft, and Google have implemented AI documentation practices, including Model Cards, System Cards, AI FactSheets, and AI fairness checklists, to enhance transparency. However, these practices often suffer from technical complexity that makes them difficult for end users to understand. At the same time, they tend to be somewhat superficial, potentially overlooking vital components of AI systems. Most importantly, they do not comprehensively address the full spectrum of risks their systems actually pose, from individual-level harms to broader societal and environmental concerns.
Our new, unified approach is not just a response to these limitations; it is a catalyst for a future in which AI is not merely a technological marvel but a force for good that understands and mitigates its own potential risks. This is our commitment not just to shape the discourse but to reshape the landscape of AI, ensuring that its evolution aligns with the best interests of humanity.
Our novel and unified approach to AI system documentation is:
More Interactive and Comprehensive: Established methods for AI system documentation are limited in two respects: 1) interactivity: they cannot adequately accommodate the dynamic nature of complex AI systems; and 2) comprehensiveness: they tend to provide only surface-level explanations, often overlooking crucial aspects and features necessary for a complete and impactful system description. To bridge this gap, we introduce a mechanism that is 1) more interactive, adapting to the complex nature of AI systems, and 2) more comprehensive, offering a thorough description of the system's inner workings, intended use cases, and evaluation metrics, detailed explanations of data governance practices, and impact-assessment features covering the model's effects on individuals, society, and the environment.
Clearer and More User-Friendly: Balancing comprehensiveness with practicality, clarity, and user-friendliness is a formidable challenge. We have successfully harmonized these elements by recognizing that comprehensiveness can enhance, rather than hinder, clarity and user-friendliness. Our documentation is designed to be crystal clear and user-friendly precisely because of its comprehensiveness, making it accessible to a wide range of stakeholders.
More Risk-Aware: Unlike traditional approaches, which often offer only general and superficial answers about the risks an AI model actually poses, our card includes a thorough examination of various risk scenarios and enables companies to answer in-depth questions about different risk possibilities.
More Regulation- and Ethics-Oriented: While many existing documentation methods adopt a generic, regulation-blind stance, our approach aligns with the requirements of the EU AI Act, the NIST AI RMF, and other established and emerging legislation governing AI. This commitment to compliance and ethical considerations sets our approach apart as a responsible, forward-thinking solution.
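The four properties above suggest a card with distinct sections for system description, data governance, risk, and compliance. The sketch below groups those stated components into one structure with a simple completeness check; the CSDC itself is described only in prose here, so every key, value, and the check are hypothetical assumptions, not the actual card format.

```python
# Hypothetical sketch of a Comprehensive System Description Card (CSDC).
# All keys and example values are illustrative assumptions; the real card's
# schema is not specified in this text.
csdc = {
    "system_description": {        # inner workings and intended use cases
        "components": ["retriever", "ranking model"],
        "intended_use": ["internal document search"],
    },
    "data_governance": {           # provenance and handling of training data
        "sources": ["licensed corpus"],
        "retention_policy": "12 months",
    },
    "risk_assessment": {           # individual, societal, environmental impact
        "individual": ["exposure of personal data in results"],
        "societal": ["uneven retrieval quality across languages"],
        "environmental": ["training energy footprint"],
    },
    "compliance": {                # mapping to regulatory frameworks
        "eu_ai_act_risk_class": "limited",
        "nist_ai_rmf_functions": ["map", "measure", "manage"],
    },
}

def missing_sections(card: dict) -> list:
    """Return required top-level CSDC sections absent from a card."""
    required = {"system_description", "data_governance",
                "risk_assessment", "compliance"}
    return sorted(required - card.keys())
```

A machine-checkable structure like this is one way the "interactive" property could be realized in practice: incomplete cards can be flagged automatically rather than reviewed by hand.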
Our novel approach to AI system documentation is a game-changer. It offers interactivity, comprehensiveness, clarity, user-friendliness, risk awareness, and regulatory alignment in a single package. By adopting our approach, organizations can navigate the complexities of AI with confidence, transparency, and ethical integrity, ensuring the responsible deployment and management of AI technologies for a better future.
Talk to us. AdalanAI is building an end-to-end solution for AI governance: a SaaS platform and an AI Governance Approach, novel ways to govern the entire AI development life-cycle.