
Why Now Is the Right Time to Build and Implement an AI Governance Framework


Publication date: 16.02.2022

In anticipation of upcoming global tech regulation, Microsoft is preparing its legal division and plans to expand it by 20%. The company recognizes that AI and privacy sit at the center of the broadening wave of tech regulation, and the EU AI regulatory proposal has already provided a snapshot of what Big Tech companies will have to face.


But there are more urgent needs in the corporate world than upcoming regulation. The incorporation of powerful AI tools and technologies that support new ways of working, hiring, managing, analyzing and sourcing has been accelerating at ever larger scales. This acceleration carries a corresponding call for equally rapid action to govern these tools for greater impact. Yet the practical work underway lags behind the need, missing the opportunity for strategic leadership and long-term gains. At the same time, it exposes society, employees and customers to harmful consequences emanating from biased, erroneous and opaque algorithmic decision-making. Organizations put forward various reasons to delay the implementation of comprehensive governance frameworks, but these reasons do not hold up. Where there is AI in use, there is a need for its oversight, and now is the time to deploy future-proofed, robust structures that mitigate risks and unleash AI's best potential.

Moreover, public trust in AI is still volatile because the technology is relatively new. The public has witnessed numerous accidents and failures that have harmed individuals or society, and it often reacts accordingly.

Increasing Traction


AI is already a multi-billion-dollar global market, and it is expected to keep growing: Gartner forecasts that the worldwide artificial intelligence software market will reach $62 billion in 2022. This forecast is based on analysis of data such as the amount and timing of business value and the underlying use cases. Although AI maturity still needs to improve (many companies experiment with AI but do not yet use it in regular operations), Gartner suggests that by 2025 half of all companies will have reached the "stabilization stage" of maturity.

Meanwhile, a considerable number of new policy initiatives on AI usage have emerged. According to the OECD, there are already over 700 AI policy initiatives from 60 countries, territories and the EU.

Tech is also growing significantly as a lobby sector: the 612 companies, groups and business associations lobbying the EU's digital economy policies together spend over €97 million annually. This makes tech the biggest lobby sector in the EU by spending, ahead of pharma, fossil fuels, finance and chemicals.


Hundreds of new associations, non-profits and intergovernmental organizations are emerging around the world to deal with the complexities surrounding these technologies. Partnership on AI, the Global Partnership on AI and All Tech Is Human are a few examples.

How to Go Forward


This means companies need to take action today: over the past two to three years AI has increasingly posed unfamiliar risks, while best-practice examples and established management models are still lacking.

There are three key actions companies can start with.


First: Define company values and ethical principles and make them publicly available, so that your customers and any other interested parties know your priorities, recognized risks and boundaries. More detailed guidance should certainly be provided for senior executives, so that they can explain to staff how the company will use AI, data and analytics. The value these tools contribute to the company should also be clearly stated.

Clearly defined values and ethical principles build the basis of trust not only externally but also internally, as employees are increasingly sensitive to company values and choose employers accordingly. That is where the principles of explainability and transparency come into play: when employees are informed about how an AI system operates and how it is applied, they work more efficiently and effectively. In certain AI applications this also ensures their physical safety: if the system suggests using potentially risky equipment, the worker needs to be assured that its decision is reasonable and harmless. A lack of explainability poses risks in all industries, but above all in areas such as healthcare, where AI can give recommendations to patients and a person's life and well-being are at stake. It may also lead companies to refrain from adopting AI solutions altogether, which in turn creates the peril of falling behind in market competition.
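
To make the explainability principle concrete, here is a minimal sketch of how a team might surface the per-feature contributions behind a single model decision using the open-source SHAP library. The model, feature names and data are hypothetical placeholders of ours, not examples from the article.

```python
# Minimal explainability sketch: show per-feature contributions for one
# prediction with SHAP. Model, features and data are illustrative only.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))               # hypothetical input features
y = X[:, 0] + 0.5 * X[:, 2]                 # hypothetical target rule

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(X[:1])[0]   # contributions for one case

feature_names = ["tenure", "usage", "risk_score", "region_code"]
for name, value in zip(feature_names, contributions):
    print(f"{name:>12}: {value:+.3f}")
```

An output like this gives a worker or a patient a human-readable reason for a decision, which is exactly the assurance the paragraph above calls for.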


Thus, companies face not only technical but also organizational challenges when it comes to AI systems. The main issues most companies have to deal with are the purely human aspects of AI: standardizing AI execution processes and, more importantly, coordinating senior executives around the AI strategy. In these processes, alongside defining and communicating company values, talent upskilling is particularly crucial. However, upskilling takes time and effort, and training and educational courses alone might not be enough. On some occasions, inviting external domain experts will be the best way to solve a problem, as they are trained in specific areas and bring extensive relevant experience. Whether it is a data scientist, a philosopher or a social scientist, it is always safer to seek additional advice than to risk your organization's reputation on high-risk AI products.

Second: Apply a robust and comprehensive governance framework. To build such a framework, organizations need to employ a number of approaches: structural, procedural and relational.

While defining company values and ethical principles can spark the basis of trust, comprehensive governance approaches are essential for building and strengthening it. It is not only about risk mitigation; it is about reputation and public trust, about creating tangible benefit for society. Otherwise, a new technology such as AI will suffer from public disapproval.

However, the biggest challenge for companies comes from risk management: since AI is a complex technology, traditional Model Risk Management (MRM) is not sufficient. The first set of reasons for this insufficiency relates to MRM itself: it usually stays unaltered between reviews, it is sequential, and it takes time to deploy. Additionally, it targets familiar, traditional risk types, which leaves it ill-equipped for the risks associated with NLP, HR analytics and chatbots.

The second set of reasons concerns the new challenges AI has introduced: AI poses new risks, and it poses old ones in new ways. Bias is one example: before the adoption of AI, companies had to deal with individual cases of bias, whereas now, if bias becomes part of an AI algorithm, the company has systematized and institutionalized it. Targeted advertising is another good example: where political actors once had to go door to door to tailor political promises to certain groups, micro-targeting now makes it easy even to intervene in another country's politics. On the one hand, this is a technological problem, and there are scientists working to solve it. On the other hand, it puts an additional burden on risk-management teams, so company leaders have to figure out who is in charge of AI risks.
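
To illustrate how systematized bias can be caught before it is institutionalized, here is a minimal sketch of our own that computes one standard fairness measure, the demographic parity difference, over a set of model decisions. The group labels and decisions are synthetic placeholders.

```python
# Minimal bias-check sketch: demographic parity difference, i.e. the gap
# in positive-decision rates between two groups. Data is illustrative only.
import numpy as np

rng = np.random.default_rng(42)
group = rng.integers(0, 2, size=1000)       # hypothetical group membership
# Hypothetical model decisions, deliberately skewed toward group 1.
decisions = rng.random(1000) < np.where(group == 1, 0.55, 0.40)

rate_g0 = decisions[group == 0].mean()      # positive rate, group 0
rate_g1 = decisions[group == 1].mean()      # positive rate, group 1

print(f"positive rate (group 0): {rate_g0:.2%}")
print(f"positive rate (group 1): {rate_g1:.2%}")
print(f"demographic parity difference: {abs(rate_g1 - rate_g0):.2%}")
```

A gap like this, left unchecked and applied at scale, is precisely how an individual bias becomes an institutionalized one.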


Moreover, since every industry uses AI at least to some degree, it is extremely hard to track the risks, and new, potentially unchecked risks emerge as a result. In addition, since older risks not necessarily connected to AI still remain (for instance, data privacy, cybersecurity and ethics), companies have to make many design choices to combine the mitigation of those risks with the mitigation of the new, AI-connected ones. So how do companies cope with risk mitigation?

Companies should pre-determine risks, i.e., embed risk identification and assessment into their development and procurement cycles. This accelerates the checkup before an AI system is actually deployed, since by that point a large number of risks will already have been identified and alleviated. Companies should first categorize the risks and then prioritize them, classifying them from the most to the least significant.
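
A lightweight way to embed this in a development cycle is a machine-readable risk register that every model carries through review. The sketch below is our own illustration; the categories, fields and the likelihood-times-impact score are assumed conventions, not ones the article prescribes.

```python
# Minimal risk-register sketch: categorize risks, then prioritize them by
# a simple likelihood-times-impact score. All entries are illustrative.
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    category: str       # e.g. "bias", "privacy", "security", "explainability"
    likelihood: int     # 1 (rare) .. 5 (almost certain)
    impact: int         # 1 (negligible) .. 5 (severe)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

register = [
    Risk("Skewed training data in hiring model", "bias", 4, 5),
    Risk("Chatbot logs retain personal data", "privacy", 2, 5),
    Risk("Unexplained credit rejections", "explainability", 3, 4),
]

# Prioritize: the most significant risks are reviewed before deployment.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"[{risk.score:>2}] {risk.category:<15} {risk.name}")
```

Because the register is created at development time, the pre-deployment checkup becomes a review of known, scored items rather than a search from scratch.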


To cope with the issues mentioned above, companies need to redesign their workforce. This includes creating new roles within teams (for conducting model reviews or fulfilling other AI-related functions), creating a cross-functional committee, and supporting independent audits. The cross-functional committee, made up of professionals from legal, cybersecurity, technology and other areas, will ensure the secure and fair deployment of AI systems. The aim of this governance body is to establish AI risk standards that other teams must adhere to. It will also give business and development teams recommendations on the choices they need to make to satisfy organizational or regulatory standards.

Since the legal and risk-management teams will have to work in coordination with the data-science team, each of these groups will have to expand its skills and knowledge. The analytics team has to understand the effect of risk models on business results, while risk managers need to gain some knowledge of data methodologies and concepts and of machine-learning and AI risks. This is essential for the successful and effective cooperation of the analytics and risk-management teams. It is important that this coordination takes place at all stages of AI development: ideation, data sourcing, model building and evaluation, industrialization, and monitoring.

Additionally, the multidisciplinary team has to be properly organized: companies have to appoint the leaders of the analytics and risk-management teams and clarify their functions and responsibilities with regard to AI controls. Risk managers must receive appropriate guidance and training in analytics so that they can foresee the risks posed by complex AI models.

Coordination between the various teams also requires a shared technology platform. The platform must incorporate the following elements: a single documentation standard that meets the needs of all stakeholders (developers, risk, compliance, validation); a single workflow tool for documenting and coordinating the full model-development cycle; and access to the same technology stack, the same data and the same development environment for re-examination and testing. Throughout this process it is essential to build AI models that are easily understandable, since the data scientist who developed a model may no longer be at the company by the time issues arise (the job market for data scientists is highly competitive, so they change positions often).
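
One common way to realize a single documentation standard is a structured record, often called a model card, that travels with every model. The minimal template below is a hypothetical sketch of ours; the field names and the example entry are assumptions, not a standard the article mandates.

```python
# Minimal model-documentation sketch: one structured record per model so
# developers, risk, compliance and validation all read the same facts.
# Field names and the example entry are illustrative placeholders.
from dataclasses import dataclass, asdict
import json

@dataclass
class ModelCard:
    name: str
    owner: str                  # accountable team, not an individual
    intended_use: str
    data_sources: list
    known_limitations: list
    risk_tier: str              # e.g. "low", "medium", "high"
    last_validated: str         # ISO date of the latest independent review

card = ModelCard(
    name="resume-screener-v2",
    owner="people-analytics",
    intended_use="Rank incoming applications for recruiter review only.",
    data_sources=["hr_applications_2019_2021"],
    known_limitations=["Not validated for non-English resumes."],
    risk_tier="high",
    last_validated="2022-01-15",
)

# Serializing the record lets one file feed dashboards, audits and sign-offs.
print(json.dumps(asdict(card), indent=2))
```

Keeping documentation in one machine-readable format also softens the key-person risk mentioned above: the knowledge outlives the data scientist who built the model.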


Third-party relations are also very important. The multidimensional character of AI services often requires involving third parties, which brings benefits such as outsourcing concerns like availability and scalability. However, it also carries risks, such as losing oversight of how the AI model is trained or how the organization's data is used.

Responding to feedback is also an essential part of AI deployment. Companies should build channels to gather feedback from the end users of their AI products, enabling them to analyze both positive and negative user experiences and reflect on them. The AI system should be responsive to this feedback, so that users can engage in the AI development cycle. That engagement will in turn improve transparency and increase trust in the product.
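
As one possible shape for such a channel, here is a minimal sketch of ours that captures structured end-user feedback tied to the specific model decision it concerns; every name in it is a hypothetical placeholder.

```python
# Minimal feedback-channel sketch: collect structured end-user feedback
# linked to the model decision it concerns. Names are illustrative only.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Feedback:
    decision_id: str    # which model output the user is reacting to
    rating: int         # 1 (harmful/wrong) .. 5 (helpful)
    comment: str
    received_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

inbox = []

def submit_feedback(decision_id, rating, comment):
    """Entry point a product UI or support form would call."""
    inbox.append(Feedback(decision_id, rating, comment))

submit_feedback("loan-2022-0341", 1, "Rejected with no explanation given.")

# Route low ratings to a human review queue so the loop actually closes.
flagged = [f for f in inbox if f.rating <= 2]
print(f"{len(flagged)} item(s) flagged for human review")
```

Linking each report to a decision ID is what lets negative experiences feed back into the development cycle rather than sit in a generic complaints box.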

 

Third: Prepare for compliance in advance. The EU has already drafted AI-specific regulation that will eventually enter into force in all EU member states. Even though the exact implementation dates are unknown, the timeline of GDPR gives some clues: proposed in 2012, adopted in 2016 and applicable from 2018, it suggests the draft EU AI Act will not take too long to be enforced either. Hence, now is a reasonable time for companies to factor the regulatory framework into their AI development processes. This can help them understand the coming impacts of AI regulation and take action to adapt to new challenges and laws. Additionally, many regulations already in force are not AI-specific but can be applied to AI usage. The governments of Canada and the US have also initiated binding instruments requiring companies to assess the algorithms they use in decision-making and to mitigate bias and other potentially harmful effects of AI systems. China has multiple initiatives in place too.


While it is hard to forecast the final shape of AI regulatory drafts and proposals, there are several steps companies can take now to be prepared:

1) Stay on top of the ongoing discussions around regulation in your region of interest, especially high-level discussions and digital policy-making;
2) Get directly engaged in those discussions, as a variety of platforms are emerging that give voice to different stakeholders;
3) Map AI regulatory proposals against existing laws and regulations, such as general human rights conventions, since the majority of AI legal proposals are built on existing human rights approaches.

These three steps will help companies reduce confusion about the future of AI regulation and avoid million-dollar compliance problems over the next five to ten years.

Finally, companies are only beginning to engage in AI-related risk management, whereas AI can generate value and ensure a company's competitiveness only if it is applied effectively. Thus, the sooner companies adopt effective and dynamic frameworks to mitigate AI risks and comply with regulation, the more successful they will be in the market and the less material loss they will suffer.


Have Questions?

Talk to us. Adalan AI is a management and policy consulting firm in the field of Artificial Intelligence. 
