By Peter Sondergaard

AI Risk is much more than Ethical AI.

So much has been written about AI Risk over the last few years. It is often assumed that AI Risk equals topics like Ethical AI, AI Governance, AI regulation, and AI compliance. Each of these topics is critical to organizations looking to scale the use of AI, but they may not fully reflect what buyers of AI view as AI Risk. A detailed survey by Reply of AI usage in 265 organizations in Europe and the United States showed that organizations using artificial intelligence view AI risk far more broadly than the general conversation in the press and the technology provider industry would portray. The results indicate that technology and service providers must be mindful of these broader issues, as they constitute reasons “not to buy” AI products and services. For organizations deploying AI, all ten issues must be considered as part of buying and deploying AI-based solutions.


The survey identified ten different areas of concern regarding AI Risk. When asked, organizations include areas such as talent, budget, provider risk, and data governance in what they see as risk. The view of AI risk is therefore broader than what most technology and service organizations portray. Equally, when adopters of AI consider AI Risk, they will need to ensure that all functions across the organization are involved in the scaling of AI: talent requires the involvement of the HR organization, budget requires the Finance organization, provider risk needs the Purchasing organization to expand its analysis, and data governance requires the IT organization. As AI increasingly becomes a business capability, the complexity, and thus the risk, to the organization increases. Business unit leaders and P&L owners must consider how to address this now.


Source: Reply. N=265. See the Survey Methodology details below.


These ten areas vary in three critical ways: first, their complexity for the organization as a whole; second, time, meaning whether addressing the risk is continuous or episodic; and last, the functional areas primarily responsible for addressing the risk on behalf of the organization.


  • Compliance and Security Challenges: As the use of AI increases, compliance with regulations and maintaining security become more complex and sensitive, especially in industries like financial services and pharmaceuticals. New AI regulation frameworks globally will challenge organizations over the next 3-5 years.

Complexity: High.

Time: Continuous.

Responsibility: CISO, Head of Audit & Compliance, Product Managers, and CHRO.


  • Privacy and Security Concerns: The expansion of AI applications could expose organizations to greater data privacy and security risks if appropriate controls and safeguards are not implemented. This will remain an issue that organizations will need to address continuously.

Complexity: Medium.

Time: Episodic.

Responsibility: CISO, Business unit managers.


  • Data Sensitivity: Mishandling or inaccuracies in data due to AI can have serious repercussions, particularly in operations that involve sensitive policy information extraction or similar use cases. This makes data governance even more critical for organizations, as well as the capabilities of all managers in the organization regarding data and AI.

Complexity: Medium.

Time: Episodic.

Responsibility: CIO, CDO, and CHRO.


  • Talent Shortage: Organizations struggle to attract and retain the skilled workforce needed to leverage AI effectively. They will equally be challenged by how AI changes work within the organization. This requires the direct involvement of HR in the AI transformation of the business, assisting with talent strategy and talent development.

Complexity: Medium.

Time: Continuous.

Responsibility: CEO, CHRO, and Business unit managers.


  • Budget Limitations: Limited budgets may hinder the development of multiple proofs of concept (PoCs) and the validation of AI applications within the organization. How fast organizations increase AI investment over the next two years will determine the speed at which AI-driven change scales.

Complexity: Medium.

Time: Episodic.

Responsibility: CFO, CIO, and business unit managers.


  • Unknown Risks: Some uncertainties arise as organizations continue to adopt AI, which could represent unforeseen risks or drawbacks. “Black Swans” may occur through the scaling of AI across organizations and society as a whole. Equally, the technologies around AI present internal risks that the IT organization constantly needs to track.

Complexity: Medium.

Time: Continuous.

Responsibility: CEO & Board as well as the CIO.


  • Ethical AI Practices: Ensuring ethical practices in AI development and usage, such as addressing AI transparency, fairness, and non-discrimination, is both a challenge and a risk.

Complexity: High.

Time: Episodic.

Responsibility: CEO, CHRO, CLO, CIO, and business unit managers.


  • Dependence on Solution Providers: Organizations depend on solution providers, and risks might be associated with their capabilities and approach to AI implementation. Monitoring this initially becomes the responsibility of the IT organization, but as AI is infused into all products and services, the issue will become a broader management issue.

Complexity: Medium.

Time: Episodic.

Responsibility: CIO, Purchasing, and Business unit managers.


  • Regulatory Compliance Complexity: Complying with a patchwork of AI regulations is complex and costly, which can significantly hinder organizations looking to scale AI applications. New AI regulation frameworks globally will challenge organizations over the next 3-5 years.

Complexity: Medium.

Time: Episodic.

Responsibility: Legal & Audit and business unit managers.


  • Potential for AI-Generated Errors: AI may produce incorrect outputs, such as “hallucinations,” especially when dealing with multiple versions of the same document, leading to operational risks.

Complexity: High.

Time: Continuous.

Responsibility: All managers.
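
The taxonomy above is data as much as prose, and organizations tracking these risks may want it in machine-readable form. The following is a minimal sketch of the ten areas encoded as Python objects so they can be filtered and reported on; the class name and variables are illustrative, not from the survey, while the ratings are taken directly from the list above.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AIRisk:
    name: str
    complexity: str        # "High" or "Medium", per the list above
    time: str              # "Continuous" or "Episodic"
    responsibility: tuple  # functions primarily responsible

RISKS = [
    AIRisk("Compliance and Security Challenges", "High", "Continuous",
           ("CISO", "Head of Audit & Compliance", "Product Managers", "CHRO")),
    AIRisk("Privacy and Security Concerns", "Medium", "Episodic",
           ("CISO", "Business unit managers")),
    AIRisk("Data Sensitivity", "Medium", "Episodic", ("CIO", "CDO", "CHRO")),
    AIRisk("Talent Shortage", "Medium", "Continuous",
           ("CEO", "CHRO", "Business unit managers")),
    AIRisk("Budget Limitations", "Medium", "Episodic",
           ("CFO", "CIO", "Business unit managers")),
    AIRisk("Unknown Risks", "Medium", "Continuous", ("CEO & Board", "CIO")),
    AIRisk("Ethical AI Practices", "High", "Episodic",
           ("CEO", "CHRO", "CLO", "CIO", "Business unit managers")),
    AIRisk("Dependence on Solution Providers", "Medium", "Episodic",
           ("CIO", "Purchasing", "Business unit managers")),
    AIRisk("Regulatory Compliance Complexity", "Medium", "Episodic",
           ("Legal & Audit", "Business unit managers")),
    AIRisk("Potential for AI-Generated Errors", "High", "Continuous",
           ("All managers",)),
]

# Example: which risks demand standing attention rather than periodic review,
# and which are rated as the most complex for the organization.
continuous = [r.name for r in RISKS if r.time == "Continuous"]
high_complexity = [r.name for r in RISKS if r.complexity == "High"]
```

Even this simple encoding makes one pattern visible: compliance and AI-generated errors are the only areas that are both high-complexity and continuous, which suggests where standing governance capacity is needed first.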


It is evident that the comprehensive view of AI risk held by businesses and government institutions leads to complexity across the organization. Addressing it will require collaboration, transparent governance, and a vision from senior leadership. This is why AI is not a project; it is a continuous (re)evolution of the organization's business model. Ensuring the organization is focused on all aspects of AI Risk now is critical for future success. For technology and service providers, understanding that their clients view AI Risk far more broadly than just ethical AI, AI compliance, and AI regulation will allow for deeper and more valuable customer relationships. In the future, selling AI solutions will also require broader and deeper relationships with non-IT executives. The higher complexity impacts the current approach to technology adoption for both users and vendors of AI-based solutions. It is the leadership team's responsibility to understand where the impact is most immediate and act!


The Survey Methodology: The survey comprised five open-ended questions regarding AI. The responses, all long freeform verbal answers, were collected through conversations between clients in Europe and the United States and senior partners of Reply, and were entered into a spreadsheet. Using OpenAI's GPT-4, specific prompts were written for all the analyses made on the answers. Human experts checked the insights, the ranking of the answers, and the summarized results.
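
To make the analysis step concrete, here is a hedged sketch of how a freeform answer could be classified with a prompt of the kind the methodology describes. The category list and prompt wording are illustrative assumptions; the article does not publish Reply's actual prompts.

```python
# Illustrative categories, loosely mirroring the ten risk areas above.
CATEGORIES = [
    "Compliance", "Privacy", "Data", "Talent", "Budget",
    "Unknown risks", "Ethics", "Provider dependence", "Regulation", "Errors",
]

def build_prompt(answer: str) -> str:
    """Build a single-answer classification prompt for the model.

    The wording is a hypothetical example, not Reply's actual prompt.
    """
    return (
        "Classify the following survey answer about AI risk into exactly one "
        "of these categories: " + ", ".join(CATEGORIES) + ".\n\n"
        "Answer: " + answer + "\n\nCategory:"
    )

# Sending the prompt to the model (requires the openai package and an API
# key) would look roughly like this:
#
#   from openai import OpenAI
#   client = OpenAI()
#   reply = client.chat.completions.create(
#       model="gpt-4",
#       messages=[{"role": "user", "content": build_prompt(answer)}],
#   )
#   label = reply.choices[0].message.content.strip()
#
# Human experts would then review the labels, as the methodology notes.
```

The human-review step matters: a model-assigned label is itself an AI-generated output, subject to the same error risk the survey identifies.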
