It's a relatively simple question, but the answer is not straightforward. First, perhaps unfairly, let's broaden the question to cover constituencies beyond just your customers: the issue also applies to your partners, suppliers, employees, investors, and other interest groups that are part of your organization's ecosystem. Secondly, let me be clear: you should tell them! However, what exactly do you tell them, and how and when? Lastly, once you have told your customers, what are their options, and what actions do you take?
Let's break down the answer to this seemingly simple question. There are three categories of responses. First, we could provide no information about the usage of AI within our organization. After all, in many situations, customers and other constituencies don't know what technology we use and often don't care, as long as the technology provides a positive experience for them. So your first option is to do nothing. Secondly, we could follow the regulatory environment at any given time and geographic location. It is plausible that most companies will take this approach in the future. As outlined in the recent article about multinational AI governance, following the regulatory environments for AI will be complicated for most multinational organizations, but necessary. The last option would be to take a proactive, outside-in approach to AI, specifically an outside-in driven Explainable AI or AI governance approach.
This latter approach of taking a proactive stance regarding your usage of AI is the most interesting but also the one with the highest complexity. For a moment, think about how you react when engaging with an intelligent chatbot that seemingly acts like a human. Or whether you trust recommendations for, say, an insurance product, generated by detailed algorithms determining your needs, without you knowing that artificial intelligence is involved. Or do you trust all autonomous car features, including those that do not allow you to take control in specific instances? Your customers, suppliers, and employees will face many specific situations involving AI, and they will demand that you are public and explicit about what you do with AI. It will become a question of trust in your brand, your product, your organization, and the leadership team. You need to consider the following points in your approach to a proactive and public AI charter and philosophy.
Right to know: Should your customers have a right to know when they encounter AI-generated data or intelligent algorithms using their data as input? If you believe the answer is yes, how do you tell them without scaring them away? The right to know is a critical point for organizations to consider as part of their AI governance strategy.
Notification of AI decisions: Should we, for example, notify employees when an algorithm makes a decision that determines their future career in the company? This is one of many considerations, and similar choices exist not just for employees but for customers, suppliers, and other constituencies. Notification of AI decisions goes beyond merely notifying people when the company uses their personal data as input to those decisions. All the organization's stakeholders will likely want to be notified about AI decisions.
Review of AI decisions: The next level would be to expose the decision to the person affected and allow them to determine whether they are comfortable with it. The following action could then be to reject the decision, which would be akin to an opt-out clause. Customers could be allowed to appeal the outcome of an AI decision or, in some instances, refer the decision to a human being in the organization.
Opt-out of systems using AI: In some situations, the organization could allow the user to opt out, up front, of systems or decisions involving the usage of artificial intelligence. While likely complex to manage, it would enable the user to feel more comfortable in specific environments. Another form of opt-out could be declining the use of your personal data in systems employing artificial intelligence.
Notification of bias: Companies could implement a policy or system notifying customers of issues with the data, such as bias in the dataset or merely the possibility of bias. The user would then be allowed to determine whether they want to trust the decision even with potentially biased data.
Remedy for misuse: If something goes wrong, what is your organization's stated approach to abuse of data and decisions caused by systems using artificial intelligence? Where there is misuse of data, whether data representing the individual or aggregate data sets, or misuse of algorithms, the organization should give the user some level of appeal and ultimately a form of remedy. The sketch following this list illustrates how these options might be modeled in software.
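To make these considerations concrete, below is a minimal sketch, in Python, of how an organization might record an AI decision so that notification, review and appeal, opt-out, bias flags, and remedies can all be supported. Everything here, the AIDecision record, the ReviewStatus states, and the decide_with_opt_out helper, is a hypothetical illustration under assumed requirements, not a standard or an existing library.

```python
from dataclasses import dataclass
from datetime import datetime
from enum import Enum
from typing import Optional


class ReviewStatus(Enum):
    PENDING = "pending"            # decision made, subject not yet notified
    NOTIFIED = "notified"          # subject told an AI made the decision
    ACCEPTED = "accepted"          # subject reviewed and accepted the decision
    APPEALED = "appealed"          # subject rejected the outcome
    HUMAN_REVIEW = "human_review"  # referred to a human in the organization


@dataclass
class AIDecision:
    """One AI-made decision about a customer, employee, or supplier."""
    subject_id: str              # whom the decision concerns
    description: str             # plain-language explanation (right to know)
    made_at: datetime
    uses_personal_data: bool     # triggers a data-usage notification
    possible_bias: bool = False  # surface even the possibility of bias
    status: ReviewStatus = ReviewStatus.PENDING
    remedy: Optional[str] = None  # filled in if misuse is established

    def notify_subject(self) -> None:
        """Notification of the AI decision, including possible bias."""
        print(f"[notify] {self.subject_id}: {self.description}")
        if self.possible_bias:
            print("[notify] Note: the underlying data may contain bias.")
        self.status = ReviewStatus.NOTIFIED

    def appeal(self, refer_to_human: bool = False) -> None:
        """Review of the decision: reject it or escalate to a person."""
        self.status = (ReviewStatus.HUMAN_REVIEW if refer_to_human
                       else ReviewStatus.APPEALED)


def decide_with_opt_out(subject_id: str, opted_out: set[str]) -> Optional[AIDecision]:
    """Respect an up-front opt-out before any AI decision is made."""
    if subject_id in opted_out:
        return None  # route to a conventional, human-driven process instead
    return AIDecision(
        subject_id=subject_id,
        description="Insurance product recommendation generated by an algorithm.",
        made_at=datetime.now(),
        uses_personal_data=True,
        possible_bias=True,  # e.g., training data may under-represent some groups
    )


decision = decide_with_opt_out("customer-42", opted_out={"customer-17"})
if decision:
    decision.notify_subject()
    decision.appeal(refer_to_human=True)  # refer the decision to a human
```

The point of the sketch is less the code than the design choice it forces: each of the options above becomes a field or state transition that must exist somewhere in your systems before a public charter can credibly promise it.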
The options above may be a stretch for your organization to implement, or may take several years to realize. Regardless, there is urgency for the executive team to determine the organization's position on AI, in particular how public and explicit the organization wants and needs to be regarding its detailed usage of AI. This is a conversation to be had as part of your organization's overall position on AI governance. It may be worth considering actions such as:
An AI customer charter: A clear public statement about how the organization uses AI in customer situations, the options customers have, such as opt-out or review of AI decisions, and lastly, how it will treat bias or misuse in customer situations. Just as a B2C customer can find a return policy online, all organizations should have an AI customer charter. That charter is likely supported by a dashboard, parts of which should be made available to customers (see the sketch after this list).
AI employee policy: A document that sets the standards, usage, processes, and rights of employees when it comes to the usage of AI in the work environment. For example, can an employee be faulted for a decision made through AI? How does the organization treat bias when it may be challenging to ascertain whether a person or an intelligent algorithm is the root cause? The HR organization will need to take the lead in defining the AI employee policies.
Supply Chain AI policy: Suppliers and partners in your organization's end-to-end supply chain will need to understand your usage of AI. Suppliers may want the ability to review AI decisions, to understand your approach to bias and other aspects of transparency, and lastly, to know what you require of them in terms of AI data and decision transparency. The supply chain leadership team needs to craft a Supply Chain or Supplier AI policy.
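As a hedged illustration of how a charter could connect to the customer-facing dashboard mentioned above, the snippet below sketches a machine-readable AI customer charter from which such a dashboard could be generated. The AICustomerCharter structure and every field name are assumptions made for illustration, not a published format.

```python
import json
from dataclasses import asdict, dataclass


@dataclass
class AICustomerCharter:
    """A hypothetical machine-readable AI customer charter."""
    organization: str
    ai_use_cases: list[str]   # where AI touches customer situations
    opt_out_available: bool   # can customers opt out up front?
    review_and_appeal: bool   # can customers review or appeal AI decisions?
    bias_notification: bool   # are customers told about possible bias?
    misuse_remedy: str        # stated remedy if data or algorithms are misused


charter = AICustomerCharter(
    organization="Example Corp",  # placeholder name
    ai_use_cases=["chatbot support", "insurance product recommendations"],
    opt_out_available=True,
    review_and_appeal=True,
    bias_notification=True,
    misuse_remedy="Appeal to a human reviewer; remediation where warranted.",
)

# Publish the charter as JSON, e.g., to feed the customer-facing dashboard.
print(json.dumps(asdict(charter), indent=2))
```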
Each of these policies or charters will need to rest on a corporate-wide AI governance charter (read more here). Going back to our initial question, "Do you tell your customers (and other constituencies) about your usage of AI?" I believe all organizations will have to, for a combination of regulatory, brand value, ethical, and competitive reasons. The executive team will need to define what you do and how your approach evolves. And they need to start that journey now!