© 2020 The Sondergaard Group LLC

Peter Sondergaard

AI Ethics – An Executive Responsibility

It is no secret that artificial intelligence (AI) has gained significant awareness among senior executives over the last 24 months. Increasingly, CEOs and other senior executives are sponsoring a rise in AI proof-of-concept implementations in their organizations. In a subset of advanced organizations, these AI platforms have become large, permanent and business-critical implementations. However, in the rush to exploit the real (and perceived) opportunities of AI, executives are ignoring the need to establish their organization’s position on AI trustworthiness and ethics. It is their responsibility not only to establish the ethical and trustworthiness position of the organization’s artificial intelligence usage but also to make those guidelines public to the key stakeholders of the organization, namely clients, employees, suppliers and investors. Ultimately, executives need to demonstrate that their organization takes seriously the challenges that artificial intelligence and machine learning pose to their organization and to society.

This is important because artificial intelligence and machine learning environments will now rapidly mushroom in organizations. Unless these environments are carefully coordinated through a single senior executive and an established set of principles of usage, risks such as bias, maleficence and lack of traceability will end up constituting a significant business risk. CEOs need to sponsor an AI ethics and trustworthiness charter for their organization. Larger technology organizations such as Google and Microsoft have already been explicit in their public positions. This is obviously important for them, since they make a business from AI and are key stakeholders in its success. All technology providers using AI in their products or offering AI and machine learning software solutions should follow their lead.


However, businesses outside the technology industry, such as banks, pharmaceutical companies and government agencies, have been less public in their positions on artificial intelligence. Some have established internal positions and/or debates through the CIO or Chief Digital Officer (CDO), but very few have been publicly explicit about their organization’s position. As software and data increasingly define an organization, it becomes paramount for all organizations to consider an external position on AI and ML.


A potential source of inspiration for CEOs and government agency leaders is the European Commission’s recently released draft proposal for Ethics Guidelines for Trustworthy AI. It is not intended as a policy or regulatory document but as a living document for businesses and government institutions to adopt or draw inspiration from. The document contains 10 elements of trustworthy AI:


1. Accountability

2. Data Governance

3. Design for all

4. Governance of AI Autonomy (Human oversight)

5. Non-Discrimination

6. Respect for (& Enhancement of) Human Autonomy

7. Respect for Privacy

8. Robustness

9. Safety

10. Transparency


Not all of these elements may apply to your organization, but they are a great starting point. The document outlines the specifics of each of the 10 elements, allowing your organization to take a position on each. The document also sets out principles for AI’s ethical purpose through a suggestion of the fundamental rights, purposes and values of AI.


As a senior executive, you have in the European Commission’s document all the components for an internal discussion in your organization. You should start such a conversation around AI in your organization immediately. You should ensure that the outcome is a public document and that all of your organization’s stakeholders are made aware of your position. A public, transparent and ethical AI position will become a measure of corporate responsibility and should become mandatory.


Peter Sondergaard
