How stakeholder capitalism and AI ethics go hand in hand


At the 2020 meeting of the World Economic Forum in Davos, Salesforce founder Marc Benioff declared that capitalism as we have known it is dead. In its place is stakeholder capitalism, a form of capitalism that has been spearheaded by Klaus Schwab, founder of the World Economic Forum, over the past 50 years. As Benioff put it, stakeholder capitalism is a more fair, more just, more equitable, and more sustainable way of doing business, one that values all stakeholders as well as all shareholders.

Unlike shareholder capitalism, which is measured primarily by the monetary profit generated for a business's shareholders alone, stakeholder capitalism requires that business activity benefit all stakeholders associated with the business. These stakeholders can include the shareholders, the employees, the customers, the local community, the environment, and so on. As an example, Benioff's approach counts homeless people in San Francisco as stakeholders in Salesforce.

While believers in stakeholder capitalism have been working on the idea for some time now, an important milestone was reached in early 2021. Following discussion at the 2020 meeting led by Bank of America CEO Brian Moynihan, a formalized set of ESG (environmental, social, and corporate governance) metrics that businesses can report was announced, indexed around four pillars:

  • Principles of governance
  • Planet
  • People
  • Prosperity

These metrics are important because they make it possible to easily audit a business's compliance with the principles of stakeholder capitalism.

Given the role that technology plays within business, it is impossible to overlook the growing impact that artificial intelligence will have on society, and the parallels to the discussion of stakeholder capitalism. Many businesses are transitioning from a goal of pure profit to the more inclusive and responsible goals of stakeholder capitalism. In AI we are also at the start of a transition, one that moves from the goal of maximizing pure accuracy to goals that are inclusive and responsible. In fact, given the prevalence of AI technologies across businesses, these technologies will become critical components of stakeholder capitalism.

Also present at the 2020 meeting was then-IBM CEO Ginni Rometty, who, when questioned about stakeholder capitalism in the context of the Fourth Industrial Revolution, said that this is going to be the decade of trust. It is critical that all stakeholders trust in businesses and the technologies they use. With respect to AI, Rometty said it is important to have a set of ethical principles (such as transparency, bias mitigation, and explainability) and to audit your business against them.

Not all organizations have adopted stakeholder capitalism principles as vocally and publicly as the likes of Benioff's Salesforce. However, businesses still have traditional CSR (corporate social responsibility) requirements, and in the context of AI, existing and proposed regulation contains themes similar to those discussed in the context of stakeholder capitalism at the World Economic Forum meeting.

Shortly after the stakeholder capitalism ESG metrics were announced in January of this year, the U.S. Department of Defense announced its AI ethical principles in February. The European Union followed with proposed AI regulation in April (which affects businesses both inside and outside of the EU), and then the UK announced its guidance on the ethics of algorithmic decisioning in May. Look at these announcements (and the 2019 proposed Algorithmic Accountability Act in the United States) and you will see many requirements, including those for governance, transparency, and fairness, that align clearly with the goals and metrics of stakeholder capitalism.

So, just over a year into this decade of trust, what should businesses be doing? IBM has introduced the role of Chief AI Ethics Officer, and Deloitte gives plenty of detail on what the role entails. Not all businesses will be ready for this role yet, but they can start by documenting their ethical principles. As Rometty pointed out, it is important to know what you stand for as a company. What are your values? These lead to the formation of a set of ethical principles, which can in turn lead you to form your own (or adopt an existing) AI ethics framework for your business.

Again, drawing a parallel to the ESG metrics announced in January, which take stakeholder capitalism from talk to auditable action, you must test and audit your AI systems against your framework to move beyond talk and demonstrate your AI systems' compliance (or lack thereof) with hard metrics.
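To make the idea of "hard metrics" concrete, here is a minimal sketch of one such auditable fairness check: the demographic parity gap, the difference in favorable-outcome rates between two groups. The data, group labels, and threshold here are all hypothetical illustrations, not taken from the article; a real audit would cover many metrics (bias, transparency, explainability) across real model outputs.

```python
# Hypothetical sketch: one auditable fairness metric for an AI system.
# The demographic parity gap measures how differently a model treats two
# groups; a compliance audit would compare this gap to an agreed threshold.

def demographic_parity_difference(predictions, groups):
    """Absolute difference in positive-prediction rates between two groups.

    predictions: list of 0/1 model decisions (1 = favorable outcome)
    groups: list of group labels, one per prediction (exactly two groups)
    """
    rates = {}
    for g in set(groups):
        group_preds = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(group_preds) / len(group_preds)
    a, b = rates.values()
    return abs(a - b)

# Illustrative (made-up) model decisions for applicants in groups A and B.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
grps = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_difference(preds, grps)
print(f"Demographic parity gap: {gap:.2f}")  # 0.75 vs 0.25 -> 0.50

# A simple audit rule: flag the system if the gap exceeds a policy threshold.
THRESHOLD = 0.2  # hypothetical value set by the business's ethics framework
print("PASS" if gap <= THRESHOLD else "FLAG FOR REVIEW")
```

The point is not this particular metric but the shape of the process: each principle in the ethics framework is paired with a measurable quantity and a threshold, so compliance becomes a testable, repeatable check rather than a statement of intent.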

Thorough, auditable AI ethics should not be seen as at odds with your business goals. As Rometty put it, it is not good for anyone if people do not find the digital era to be an inclusive one in which they see a better future for themselves. Effective governance of AI ethics benefits all stakeholders, and that includes the shareholders too.

Stuart Battersby is Chief Technology Officer of Chatterbox Labs, where he leads a team of scientists and engineers who have built the AI Model Insights platform from the ground up. He holds a PhD in Cognitive Science from Queen Mary, University of London.



