The Opportunity of AI: How to Maximize Value Creation and Minimize Downside Risk

With the launch of ChatGPT in November of 2022 and its record-breaking user growth, the topic of artificial intelligence (AI) has permeated conversations from break rooms to board rooms. Read on as Meghan Anzelc and Christina Fernandes-D’Souza examine three common flawed beliefs about AI.

November 27, 2024

By Meghan Anzelc, Ph.D., Founder and CEO, Three Arc Advisory, and Christina Fernandes-D’Souza, VP of Data Science, Three Arc Advisory

 

With the launch of ChatGPT in November of 2022[1] and its record-breaking user growth,[2] the topic of artificial intelligence (AI)[3] has permeated conversations from break rooms to board rooms. As AI capabilities are democratized, available to the everyday user at low or no cost, and increasingly embedded in all kinds of vendor offerings, AI has become a topic for board directors. Regardless of organization type, size, industry, or geography, AI will have cascading influences across society. In our conversations with board directors, three common flawed beliefs about AI arise. The first assumes that AI is a fad; the second holds, incorrectly, that AI doesn’t impact their business. These first two often lead organizations toward inaction. The third flawed belief is that AI will easily solve many problems, leading organizations, in their excitement, to act too quickly.

 

AI is a Fad

“AI is a fad and we’re going to wait and see if it sticks around before we take any action.” We hear this often, yet the fact is that AI has been used for value creation for decades and isn’t going away. While it may feel as though AI appeared overnight in late 2022, the reality is that many organizations, particularly those in highly regulated industries, have been using AI for decades. In 2023, Visa celebrated its 30th anniversary of using AI models in the payments industry,[4] and in 2016 AT&T noted it had already been using AI for decades.[5] The heavily regulated insurance industry has also used AI and machine learning (ML) for multiple decades, with a 2013 survey finding nearly half[6] of property-casualty insurers using these techniques for personal auto insurance; today, you would be hard-pressed to find an insurer not using AI in some capacity in its business.

The substantive and effective use of AI within major industries should reassure private companies that there are unrealized opportunities to use AI to create meaningful value while remaining compliant and responsible in its usage. Additionally, the long history of businesses successfully using AI should give companies confidence that AI is not a “flavor of the month” or passing fad, but rather a phenomenon that cannot be ignored without risk. The anticipated possibilities for AI to impact businesses in the next few years are enormous. PwC estimates AI could contribute over $15 trillion to the global economy by 2030,[7] and Goldman Sachs suggests over 60% of jobs could be at least partially automated by AI.[8]

 

We Have No Need or Plan to Use AI

We often hear, “We have no plans to use AI for our business. AI doesn’t impact us, and we don’t need to take any action.” These organizations will be unprepared for the impacts of AI used by their vendors, partners, customers, and employees. Because AI is being embedded in vendor tools and enterprise software and is readily available to every employee, it will invariably impact your organization regardless of your size, industry, or geography.

Equally important, your competitors likely are, or soon will be, using AI in their businesses, putting your organization at a disadvantage. There are many areas where small and medium-sized businesses are using AI to improve not only their financial performance but their customer and employee experience as well.[9] The disadvantage of being the only competitor without an online product catalog, or worse, the only one still requiring new hires to fax in paperwork, is clear. Competitors who leverage AI to quickly and easily surface the products customers want, and who use AI to streamline their employee experience, will build additional competitive advantage, leaving others further behind.

As of May 2024, 75% of knowledge workers[10] are using AI at work, yet 78% of those employees are using their own AI tools, adding a wide range of hidden risks to their organizations.[11] Common vendors we all know and use are embedding AI across their offerings, from Microsoft’s mission to have a “Copilot on every desk, every device and across every role”[12] to SAP’s declaration that “we’re determined to become the #1 Business AI company.”[13] There is no avoiding the reality that AI is impacting every one of us, in both our personal and professional lives.

While AI is not new, the pace at which capabilities are maturing can be an added challenge. If you evaluated an AI capability six months ago, it’s likely that your conclusions are now out of date. Over the past year, we’ve seen the ability to clone voices[14] from a very small amount of input audio, the ability to generate believable cloned video of individuals[15] from little input data, and rapid increases in the resolution of generated images and film.[16] Many of these capabilities are at or reaching enterprise-grade maturity, which was not true even a year ago.

 

Acting Too Soon or Without Forethought

Contrary to the first two misconceptions, we see some organizations move too quickly to employ AI. They often start with the solution, meaning an AI tool, rather than identifying and framing the right problem or problems that AI might address to improve their business operations. This approach can lead to actions that divert finite resources and, in the end, achieve limited to no measurable impact. Other organizations misunderstand the range of risks or don’t adequately address potential risks, leading to action with negative and costly consequences.

Companies that invest in AI and see no return generally don’t share those experiences publicly. Occasionally, missteps in an organization’s use of AI do become public, from Samsung employees inadvertently sharing sensitive information[17] to Air Canada losing a court case over its customer chatbot.[18] Even organizations viewed as leaders in the space can experience negative and costly impact, from the factual error in Google’s own ad launching Bard[19] to Microsoft delaying[20] its Recall feature over security concerns and Meta’s AI chatbot falsely accusing elected state officials of harassment.[21]

 

We See AI Potential: Where to Start

If you have started exploring the topic of AI at your company or are ready to do so, you may also be struggling with where to start. It is important for both the board and the management team to thoughtfully consider how AI can help achieve the organization’s goals and how the competitive landscape is changing, and to be prepared to adjust the AI strategy based on outcomes and continued change in the external environment. Based on our decades of experience building and implementing AI successfully (and sometimes not so successfully), we recommend considering three key questions at the start:

Question 1: “What is our business strategy? What problems are we trying to solve? Of those, where might AI be an appropriate solution?” We see many organizations jump into AI without first considering their “why” or focusing on what’s most important for their goals.

The Athena Alliance’s AI Governance Playbook, of which one of us was a co-author, recommends ensuring “the AI strategy follows from the business strategy, including people, operations and social impact strategies. Consider how responsible innovations through use of AI might boost brand value and competitive standing.”[22]

Question 2: “How is AI already impacting our business, partners, employees, and customers? As a board, what guardrails do we need to have in place to appropriately address potential downside risk and maximize the upside opportunities?”

We have seen many organizations ban AI tools outright as a shortcut to dealing with the rapid rise in readily available tools. However, this is not an approach we recommend. Given that the vast majority of employees are already using AI tools at work, an outright ban simply pushes employees to use AI tools on their own devices. This does not mitigate the risk companies intended to avoid; instead, it puts the organization at greater risk by cutting off its ability to view or monitor AI usage entirely. We recommend, at minimum, light-touch governance approaches, including providing an approved set of AI tools that covers employees’ top use cases and providing clear guidance on appropriate and inappropriate use of AI tools and the data provided to them.

Question 3: “How will we measure success? How will we know if the execution of our strategy, whether around building custom AI or simply implementing governance for the AI tools our employees use, is achieving our intended goals?” We often see organizations skip this step and then struggle to evaluate their efforts and effectiveness.

Often organizations are thoughtful about their approach to defining the problem, crafting a solution to meet the specific need, and implementing that solution successfully. Yet many of those same organizations do not consider how they will measure the success or failure of their efforts. For boards, it is critical to ask questions about how usage and impact will be measured, how outcomes will be monitored, and what steps management is taking to ensure unintended consequences are considered and managed appropriately.

For organizations seeking to understand whether now is the time to embark on initiatives that involve AI, it can be helpful to conduct a readiness assessment. This assessment can be done at both the board level to provide appropriate oversight of the business and AI strategies, and at the management level to more deeply assess the viability of the strategy and execution plan. This AI readiness assessment should include:

  • A review of the business strategy, competitor moves in AI, and a view towards what the future may hold for the organization to achieve long-term value creation. AI is likely not the right solution for many aspects of the business strategy but may be critical to other components of the strategy.
  • An evaluation of the current state of the organization, including its enterprise data assets and external data sources, its vendor and infrastructure partners and capabilities, and the current state of talent in the organization – both its level of interest and knowledge and the extent to which talent will need to adapt and adopt AI for continued success.
  • Clarity on the risk tolerance of the organization and the level of risk awareness throughout the organization. This can include the organization’s stance on build vs. buy, appetite for M&A or partnering with very early-stage companies, and the current and probable future regulatory environment. It can also include boundaries defining which use cases may not utilize AI or which sensitive data cannot be used as part of AI initiatives.

We have often seen success with organizations that start small with low-risk initiatives, learn from those experiences, and then grow into larger (and likely more impactful) initiatives, incorporating what they have learned to raise the likelihood of future success. For example, an organization might choose to approve a vendor AI tool to help draft internal communications, with clear guidelines requiring a skilled employee to review and adjust the outputs before publishing.

With our backgrounds as AI practitioners, throughout our careers we have seen both public and private organizations, from highly regulated to lightly regulated, benefit from the potential of AI. With the evolution of AI capabilities over the past few years, the opportunities are greater than ever before. We know from experience that the risks are real but can be managed and mitigated. We know, too, that the topic of AI can be daunting and overwhelming. Our hope is that this piece has helped give you greater confidence to steer your organization toward success with AI.

 

__________________________________________________________________________________

1 https://openai.com/index/chatgpt/
2 https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/
3 We define AI as an umbrella term describing machines that can think and act like humans. Machine learning (ML) and generative AI are subsets that are under the broad umbrella of AI.
4 https://usa.visa.com/visa-everywhere/blog/bdp/2023/09/13/30-years-of-1694624229357.html
5 https://about.att.com/innovationblog/ai
6 https://www.carriermanagement.com/assets/Earnix-ISO-Predictive-Chart-1.png, https://www.predictiveanalyticsworld.com/machinelearningtimes/use-of-predictive-models-widespread-in-pc-insurance-survey/2631/
7 https://www.pwc.com/gx/en/issues/data-and-analytics/publications/artificial-intelligence-study.html
8 https://www.goldmansachs.com/intelligence/pages/generative-ai-could-raise-global-gdp-by-7-percent.html
9 https://www.forbes.com/sites/glenngow/2024/05/12/ais-competitive-advantage-for-small-and-medium-enterprises/
10 https://en.wikipedia.org/wiki/Knowledge_worker
11 https://news.microsoft.com/2024/05/08/microsoft-and-linkedin-release-the-2024-work-trend-index-on-the-state-of-ai-at-work/
12 https://blogs.microsoft.com/blog/2024/04/24/leading-in-the-era-of-ai-how-microsofts-platform-differentiation-and-copilot-empowerment-are-driving-ai-transformation/
13 https://www.sap.com/docs/download/investors/2023/sap-2023-integrated-report.pdf
14 https://techcrunch.com/2023/12/06/respeechers-ethics-first-approach-to-ai-voice-cloning-locks-in-new-funding/
15 https://www.bloomberg.com/news/videos/2024-05-09/ai-generated-human-hoffman-talk-tech
16 https://www.cnn.com/2024/06/25/tech/toys-r-us-sora-ai/index.html
17 https://fortune.com/2023/05/02/samsung-bans-employee-use-chatgpt-data-leak/
18 https://www.forbes.com/sites/marisagarcia/2024/02/19/what-air-canada-lost-in-remarkable-lying-ai-chatbot-case/?sh=6d9c8a1a696f
19 https://www.npr.org/2023/02/09/1155650909/google-chatbot–error-bard-shares
20 https://www.pcmag.com/news/microsoft-delays-windows-recall-ai-rollout-amid-security-concerns
21 https://www.cityandstateny.com/politics/2024/04/meta-ai-falsely-claims-lawmakers-were-accused-sexual-harassment/396121/#:~:text=Their%20respective%20committee%20and%20subcommittee,about%20non%2Dexistent%20harassment%20allegations.
22 Free to download: https://athenaalliance.com/ai-governance-playbook/
