The state of AI in 2021
December 8, 2021 | Survey
The results of our latest McKinsey Global Survey on AI indicate that AI adoption continues to grow and that the benefits remain significant. As AI’s use in business becomes more common, the tools and best practices for making the most of AI have also become more sophisticated.
We looked at the practices of the companies seeing the biggest earnings boost from AI. These companies are not only following more of the core and advanced practices, including machine-learning operations (MLOps), that underpin success; they are also spending more efficiently on AI and taking greater advantage of cloud technologies. Additionally, they are more likely than other organizations to engage in a range of activities to mitigate their AI-related risks, an area that continues to be a shortcoming for many companies’ AI efforts.
Table of Contents
1. AI adoption and impact
2. The differentiators of AI outperformance
3. Managing AI risks
1
AI adoption and impact
A majority of survey respondents now say their organizations have adopted AI capabilities, as AI’s impact on the bottom line is growing.
Findings from the 2021 survey indicate that AI adoption is continuing its steady rise: 56 percent of all respondents report AI adoption in at least one function, up from 50 percent in 2020. The newest results suggest that AI adoption since last year has increased most at companies headquartered in emerging economies, which include China, the Middle East, and North Africa: 57 percent of respondents report adoption, up from 45 percent in 2020. And across regions, the adoption rate is highest at Indian companies, followed closely by those in Asia–Pacific.
As we saw in the past two surveys, the business functions where AI adoption is most common are service operations, product and service development, and marketing and sales, though the most popular use cases span a range of functions. The top three use cases are service-operations optimization, AI-based enhancement of products, and contact-center automation, with the biggest percentage-point increase in the use of AI being in companies’ marketing-budget allocation and spending effectiveness.
57%
of respondents in emerging economies report adoption, up from 45 percent in 2020
27%
of respondents report at least 5% of EBIT attributable to AI
Finally, respondents say AI’s prospects remain strong. Nearly two-thirds say their companies’ investments in AI will continue to increase over the next three years, similar to the results from the 2020 survey.
AI adoption trends and impact
2
The differentiators of AI outperformance
The companies seeing the biggest bottom-line impact from AI adoption are more likely to follow both core and advanced AI best practices, including MLOps; move their AI work to the cloud; and spend on AI more efficiently and effectively than their peers.
We sought to understand more about the factors and practices that differentiate the best AI programs from all others: specifically, the organizations at which respondents attribute at least 20 percent of EBIT to their use of AI, our “AI high performers.” With adoption becoming ever more commonplace, we asked new questions about more advanced AI practices, particularly those involved in MLOps, a best-practice approach to building and deploying machine-learning-based AI that has emerged over the past few years.
While organizations seeing lower returns from AI are increasingly engaging in core AI practices, AI high performers are still more likely to engage in most of the core practices. High performers also engage in most of the advanced AI practices more often than others do.
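One of the MLOps practices the survey asks about is regularly monitoring production data for drift. As a purely illustrative sketch (none of this code comes from the survey), a team might compare a feature’s training distribution against live traffic using a population stability index; the 0.2 alert threshold is a common rule of thumb, not a survey finding.

```python
import math
import random

def population_stability_index(expected, actual, bins=10):
    """Compare a feature's training distribution ('expected') with live
    data ('actual'). PSI near 0 means the distributions match; values
    above ~0.2 are commonly treated as meaningful drift."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins
    # Clamp live values into the training range so every point lands in a bin.
    actual = [min(max(x, lo), hi) for x in actual]

    def share(sample, left, right, last_bin):
        hits = sum(1 for x in sample if left <= x < right or (last_bin and x == right))
        return max(hits / len(sample), 1e-6)  # floor avoids log(0)

    psi = 0.0
    for i in range(bins):
        left = lo + i * width
        right = hi if i == bins - 1 else lo + (i + 1) * width
        e = share(expected, left, right, i == bins - 1)
        a = share(actual, left, right, i == bins - 1)
        psi += (a - e) * math.log(a / e)
    return psi

random.seed(0)
training = [random.gauss(0, 1) for _ in range(5000)]
stable_live = [random.gauss(0, 1) for _ in range(5000)]   # same distribution
drifted_live = [random.gauss(1.5, 1) for _ in range(5000)]  # mean has shifted

print(population_stability_index(training, stable_live) < 0.2)   # True: no alert
print(population_stability_index(training, drifted_live) > 0.2)  # True: consider retraining
```

A check like this would typically run on a schedule against each model input, with the threshold tuned to the feature and the cost of retraining.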
3
Managing AI risks
Risk management remains a shortcoming for most companies’ AI efforts, but a set of emerging best practices can help.
When asked why companies aren’t mitigating all relevant risks, respondents most often say it’s because they lack capacity to address the full range of risks they face and have had to prioritize. Notably, the second-most common response from those seeing lower returns from AI adoption is that they are unclear on the extent of their exposure to AI risks (29 percent versus only 17 percent of AI high performers). And by geography, respondents in emerging economies are more likely than others to report that they are waiting until clearer regulations for risk mitigation are in place, and that they do not have the leadership buy-in to dedicate resources toward AI risk mitigation.
Additional survey results suggest a way forward for companies that continue to struggle with risk management in AI. We asked about a range of risk-mitigation practices related to model documentation, data validation, and checks on bias. And in most cases, the AI high performers are more likely than other organizations to engage in these practices.
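As a purely illustrative sketch of what a “check on bias” could look like in code (our example, not a practice taken verbatim from the survey), a team might compare a model’s approval rate across groups of a hypothetical protected attribute. The four-fifths threshold is a widely used rule of thumb from US employment-selection guidance, not a survey finding.

```python
def approval_rates(records):
    """records: list of (group, approved) pairs -> {group: approval rate}."""
    totals, approved = {}, {}
    for group, ok in records:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def passes_four_fifths_rule(records):
    """Flag potential disparate impact: the lowest group's approval rate
    should be at least 80% of the highest group's rate."""
    rates = approval_rates(records)
    lo, hi = min(rates.values()), max(rates.values())
    return hi == 0 or lo / hi >= 0.8

# Hypothetical model decisions for two groups, "a" and "b".
decisions = [("a", True)] * 50 + [("a", False)] * 50 + \
            [("b", True)] * 20 + [("b", False)] * 80

print(approval_rates(decisions))           # {'a': 0.5, 'b': 0.2}
print(passes_four_fifths_rule(decisions))  # False: investigate for bias
```

In practice this kind of check would be one gate among several (alongside documentation and data validation), run both before deployment and on an ongoing basis.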
About the research
The online survey was in the field from May 18 to June 29, 2021, and garnered responses from 1,843 participants representing the full range of regions, industries, company sizes, functional specialties, and tenures. Of those respondents, 1,013 said their organizations had adopted AI in at least one function and were asked questions about their organizations’ AI use. To adjust for differences in response rates, the data are weighted by the contribution of each respondent’s nation to global GDP.
* Some data and analyses were updated in September 2022.
About the author(s)
The survey content and analysis were developed by Michael Chui, a partner of the McKinsey Global Institute and a partner in McKinsey’s Bay Area office; Bryce Hall, an associate partner in the Washington, DC, office; Alex Singla, a senior partner in the Chicago office; and Alex Sukharevsky, a senior partner in the Moscow office.
The authors wish to thank Jacomo Corbo, David DeLallo, Liz Grennan, Heather Hanselman, and Kia Javanmardian for their contributions to this article.
This article was edited by Daniella Seiler, a senior editor in the New York office.
Top use cases
Most commonly adopted AI use cases,¹ by function, % of respondents
Service-operations optimization: 27
New AI-based enhancements of products: 22
Contact-center automation: 22
Product-feature optimization: 20
Predictive service and intervention: 18
Customer-service analytics: 17
Creation of new AI-based products: 17
Customer segmentation: 16
Risk modeling and analytics: 16
Fraud and debt analytics: 14
¹Out of 39 use cases. Question was asked only of respondents who said their organizations have adopted AI in at least 1 business function.
Use cases by function
The most popular AI use cases span a range of functional activities.
Most commonly adopted AI use cases within each business function, % of respondents¹

Service operations²: Service-operations optimization (27), Contact-center automation (22)
Product and/or service development: New AI-based enhancements of products (22), Product-feature optimization (20)
Marketing and sales: Customer-service analytics (17), Customer segmentation (16)
Risk: Risk modeling and analytics (16), Fraud and debt analytics (14)
Manufacturing: Predictive maintenance (12), Yield, energy, and/or throughput optimization (11)
Supply-chain management: Logistics-network optimization (11), Sales and demand forecasting (11)
Human resources: Optimization of talent management (8), Performance management (8)
Strategy and corporate finance: Capital allocation (7), Treasury management (6)

¹Question was asked only of respondents who said their organizations have adopted AI in a given function.
²Eg, field services, customer care, back office.
Revenue increase and cost decrease
In certain functions, respondents report lower levels of cost decreases from AI adoption in the pandemic’s first year, while revenue increases held steady.

[Exhibit: Revenue increase from AI adoption by function, % of respondents¹ (increase by >10%, by 6–10%, by ≤5%), and cost decrease from AI adoption by function, % of respondents¹ (decrease by <10%, by 10–19%, by ≥20%), each shown for fiscal years 2019 and 2020 across service operations, manufacturing, human resources, marketing and sales, risk, supply-chain management, product and/or service development, strategy and corporate finance, and the average across all activities.]

¹Question was asked only of respondents who said their organizations have adopted AI in a given function. Respondents who said “no change,” “revenue decrease” (or “cost increase”), “not applicable,” or “don’t know” are not shown.
Organizations seeing the highest returns from AI are more likely to follow both core and more advanced best practices.
Share of respondents reporting their organizations engage in each practice,¹ % of respondents (AI high performers² vs all other respondents)

Core:
Have well-defined capability-building programs to develop technology personnel’s AI skills
AI-development teams follow standard protocols for building and delivering AI tools
Have a clear framework for AI governance that covers the model-development process
Have protocols in place to ensure good data quality
Have well-defined processes for data governance
Track the performance of AI models to ensure that process outcomes and/or models improve over time
Test the performance of our AI models internally before deployment
Use design thinking when developing AI tools

Advanced data:
Generate synthetic data to train AI models when we lack sufficient natural data sets
Have well-defined processes for data governance
Have scalable internal processes for labeling AI training data
Rapidly integrate internal structured data to use in our AI initiatives
Have a data dictionary that is accessible across the enterprise

Advanced models, tools, and technology:
Use a standardized end-to-end platform for AI-related data science, data engineering, and application development
Use external third-party services to test, validate, verify, and monitor the performance of our AI models
Design AI models with a focus on ensuring they are reusable
Refresh our AI/ML tech stack at least annually to take advantage of the latest technological advances
Have techniques and processes in place to ensure that our models are explainable
Regularly refresh our AI models, based on clearly defined criteria for when and why to do so
Take a full life-cycle approach to developing and deploying AI models

User enablement:
A dedicated training center develops nontechnical personnel’s AI skills through hands-on learning
There are designated channels of communications and touchpoints between AI users and the organization’s data science team
Users are taught how to use the model
Users are consulted throughout the design, development, training, and deployment phases
Users are taught the basics of how the models work

¹Practices shown here are representative of those with the highest deltas between AI high performers and other respondents. Not all practices are shown.
²Respondents who said that at least 20 percent of their organizations’ earnings before interest and taxes (EBIT) in 2020 was attributable to their use of AI.
There’s evidence that engaging in such practices is helping high performers industrialize and professionalize their AI work, which leads to better results and greater efficiency and predictability in their AI spending. Three-quarters of AI high performers say the cost to produce AI models has been on par with or even less than they expected, whereas half of all other respondents say their companies’ AI project costs were higher than expected. Going forward, the AI high performers’ work could push them farther ahead of the pack, since both groups plan to increase their spending on AI by roughly the same amount.
Compared with their peers, the high performers’ AI spending is more efficient and predictable.
Typical costs for AI model production, compared with expected,¹ % of respondents

                        More than expected | About the same | Less than expected | Don’t know
AI high performers²              23        |       55       |         20         |     2
All other respondents            51        |       34       |          8         |     8

¹Figures may not sum to 100%, because of rounding.
²Respondents who said that at least 20 percent of their organizations’ earnings before interest and taxes (EBIT) in 2020 was attributable to their use of AI.
The survey results also suggest that AI high performers could be gaining some of their efficiency by using the cloud. Most companies—whether they are high performers or not—tend to use a mix of cloud and on-premises platforms for AI similar to what they use for overall IT workloads. But the high performers use cloud infrastructure much more than their peers do: 64 percent of their AI workloads run on public or hybrid cloud, compared with 44 percent at other companies. This group is also accessing a wider range of AI capabilities and techniques on a public cloud. For example, they are twice as likely as the rest to say they tap the cloud for natural-language-speech understanding and facial-recognition capabilities.
64%
of high performers’ AI workloads run on public or hybrid cloud, compared with 44 percent at other companies.
No matter a company’s AI performance, risk management remains an area where many have room to improve—which we have seen in previous survey results. Cybersecurity remains the most recognized risk among respondents, yet a smaller share says so than did in 2020, despite the rising threat of cyberincidents seen throughout the COVID-19 pandemic. On a positive note, respondents report increasing focus on equity and fairness as a relevant risk and one that their companies are mitigating.
Across regions, survey respondents report some notable changes since the previous survey and very different opinions on cybersecurity risks. In developed economies, their views on the biggest risks have held relatively steady since 2020, though 57 percent (versus 63 percent last year) cite cybersecurity as a relevant AI risk. In emerging economies, respondents report a more dramatic decline in the relevance and mitigation of several of the top risks. Yet, they also report personal and individual privacy as a relevant AI risk more often.
The results also suggest that AI’s impact on the bottom line is growing. The share of respondents reporting that at least 5 percent of their earnings before interest and taxes (EBIT) is attributable to AI has increased year over year to 27 percent, up from 22 percent in the previous survey.
Meanwhile, AI’s revenue and cost-saving benefits have held steady or even decreased since the previous survey, especially for supply-chain management, where AI was unlikely to compensate for the pandemic era’s global supply-chain challenges.
¹“Emerging economies” includes respondents in the Association of Southeast Asian Nations, China, India, Latin America, the Middle East, North Africa, South Asia, and sub-Saharan Africa; “developed economies” includes respondents in developed Asia, Europe, and North America. Question was asked only of respondents who said their organizations have adopted AI in ≥1 business function. Those who answered “don’t know” are not shown.
²That is, the ability to explain how AI models come to their decisions.
Relevant risks and mitigated risks
The management of AI risks remains an area for significant improvement, as respondents report a waning focus on cyber—especially in emerging economies.

[Exhibit: AI risks that organizations consider relevant, and AI risks that organizations are working to mitigate, % of respondents by office headquarters,¹ shown for 2021 and 2020 in developed and in emerging economies. Risks surveyed: cybersecurity, regulatory compliance, explainability,² personal/individual privacy, organizational reputation, equity and fairness, workforce/labor displacement, physical safety, national security, and political stability.]
Organizations seeing the highest returns from AI engage in risk-mitigation practices more often than others.
Share of respondents reporting their organizations engage in each practice,¹ % of respondents (AI high performers² vs all other respondents)

Model documentation:
Document model architecture
Document model performance on an ongoing basis
Record information about both the training data set and the model-training process
Document data flows
Document known issues and/or trade-offs with the model
Documentation enables a clear understanding of the relative weight that our data’s inputs have on the model’s output
Document the risk-mitigation strategies applied to both the model and its underlying data

Measuring model bias and accuracy:
Regularly monitor for data drift and/or concept drift
Retrain our models when issues are detected
Have a human-in-the-loop verification phase of model deployment that expressly tests and controls for model bias
Model users are taught how to monitor for issues
Test for different outcomes based on a change to protected characteristics
Have mechanisms in place to monitor for model bias specifically
Refresh our models based on clearly defined criteria for how frequently they need to be updated

Training and testing data:
Data professionals actively check for skewed or biased data during data ingestion
Scan training and testing data to detect the underrepresentation of protected characteristics and/or attributes
Increase the representation of protected characteristics and/or attributes in our training and testing data as needed
Data professionals actively check for skewed or biased data at several stages of model development
Legal and risk professionals work with data-science teams to help them understand definitions of bias and protected classes
Have a dedicated governance committee that includes risk and legal professionals

¹Practices shown here are representative of those with the highest deltas between AI high performers and other respondents. Not all practices are shown.
²Respondents who said that at least 20 percent of their organizations’ earnings before interest and taxes (EBIT) in 2020 was attributable to their use of AI.
[1] We define “adoption” as the use of AI capabilities (e.g., machine learning, computer vision, natural-language processing) in at least one business function.