McKinsey commentary
Alexander Sukharevsky
Senior partner and global leader of QuantumBlack, AI by McKinsey
There is broad awareness of the risks associated with generative AI. At the same time, the prevailing anxiety and fear are making it challenging for leaders to address those risks effectively. As our latest survey shows, just over 20 percent of companies have risk policies in place for generative AI. These policies tend to focus on protecting a company’s proprietary information, such as data, knowledge, and other intellectual property. That protection is critical, but we’ve found that many of these risks can be addressed through changes to the business’s technology architecture that reflect established policies.
The real trap, however, is that companies view the risk too narrowly. There is a significant range of risks (social, humanitarian, sustainability) that companies need to pay attention to as well. In fact, the unintended consequences of generative AI are more likely to create issues for the world than the doomsday scenarios that some people espouse. Companies approaching generative AI most constructively are experimenting with and using it while maintaining a structured process to identify and address these broader risks. They designate beta users and dedicated teams to consider how generative AI applications could go off the rails, so they can better anticipate those consequences. They also work with the best and most creative people in the business to define the best outcomes for both the organization and society more generally. Being deliberate, structured, and holistic about understanding the nature of the emerging risks, and opportunities, is crucial to the responsible and productive growth of generative AI.
