Lareina Yee
McKinsey commentary
Senior partner, McKinsey; chair, McKinsey Technology Council
Responsible AI needs to start on day one, and there is still much work to be done in terms of education and action. It begins with a company’s values—organizations must establish clear principles for how they apply generative AI (gen AI) and set up guardrails to ensure its safe implementation. For example, taking data security seriously means ensuring that company-level data and prompts remain within the enterprise walls. For that to happen, the enterprise must have secure contracts with large language model and application providers, as well as robust training, so that employees understand the difference between enterprise tools and public tools and do not inadvertently enter code or proprietary data into public models.
Responsible AI also begins upstream of compliance and monitoring. Companies leading in gen AI deployment build risk practices into the development of their AI applications, which includes ensuring that technical teams understand risk and mitigation practices. Gen AI solutions are probabilistic models that can make mistakes or inadvertently amplify biases in their training data, so testing models before they are deployed is essential. Without a robust testing approach, it is hard to deliver on responsible AI.
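To make the idea of pre-deployment testing concrete, here is a minimal sketch of an evaluation harness that screens model responses for obvious risk signals before release. Everything in it is illustrative rather than drawn from any particular framework: the blocked terms, the `check_response` and `run_eval` helpers, and the stub model standing in for a real LLM call are all hypothetical.

```python
# Minimal sketch of a pre-deployment test harness for a gen AI application.
# All names here (BLOCKED_TERMS, check_response, run_eval, stub_model) are
# illustrative assumptions, not part of any real library or product.

# Hypothetical proprietary markers that must never appear in model output.
BLOCKED_TERMS = ["ACME-PROJ-7", "internal-roadmap"]


def check_response(prompt: str, response: str) -> list[str]:
    """Return a list of risk findings for one model response."""
    findings = []
    for term in BLOCKED_TERMS:
        if term.lower() in response.lower():
            findings.append(f"leaked proprietary term: {term}")
    if not response.strip():
        findings.append("empty response")
    return findings


def run_eval(model, test_prompts):
    """Run each prompt through the model; collect findings per prompt."""
    report = {}
    for prompt in test_prompts:
        response = model(prompt)
        findings = check_response(prompt, response)
        if findings:
            report[prompt] = findings
    return report


def stub_model(prompt: str) -> str:
    """Stand-in for a real LLM call, kept deterministic so the sketch runs."""
    if "roadmap" in prompt:
        return "Details of ACME-PROJ-7 are..."
    return "I can help with that."


if __name__ == "__main__":
    report = run_eval(stub_model, ["What's on the roadmap?", "Summarize this memo."])
    print(report)  # any non-empty report blocks deployment until reviewed
```

In practice, such a harness would be far broader—covering bias probes, adversarial prompts, and statistical checks over many sampled outputs—but even a simple gate like this forces teams to define what an unacceptable response looks like before the application ships.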
Finally, companies must develop a clear governance model to help ensure that gen AI applications conform to governing principles. What we see in the survey results and in our conversations with clients is a growing awareness of responsible AI and an urgency to get it right. Still, even with increasing understanding, a little less than one-quarter of the respondents in our survey report having a clear process to embed risk mitigation in their solutions. Moving from awareness to action will be critical.