Building Responsible AI
Business leaders, data scientists, and engineers have a responsibility to guide AI product development toward responsible outcomes. Here are ten guidelines—grouped by the key phases of a product development lifecycle—to help leaders responsibly navigate AI product development.
The key phases: Assess and prepare; Design, build, and document; and Validate and support.
1. Assess merit
Assess merit of developing the product, considering organizational values and business objectives.
What to ask
• What are the primary use cases and benefits of the proposed product? Which uses are explicitly out of scope?
• What is the desired business outcome for this product? How will the business impact be measured?
• How might the operation of the product and use of its outputs for business decisions encroach on core organizational values?
2. Diverse perspectives, defined roles
Assemble a team reflecting diverse perspectives, with clearly defined roles and responsibilities.
What to ask
• Do we have a diverse (e.g., gender, age, ethnicity), multidisciplinary team with a range of functional expertise?
• What perspectives or expertise are missing, and how can we introduce them, including sources outside the team or organization?
• Is the team structured so that domain experts can impact relevant design choices?
3. Assess impact
Assess potential product impact by including input from domain experts and potentially impacted groups.
What to ask
• What are the foreseeable modes of failure for this product? What edge scenarios could lead to failure and harm?
• What are the societal and environmental implications of foreseeable product failure, misuse, or malicious attack?
• What are the product’s potential unplanned uses?
• What external SMEs or groups could provide input informing design choices that would reduce the risk of negative societal impact and harm to individuals directly or indirectly affected by the AI product?
4. Minimize risk
Evaluate data and system outcomes to minimize the risk of fairness harms.
What to ask
• What fairness metrics, tests, and shipping criteria will we use (one illustrative check is sketched after this list)? How will the product team validate that the training data, including data collected via APIs, captures the different groups and types of people likely to be impacted by the system's output?
• How will the product team measure whether the AI product’s outcomes are consistent with the chosen objective (i.e., avoid target leakage), fairness metrics, tests, and shipping criteria across a wide variety of potentially impacted groups or intersections of groups?
• How will the product team ensure continued adherence to fairness metrics, tests, and criteria post-deployment?
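To make "fairness metrics, tests, and shipping criteria" concrete, the sketch below computes a demographic parity gap over scored records and checks it against an example shipping threshold. The column names, sample data, and 0.10 limit are illustrative assumptions, not a recommended standard.

```python
# A minimal sketch of one possible fairness check: the demographic parity gap,
# i.e., the largest difference in positive-prediction rates across groups.
# Data, column names, and threshold are illustrative assumptions.
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, group_col: str, pred_col: str) -> float:
    """Largest gap in positive-prediction rates across groups."""
    rates = df.groupby(group_col)[pred_col].mean()
    return float(rates.max() - rates.min())

# Hypothetical scored records: 1 = model recommends approval.
scored = pd.DataFrame({
    "group":    ["A", "A", "B", "B", "B", "A"],
    "approved": [1, 0, 1, 1, 0, 1],
})

gap = demographic_parity_gap(scored, "group", "approved")
MAX_ALLOWED_GAP = 0.10  # example shipping criterion agreed with stakeholders
print(f"parity gap = {gap:.2f}; ship check {'passes' if gap <= MAX_ALLOWED_GAP else 'fails'}")
```

Teams typically track several such metrics and choose thresholds together with the groups likely to be affected.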
5. Mitigate impact
Design the AI product to mitigate the potential negative impact on society and the environment.
What to ask
• If negative impact (e.g., from system failure, unplanned use, abuse, attack, or simply a side effect of normal use) is possible, what design processes (e.g., human-centered design) and choices can reduce, mitigate, or control it?
• What design choices will help minimize the adverse environmental impact of the product outputs and related decisions? What design choices are critical to ensuring proper use, legitimate and transparent data collection, and respect for user privacy?
6. Enable human control
Incorporate features to enable human control.
What to ask
• How is the product team designing the product to empower humans by augmenting their decision making, streamlining tasks, or otherwise making them more effective? Which decisions or functions require human oversight as a critical component of the AI product?
• What mechanisms (e.g., interpretability) will support end user comprehension of the system to enable continuous audit, monitoring, and human intervention? (One illustrative interpretability sketch follows this list.)
• What product features allow users to customize AI performance?
• What channels will the product utilize to collect live feedback?
• What product features will ensure inclusive experiences for people with disabilities?
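As one illustration of an interpretability mechanism that can support audit and human oversight, the sketch below ranks features by permutation importance using scikit-learn. The dataset and model are placeholders; a real product would translate such signals into explanations the end user can act on.

```python
# A minimal sketch of one interpretability mechanism: permutation feature
# importance on a trained model. Dataset and model are placeholders.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Which inputs most influence held-out predictions? A starting point for audit.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top = sorted(zip(X.columns, result.importances_mean), key=lambda t: t[1], reverse=True)[:5]
for name, score in top:
    print(f"{name}: {score:.3f}")
```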
7. Safeguard
Take measures to safeguard data and AI products.
What to ask
• How will the product team prioritize data privacy across product design with respect to cloud infrastructure, encryption, anonymization, and access control? (A small pseudonymization sketch follows this list.)
• How will the product team make sure the product doesn’t inadvertently disclose sensitive or private information during use (e.g., indirectly inferring user locations or behavior)?
• What methods will the team use to identify and address security vulnerabilities, including those such as data poisoning that are unique to AI products?
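As a small illustration of the anonymization point, the sketch below pseudonymizes a direct identifier with a keyed hash before the record enters analytics storage. The key handling, field choice, and record shape are assumptions for illustration, not a complete privacy design.

```python
# A minimal sketch of pseudonymizing a direct identifier with a keyed hash.
# Key management, retention, and re-identification controls are out of scope here.
import hashlib
import hmac
import os

# In practice the key would come from a secrets manager, never from source code.
PSEUDONYMIZATION_KEY = os.environ.get("PSEUDO_KEY", "dev-only-key").encode()

def pseudonymize(value: str) -> str:
    """Deterministically map an identifier to an opaque token."""
    return hmac.new(PSEUDONYMIZATION_KEY, value.encode(), hashlib.sha256).hexdigest()

record = {"user_id": "jane.doe@example.com", "clicks": 12}
safe_record = {**record, "user_id": pseudonymize(record["user_id"])}
print(safe_record)
```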
8. Enable transparency
Document throughout the development lifecycle to enable transparency.
What to ask
• What standards and processes are in place to ensure the entire product team consistently documents design and development choices, rationales, and assumptions?
• How can the product team best keep track of data sources and their authorized uses?
• What types of models, tools, or techniques will we use to document product behavior?
9. Validate performance
Validate product performance and test for unplanned failures as well as foreseeable misuse unique to AI products.
What to ask
• What are the target environment and conditions under which this product can be expected to function properly and safely?
• How will the product team validate the AI product’s performance against technical standards and benchmarks?
• How will the product team validate the AI product's performance against agreed-upon business KPIs and metrics, tests, and criteria?
• How will the system be tested and evaluated for safe and effective operations (e.g., graceful failure) in both business-as-usual and edge-case scenarios?
• What are the mechanisms for continuously monitoring business, technical, and fairness performance, as well as for model drift post-production? (A minimal drift-check sketch follows this list.)
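To make the post-production monitoring question tangible, the sketch below compares a live feature distribution against its training baseline with a two-sample Kolmogorov-Smirnov test. The synthetic data, p-value threshold, and alerting step are assumptions; real monitoring would run per feature on production telemetry.

```python
# A minimal sketch of one drift check: compare a live feature distribution
# against its training baseline with a two-sample Kolmogorov-Smirnov test.
# Synthetic data and threshold are illustrative assumptions.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
baseline = rng.normal(loc=0.0, scale=1.0, size=5_000)  # stand-in for training data
live = rng.normal(loc=0.3, scale=1.0, size=1_000)      # stand-in for recent traffic

stat, p_value = ks_2samp(baseline, live)
DRIFT_P_THRESHOLD = 0.01  # example criterion; tune per feature and traffic volume
if p_value < DRIFT_P_THRESHOLD:
    print(f"drift suspected (KS={stat:.3f}, p={p_value:.4f}); trigger review")
else:
    print("no significant drift detected")
```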
10. Communicate
Communicate design choices, performance, limitations, and safety risks to end users.
What to ask
• How will the outputs of the system be communicated in a way that helps end users understand how the system works?
• How will the product team make sure end users understand the primary use case, underlying assumptions, and limitations of the product?
• What information and instructions should the product team provide to the end user(s) to enable safe and reliable use?
Source: Ten Guidelines for Product Leaders to Implement AI Responsibly