Mythbusting AI for AML: Efficiency, explainability, and regulation

It might feel like every conference, webinar, and white paper in the compliance industry is talking about AI and its transformative potential for financial crime risk management.
But ‘AI’ is an umbrella term for a suite of technologies and – more importantly – practical applications that firms are evaluating. At ComplyAdvantage, we see use cases and challenges that broadly fall into three buckets: efficiency, explainability, and regulation.
Using data from our global survey of 600 compliance leaders, firms can benchmark how their organization approaches AI-based technologies for financial crime compliance across these three categories.
Efficiency
With the rising complexity and volume of financial crime, doing more without investing in more resources is a top priority: 89% of compliance officers are ‘very or somewhat comfortable’ compromising explainability in exchange for greater automation and efficiency.
But how does this measure up against the expectations of regulators and customers?
Explainability
Compliance leaders have a range of views on who their most critical stakeholders are when it comes to providing evidence of decisions made by AI-based systems. This lack of consensus reflects each group's importance – and interconnectedness.
Unhappy customers could, for example, trigger negative media coverage, which invites greater scrutiny from board directors and regulators.
Whose understanding of your AI-based financial crime solutions concerns you the most?
Customers (e.g. onboarding; access to new products/services) – 53%
Internal stakeholders (e.g. other compliance team stakeholders) – 53%
Board of directors (e.g. demonstrating value for money) – 52%
Potential investors (e.g. showing innovation or efficiency) – 51%
Regulators (e.g. demonstrating compliance) – 51%
Auditors (e.g. showing work done on a case) – 40%
Regulation
68% of compliance leaders say they have a ‘good understanding’ of how AI will be regulated in the primary jurisdiction they operate in.
However, whereas firms told us they’re comfortable with some level of compromise when it comes to explainability, regulators have taken a different tone:
In the US, the Biden administration included ‘Notice and Explanation’ as a pillar of its Blueprint for an AI Bill of Rights: “You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you.”
The Monetary Authority of Singapore has said that to increase public confidence, the use of AI and data analytics should be “proactively disclosed to data subjects as part of general communication.”
So when firms are asked how prepared they are to meet legislative changes related to AI usage – and 59% answer ‘very well prepared’ while a further 39% say ‘somewhat prepared’ – is there a disconnect that could become a tension point in 2024?

Download The State of Financial Crime 2024
Take an in-depth look at the trends and challenges our experts believe will shape 2024, built on a survey of 600 senior compliance leaders.
Get the full guide