
We are seeing an explosion of companies talking about replacing the manual work done today by risk and compliance analysts, and even second-line-of-defence roles. While we see the product development, we see very little related sales activity today, because regulated entities have requirements around accuracy, transparency, and model risk that are unlikely to be met by a third-party purchase.

You are hardly going to win a regulatory audit by pointing to a third-party AI product's assessment that let AML or CFT cases pass through the gate. AI will have to be incredibly accurate; directional correctness will not pass vendor due diligence. There remains strong reluctance on the buy side, from smaller regulated entities up to the largest.

We have sat through RFPs with some of the largest banks and fintechs; here is why most of the proposals have failed at vendor due diligence, model risk, or other risk and compliance committees.

Complexity of Regulatory Interpretation

Regulations often involve complex, nuanced language that requires interpretation beyond mere text analysis. Legal and regulatory texts are not just sets of rules but frameworks that need contextual understanding. Human compliance officers bring years of experience, legal knowledge, and the ability to interpret these frameworks in the context of a specific organization. They understand the intent behind regulations, which allows them to make judgment calls on ambiguous or evolving issues. AI, while powerful in processing large volumes of data, lacks the ability to understand context, intent, and nuances that are critical in interpreting complex regulations.

Ethical and Moral Judgments

Compliance is not just about following the letter of the law; it also involves ethical decision-making. Human compliance officers are required to navigate gray areas where the law may not provide clear guidance. These situations often require moral judgments, weighing the potential risks and benefits of certain actions against ethical standards. AI, which operates based on predefined algorithms and data, does not possess the ability to make ethical or moral judgments. Its decisions are based on patterns in data rather than a deep understanding of human values, which are essential in maintaining an organization’s ethical integrity.

Responsiveness to Regulatory Changes

Regulatory environments are dynamic and can change rapidly in response to new risks, technological advancements, or shifts in political landscapes. Human compliance professionals are adept at quickly adapting to these changes, interpreting new regulations, and integrating them into the organization’s processes. AI, by contrast, can only be updated with new data and may struggle to keep up with the rapid pace of change, especially where new regulations require immediate, nuanced interpretation and implementation. AI systems need continuous retraining and data updates to remain relevant, which can be resource-intensive and time-consuming.

Human Interaction and Relationship Management

A significant aspect of compliance involves interaction with regulators, internal stakeholders, and external parties. Human compliance officers engage in negotiations, discussions, and relationship management, which are critical for maintaining an organization’s reputation and ensuring a mutual understanding of compliance expectations. These interactions often require empathy, negotiation skills, and the ability to navigate complex interpersonal dynamics. AI, which lacks emotional intelligence and the ability to build trust, cannot replace these human-centric functions.

Accountability and Liability

In compliance, accountability is crucial. Organizations must be able to attribute decisions to responsible individuals or teams, particularly when things go wrong. Human compliance officers are accountable for their decisions, which are informed by their expertise, judgment, and understanding of the organization’s risk appetite. AI, however, introduces challenges in accountability. If an AI system makes an incorrect compliance decision, determining responsibility becomes complex, potentially leading to legal and reputational risks. The lack of a clear chain of accountability with AI systems can be a significant barrier to fully automating compliance functions.

Limitations in Data and Algorithm Bias

AI systems are only as good as the data they are trained on and the algorithms that drive them. In compliance, biased data or flawed algorithms can lead to incorrect or unfair decisions, potentially putting the organization at risk of regulatory breaches. Moreover, AI may struggle with incomplete or poor-quality data, which is a common challenge in compliance, where information may be scattered across various sources. Human compliance officers can apply their judgment to fill in gaps, assess the credibility of sources, and make informed decisions despite imperfect data, a capability that AI lacks.

Conclusion

While AI has the potential to greatly enhance compliance functions by automating routine tasks, analyzing large datasets, and identifying patterns, it cannot fully replace the role of human compliance professionals. The complexity of regulatory interpretation, the need for ethical and moral judgments, the dynamic nature of regulations, the importance of human interaction, accountability concerns, and the limitations of AI in dealing with biased or incomplete data all underscore the irreplaceable value of human involvement in compliance.

Organizations should view AI as a powerful tool that can support and augment human compliance efforts rather than as a replacement. By combining the strengths of AI with the expertise, judgment, and interpersonal skills of human compliance officers, organizations can achieve a more robust, effective, and ethical compliance function.
