Dear Compliance Officers,
This is for you, not your product or innovation teams, and certainly not your interns.
You’ve probably recently attended a fintech event where a speaker talked about the use of AI in compliance and the wonders it can do to strengthen and automate your compliance program. Or perhaps you saw a software vendor demo new AI-driven compliance capabilities. You’re pumped and excited about the possibility of ending hours of manual work with a new AI solution. You are sure it is going to solve all your problems. There is no doubt about it: AI is the future of compliance in financial services.
It’s great to see companies like Ascent Regtech, Cube Global and APIAX leading the growth of AI in compliance with regulatory interpretations and cognitive mapping of regulatory content. However, despite the advancements of applied artificial intelligence, there is a general reluctance to trust outputs, analyses, and conclusions that are purely computer-driven, especially in an industry where the stakes and penalties are high. The mistrust stems from common myths about AI, a lack of understanding of how AI systems function, and uncertainty about how to use AI for maximum benefit. This is exactly why you’ve made little progress towards using AI for compliance.
It’s not your fault. In fact, it’s an industry-wide problem. Overcoming the trust factor and outlining key steps to adopting AI are critical to the widespread, practical application of AI for compliance in all financial services companies, not just globally systemic banks. Deepening your understanding of how intelligent systems work also empowers faster, proactive adoption instead of the “wait and see” approach already common in many financial institutions.
But, before we can discuss how artificial intelligence truly works, we have to break the myths.
1. AI is universally intelligent

False. There are multiple dimensions and measurements of intelligence. AI that is designed to do one task well is unlikely to be suitable, or deemed intelligent, for other tasks. This is similar to believing the myth that artificial intelligence will outpace human intelligence. Read more about these myths here.
2. AI solutions will tell compliance officers exactly what to do
Another lie. AI cannot tell compliance officers exactly what to do because it lacks the context to do so. Regulatory context is very nuanced and must be provided to AI algorithms for successful outputs. Basic context may include the type of organization and its business activities, but deeper levels of context include liberal or conservative approaches to compliance, the resources available to fulfill those requirements, and so on. Instead, AI can work alongside compliance officers to establish better compliance frameworks for faster responses to regulatory changes. Remember, though, that when the regulators come knocking on your door, it isn’t the AI that will answer their questions; you will. So unless you are looking for early retirement, AI should not and will not tell you exactly what to do.
3. AI will “know” and “understand” things that you don’t
AI doesn’t “know” anything. Period. But it can process vast amounts of data to perform lengthy analyses that would take hundreds of human hours. It is therefore a way to work faster, provided there is a specific, desirable outcome defined for the AI. In the end, though, AI doesn’t “understand” or “know” what it has analyzed. It is simply measuring its results against predetermined goals.
Now, with fear out of the way, you can really start to understand AI as a support system for the work and processes you as a compliance officer are managing on a daily basis.
1. Does everything need AI?

No. Sending out multiple emails to your stakeholders or formatting reports probably doesn’t need AI. If you have a task that must be done repeatedly at scale AND involves a lot of data calculations to arrive at reliable results, then chances are it can benefit from AI. Attempting to use AI for everything is probably overkill and a waste of time.
2. Effective AI requires a specific goal, clear context, and A LOT of data.
Using AI successfully requires a clear, data-driven goal and a lot of data. Data is how algorithms learn and drive better results through repetition. In addition, being clear about the context of the data and the application of AI is just as important for success as a specific goal. Context may be who is reading the outputs, geography, industry, sentiment, or a bias that can be used as a lens to view and analyze the data.
3. AI systems and self-learning algorithms are based on processes
The key here is that there is a well-defined process that generates results, and the algorithms learn from those results to generate new, better working algorithms. Without a well-defined process to start, AI won’t be of much help.
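To make this cycle concrete, here is a minimal, hypothetical sketch in Python. The transaction amounts, labels, and threshold logic are all made up for illustration; no real AI library or compliance data is involved. It only shows the loop described above: a well-defined process generates results, feedback on those results is measured, and the algorithm adjusts itself accordingly.

```python
# Toy "self-learning" loop: a well-defined process (flagging transactions
# above a threshold) generates results, and the algorithm adjusts its own
# parameter based on reviewer feedback about those results.

def flag(transactions, threshold):
    """The well-defined process: flag any amount above the threshold."""
    return [amount > threshold for amount in transactions]

def learn_threshold(transactions, should_flag, threshold=10_000.0,
                    step=500.0, rounds=50):
    """Adjust the threshold based on labeled feedback (the 'results')."""
    for _ in range(rounds):
        flags = flag(transactions, threshold)
        # Compare the process's results against the reviewer's labels.
        misses = sum(1 for f, y in zip(flags, should_flag) if y and not f)
        false_alarms = sum(1 for f, y in zip(flags, should_flag) if f and not y)
        if misses > false_alarms:
            threshold -= step   # too lenient: lower the bar
        elif false_alarms > misses:
            threshold += step   # too strict: raise the bar
        else:
            break               # errors balanced: stop adjusting
    return threshold

# Hypothetical labeled data supplied by a compliance reviewer.
amounts = [1200.0, 4800.0, 9500.0, 15000.0, 22000.0]
labels = [False, False, True, True, True]  # reviewer says flag these
tuned = learn_threshold(amounts, labels)
```

Notice that the learning is only meaningful because the underlying process (`flag`) is well defined and its results can be compared against a goal; without that, there is nothing for the loop to improve.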
In short, most problems don’t require AI, the AI is only as good as the data it receives, and AI can only interpret data within the narrow confines of a supplied context.
Acknowledging that data and context are the two most important constraints is critical to setting realistic expectations for how well compliance teams will be able to use and benefit from AI solutions. Furthermore, it helps you establish stronger frameworks to evaluate the capabilities of vendors and their technology.
The key to success is looking for a practical application of AI that solves one pressing problem very, very well. If you have a challenge in mind, or are struggling with where to start, Compliy would love to help. Get in touch with us.