Building a Customer Risk Model for AML
Every customer you have represents a risk. Today most banks, casinos and money services businesses (MSBs) adopt technology solutions to monitor transactions, but what about having the full picture of a customer to determine more accurately the risk they pose?
If I asked you how risky any given customer is to you, right this second, would you be able to tell me? Would you have a clear picture of what the customer’s typical activity patterns look like, if they are on a sanctions list, if they are a politically exposed person (PEP), if they have particularly risky relationships, or if they’ve ever been investigated by regulators? All of these factors directly influence the risk the customer presents to you.
Most financial institutions use a model in their AML program to determine a customer’s risk level. I spoke with a few customers to determine what they consider most important and consolidated the insights below.
How do you balance complexity and interpretability?
When it comes to risk scoring, you have to consider the interpretability of the model versus the mathematical accuracy. Can you reasonably defend why one customer’s risk score is significantly higher than another? Some models are “black box” and—while mathematically accurate—can be difficult to interpret unless there is a concerted effort to validate the score.
Some more interpretable models are made up of risk factors, weights and logic to calculate the score. The risk factors customers shared with me may be grouped into three areas:
The profile category includes factors specific to the customer: their industry, nature of business, profession, country of residence, citizenship, length of relationship and so on.
Activity includes their level of cash transactions, alerts generated from the transaction monitoring system, suspicious activity reports (SARs) and currency transaction reports (CTRs) filed, etc.
Relationships examines the strength of the connections between customers. These can be determined using demographics (e.g., householding) or based on the parties’ transactions that, directly or indirectly, imply a connection.
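To make the weighted-factor approach concrete, here is a minimal sketch of an interpretable score: each factor is rated on a 0–1 scale, multiplied by a weight, and summed. The factor names, weights and the 0–100 scale are illustrative assumptions, not a prescribed model.

```python
# Illustrative weights spanning the three categories: profile, activity,
# and relationships. These values are assumptions for the sketch.
FACTOR_WEIGHTS = {
    # Profile
    "industry_risk": 0.20,
    "country_of_residence_risk": 0.15,
    "length_of_relationship": 0.10,
    # Activity
    "cash_intensity": 0.20,
    "monitoring_alerts": 0.15,
    "sars_filed": 0.10,
    # Relationships
    "risky_relationship_exposure": 0.10,
}

def risk_score(factors: dict) -> float:
    """Weighted sum of 0-1 factor ratings, normalized to a 0-100 score."""
    total = sum(FACTOR_WEIGHTS[name] * factors.get(name, 0.0)
                for name in FACTOR_WEIGHTS)
    return round(100 * total, 1)

customer = {
    "industry_risk": 0.8,              # e.g., a cash-intensive business
    "country_of_residence_risk": 0.3,
    "length_of_relationship": 0.2,     # newer relationships rated riskier
    "cash_intensity": 0.9,
    "monitoring_alerts": 0.5,
    "sars_filed": 0.0,
    "risky_relationship_exposure": 0.4,
}
print(risk_score(customer))  # → 52.0
```

Because every point of the final score traces back to a named factor and weight, it is straightforward to defend why one customer scores higher than another.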
What are some of the nuances in making models operational?
Models will not always score every customer accurately, so most organizations need a way to adjust the score for select customers. Obviously, this can be risky so it must be controlled and auditable.
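One way to keep manual adjustments controlled and auditable is to record every override as its own approved artifact rather than editing the score directly. The sketch below assumes a four-eyes rule (the approver must differ from the requester); the field names are hypothetical.

```python
# Hypothetical sketch of a controlled, auditable score override.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ScoreOverride:
    customer_id: str
    model_score: float
    override_score: float
    justification: str          # required rationale, retained for audit
    requested_by: str
    approved_by: Optional[str] = None
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def effective_score(override: ScoreOverride) -> float:
    """The override only takes effect once a second person approves it;
    otherwise the model score stands (four-eyes control)."""
    if override.approved_by and override.approved_by != override.requested_by:
        return override.override_score
    return override.model_score
```

Each `ScoreOverride` record is kept permanently, so the audit trail shows who adjusted what, when, and why.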
Another consideration is decaying some risk scores over time. For example, if your Compliance team filed a number of SARs for a customer two years ago and that resulted in a high-risk score, should that still be the case today if the customer’s activity since then has been low risk?
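One common way to let old events fade is exponential decay with a chosen half-life, so a SAR filed two years ago contributes half the weight it did on the day it was filed. The two-year half-life below is an illustrative assumption, not a regulatory standard.

```python
import math  # not strictly needed here; 0.5 ** x suffices

def decayed_score(event_score: float, days_since_event: float,
                  half_life_days: float = 730) -> float:
    """Halve the event's contribution every `half_life_days` days."""
    return event_score * 0.5 ** (days_since_event / half_life_days)

# A SAR-driven contribution of 40 points, filed 730 days (two years) ago,
# now contributes only 20 points to the customer's score.
print(round(decayed_score(40, 730)))  # → 20
```

If the customer’s activity since the event has been low risk, the decayed contribution steadily shrinks toward zero instead of pinning them at high risk forever.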
The final issue is changing the model and assessing the impact before moving it into operations. Do you know how a change will affect the portfolio?
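A simple way to assess impact before deployment is a champion/challenger comparison: score the whole portfolio under both the current and candidate models, then count how customers migrate between risk bands. The band cut-offs below are illustrative assumptions.

```python
from collections import Counter

def band(score: float) -> str:
    """Map a 0-100 score to an illustrative risk band."""
    if score >= 70:
        return "high"
    if score >= 40:
        return "medium"
    return "low"

def migration(old_scores, new_scores):
    """Count customers moving between bands under the candidate model."""
    return Counter((band(o), band(n)) for o, n in zip(old_scores, new_scores))

# Toy portfolio of four customers scored by both models.
current_model = [25, 45, 80, 65]
candidate_model = [30, 55, 72, 75]
print(migration(current_model, candidate_model))
```

A large, unexplained shift into (or out of) the high-risk band is a signal to revisit the change before it reaches operations.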
Can third-party risk intelligence assist?
Will you be considering only internal data, which will give you information on your customers’ activities and insights into their relationships based on demographics and transactions? Or will you also be using third-party sources? These sources (e.g., World-Check or Dun & Bradstreet) bring useful insights ranging from sanctions lists to negative news. They can also give you insights into risky relationships that may not be apparent in your internal data.
What risk factors are most relevant to you in the areas of customer profiles, activity and relationships? Are there other considerations built into your models?
The term ‘financial crimes’ often brings to mind issues such as fraud, lottery scams or credit card skimming, but today there are far-reaching crimes that many fail to consider. This includes money laundering, human and drug trafficking, the financing of terrorism, and the bribery of public officials, all of which are much broader financial crimes that impact organizations, governments and even individuals. The sinister and disruptive nature of these crimes affects the most fragile economies at the highest level – and the most vulnerable persons at the lowest. It clearly impacts the reputation of countries and how their citizens are viewed internationally. But worst of all, when crimes such as these escape detection and punishment, they erode the sense of natural justice that most people feel.
Most organizations act reactively, always in a state of ‘chasing’ rather than taking real-time preventative measures, which makes detection and punishment unlikely. But things are changing: technologies such as artificial intelligence (AI) and machine learning are moving beyond academic theory and beginning to support initiatives such as the fight against financial crimes.
Why can’t organizations prevent financial crimes?
There are many reasons that organizations are unable to prevent financial crimes. The first is that the profitability objectives of key stakeholders may conflict with preventing these crimes. The second – and most critical – is that organizations work in silos; for example, assigning responsibility for detecting and preventing financial crimes to a single department instead of enrolling the entire organization in the fight. The third is that the data analytics approaches being taken are terribly outdated compared to the sophistication we are seeing from organized criminals.
Keeping pace with the criminals
A significant problem that many organizations have is that they tend to establish specific rules, procedures and policies that – while effective years ago – are now out of date. Criminals, however, have found ways of testing the boundaries, essentially reverse-engineering those rules and then breaking them. Businesses must adapt and develop more innovative approaches that allow them to learn from the dynamism of criminal activity, becoming increasingly sophisticated as the crimes do.
Organizations must also examine their cultural stance on crime and how that permeates throughout the business. Technology is often viewed as a silver bullet to compliance challenges, but ultimately the people on the frontline and those conducting investigations are key.
For example, if a new customer is being onboarded and there is reason for suspicion beyond the data that is being collected, do you feel a responsibility to say or do something? Or do you prefer to keep your head down and do the bare minimum required for your job? Regardless of the technology in place, if a company has employees who don’t feel that compliance is their responsibility, they will continue to have problems. Every employee should be enrolled in this responsibility and feel that they cannot turn a blind eye to financial crimes because, ultimately, they affect everyone.
Monitoring is a catalyst for compliant behavior
While technology is not the only way to address these challenges, it is certainly a big part of the picture. The most fundamental way that certain technology can help bring about a change in process is by increasing visibility into compliance challenges and into what is being done to address them. For example, with transaction monitoring technology in place, employees are aware that non-compliance with rules, standards and ethics is being actively, efficiently and effectively monitored. This makes it much more likely that they will behave accordingly, and that they will become more engaged in compliance activities.
A great example of using technology to enroll the entire organization in compliance efforts comes from a CaseWare Analytics customer, Coca-Cola Amatil (CCA). One of the largest bottlers of non-alcoholic beverages in the Asia-Pacific region, CCA implemented technology to monitor its purchasing card (P-Card), accounts payable (AP) and payroll systems. The solution automatically analyzed all of CCA’s transactions, and when an irregularity was detected it would auto-assign it to key personnel for action through workflows and case management functions.
Ray Armstrong, Group Manager of Security and Fraud Control at CCA, noted to us that this technology “was one of the catalysts for the policies and new procedures that ultimately led to the behavioral changes we needed to implement.” By adopting new technology, the company created a change in culture where everyone was aware that transactions were being reviewed and scrutinized.
Move from reactive to proactive
Technology is driving changes in how we detect early warning signs. Instead of waiting until a crime has actually been committed, it is possible to detect patterns that predict one. Banks are using this approach to increase customer risk scores and trigger closer scrutiny before money laundering takes place. This allows organizations to shift from simply detecting and reporting to being proactive and preventing.
With technology such as machine learning, software is not programmed to detect fraud and criminal activity – it is trained to do so. Based on the activity of specific customers, or of others in the same segment, it is possible to detect anomalies and high-risk behavior. A key benefit of machine learning is its ability to generalize: it can detect highly suspicious activities and transactions despite never having seen that exact pattern before.
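As a toy illustration of anomaly detection on customer activity, the sketch below flags transaction amounts that sit far from a customer’s typical behavior. Real systems train models such as isolation forests or autoencoders on many features; the z-score rule and threshold here are deliberately simple stand-ins.

```python
import statistics

def flag_anomalies(amounts, threshold=2.5):
    """Flag amounts more than `threshold` population standard deviations
    from the mean of the customer's history."""
    mean = statistics.mean(amounts)
    stdev = statistics.pstdev(amounts)
    if stdev == 0:
        return []  # perfectly uniform history: nothing to flag
    return [a for a in amounts if abs(a - mean) / stdev > threshold]

# A customer with small, regular transactions and one large outlier.
history = [120, 95, 110, 130, 105, 98, 115, 9_500]
print(flag_anomalies(history))  # → [9500]
```

Each flagged transaction would then feed an alert for a human investigator, whose resolution can in turn be used to retrain and refine the model.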
The human factor
Given the importance that technology such as artificial intelligence (AI) and machine learning will have in the future, there are naturally questions and debate around the role that humans will play in detecting financial crimes. While technology is certainly a crucial part of this activity, a human perspective will always be needed.
If the system generates an alert, for example, someone needs to investigate it and apply their knowledge and judgment to resolve it. That response is then fed back into the model, making it smarter over time. Making the machine smarter therefore requires smart decisions from skilled people: how well the machine learns is directly related to the quality of the information it receives from which to learn.
Upcoming trends of note
Analytics allow us to read much more into what is going on within our organizations, but it seems that criminals are always one step ahead. To combat this, there are several trends we should track over the coming years. The first is that the fight against financial crime will move toward being conducted entirely in real time, helping it become preventative rather than reactive. Much more AI and cognitive intelligence will also be deployed – not just for detecting fraud and other financial crime, but for using computers and robotic automation to perform a growing share of investigative actions automatically, with these technologies trusted to make well-informed decisions.