Exec Guide to Using Machine Learning and AI for AML
Increasing use of machine learning (ML) and artificial intelligence (AI) in the detection and prevention of financial crimes is providing financial institutions the opportunity to perform massive computations and detect patterns that were previously undetectable with rules-based analytics.
In this webinar you will learn:
- How data science uses models and patterns to detect anomalies
- How responses to these anomalies can be used to prevent future suspicious transactions and reduce false positives
This webinar is designed for senior compliance executives who are required to have sufficient knowledge of data science to enable a more data and analytic-driven approach to fighting money laundering and other financial crimes.
Full Webinar Transcript
Christine: Our presenters today are Kal Ghadban, director of analytics and data science consulting, and Josip Psihistal, data scientist for CaseWare Analytics. So we’re going to start the webinar now. And without further ado, I hand over today’s presentation to Kal.
Kal: Thank you, Christine. Hello everyone. Again, my name is Kal Ghadban, and I run the analytics and data science practice here at CaseWare, really excited to have everyone on this webinar to talk about artificial intelligence and the impact that it’s having on AML today. I want to set the stage by first, as you could see, the definition slide. I want to set the stage to first discuss and talk about artificial intelligence and machine learning defined, you know.
Discussions of artificial intelligence have created a certain amount of unease among those who fear it will quickly evolve from being a benefit to human society to taking over the world. However, we’re not all operating from the same definition of the term. And while the foundation is generally the same, the focus of artificial intelligence shifts depending on the entity that provides the definition. I’m not going to read the definitions verbatim.
I’m sure you can see that on your own. But I think what’s important is to at least take a look at what differentiates the definition of artificial intelligence from human intelligence, which is really natural intelligence compared to machine intelligence. And as we talk a little bit more about this in the slides to come, you’ll start to better understand where we fit in the whole ecosystem around artificial intelligence, and where the artificial intelligence systems fit.
And one of the subsets of artificial intelligence, which you’re going to hear a lot about today and in the industry generally, is machine learning: you have machine intelligence, so how do you train the machine to better understand data and look at data in different ways? So what I want to do is first start off by talking about the evolution of AML solutions. As you can see on this slide, we’re calling it the evolution of AML solutions for a reason.
You know, back when AML solutions first started, a lot of organizations were really looking at abiding by government regulations and putting processes in place. And I want to talk about why good process alone can’t make AML stick anymore. Government agencies, as well as financial services organizations, primarily focused on rule-based transaction monitoring to fight money laundering. These types of solutions focused only on defensive capabilities such as process improvements in regulatory compliance, risk reporting, and mitigation.
So the disadvantage is that rule-based AML solutions alone generate a large number of false positives and only detect what you’re looking for. We call that the known factor in the industry: you only know what you’re looking for based on those rules, and obviously a lot of things can slip through the cracks, especially with the evolution of machine learning and the digital data footprint of a lot of today’s criminal organizations.
So I want to talk a little bit about the importance of data and what we call the three V’s. The variety, velocity, and volume of data have forced organizations to rethink their AML technology strategies and evolve from government-regulation, rule-based solutions to more sophisticated, AI-based solutions. So, if you look at the evolution of these solutions: it started with government regulations, where the focus was on customer data and RFID tags; then you start getting into Hadoop, which is more around big data and unstructured data; then mobile data started to creep into our data environment, along with point-of-sale and geospatial data, as you see on the screen; and then it was driven by the emerging markets.
Now you’ve got everyone on cell phones. You’ve got people leveraging banking apps and money service apps. So things are changing based on the data that’s driving those changes. As well, you start seeing the velocity of information, in a sense, really take over our daily lives. You’ve got machine data starting to creep into everything now, time series data, and what we call structured and unstructured data. And solutions evolved.
AML solutions evolved to leverage a lot of this data, and we’re still trying to sift through a lot of the information. And as time goes by, you start looking at social networking, and text, and potentially even weather data. Some of you may be looking at that and saying, “Okay, well, that’s quite surprising,” that we would use social networking data, text data, or even weather data for AML. But to most people’s surprise, you can leverage a lot of this data. Always try to look at it from that perspective: data is driving a lot of the insights that you’re going to be getting. So why is this important to you? As long as we adhere to government bodies, what does it matter? At the end of the day, everyone’s like, “Okay, well, I’m abiding by government regulations. Why do I need to implement AI?”
So in the next slide, what we’re going to talk about is the evolution of financial crime. It’s very important to really look at it from the perspective of the criminal and, you know, criminal organizations out there who are now looking at the evolution of data and looking at the evolution of the solutions that are happening and what’s at their fingertips.
Evolution of AML Systems
The data itself is not just at our fingertips, meaning the AML and financial crime experts like yourself; it’s also at the fingertips of the criminals. They have access to a lot of this information. And they’ve deployed their own AI systems to intrude into your systems and into your customers’ daily lives through identity theft and account takeover. They employ sophisticated techniques to do so. And as you can see there, these are pretty sophisticated techniques where they try to mask their activities by behaving like normal, law-abiding citizens. And by doing so, they fly under the radar. This is where the known factor I talked about comes in: looking only for what you know won’t catch these types of activities. Once they actually infect and start probing your systems, they start leveraging these AI systems to, in a sense, take over certain things and behave like your customers and your internal processes.
And then they go even further. When they take over accounts and steal identities, they also steal information. So it’s very important for AML organizations and compliance departments to really start thinking of data as an asset, and thinking about how to leverage machine learning or artificial intelligence systems to prevent criminals from infiltrating their systems. This impacts the bottom line: there’s brand exposure, and there are even larger fines. Everyone’s thinking, “Hey, as long as I’m abiding by government regulations, I’m okay.” But when these criminals are infiltrating your environment and taking over accounts, it’s a little bit more than that.
So, what I want to do in the next slide is talk about the interaction between artificial intelligence and humans. What is the best way for AML and compliance departments to move forward and leverage artificial intelligence systems? One of the most important things to understand is that we do not program AI to detect threats. As you could see, we train it to do so. And if we do that, then that really requires human interaction with these AI systems.
And that means that people like yourselves within your organizations who are subject matter experts and leaders within the AML compliance space and even the fraud space need to be part of the whole process in regards to training these AI systems to track and to be able to recognize these AML and suspicious behavior patterns or abnormal behavior patterns around transactions. So it’s very, very important to recognize that it’s not a system in itself. It requires human intervention. It requires the subject matter experts to work with these artificial intelligence systems.
So the next point that I want to make is that in doing so, what’s happening now is that the systems are designed to detect the real time unknowns. So we use these machine learning algorithms. And Josip, who’s our resident data scientist, will talk about, you know, the how. One of the things that we were focusing on here is, you know, what are we doing about it, basically.
And, you know, I’ll cover that in a later slide. But the next point is also important, which I’ve stressed and can’t stress enough, as you can see there: these systems learn and interact to provide expert assistance. That is really, really important in the sense that machine learning is called machine learning for a reason: it’s our expertise and your expertise working together in conjunction to make sure that the AI system is used in the most appropriate way and the most optimal fashion. So now we know what it is, where it’s being used, and the data it’s leveraging. Then we ask, “Okay, well, what do we do about it, and how?”
AI Capabilities in AML
And the next slide I’m going to talk about is the what: what are the capabilities of artificial intelligence in AML being leveraged today, and a lot of the buzz terms you’ve most likely heard about. I’ll talk a little bit about each of those, I would say, more strategic capabilities. So I’m going to drill down one more level to introduce several AI capabilities being used today by organizations such as yours. If you haven’t started thinking about these, I would say it’s very highly recommended that you do. And if you are thinking about it, take a look at some of the takeaways; we’ll talk at the end of the session about what you can do to implement these types of capabilities.
So the first capability is anomaly detection. It’s an advanced technique used to detect behavior that doesn’t fit within the normal data profile, as we call it. It’s pretty self-explanatory in the sense that anomaly detection is really about looking for anomalies that fall outside the normal data profile. People are behaving normally. They’re law-abiding citizens. They’re transferring money in the right way. They’re doing all the things that they need to do. And you have these criminal organizations, these financial criminals, who are looking at it and saying, “Okay, you know what, I know how certain people are behaving. I’ll try to behave the same way.”
What anomaly detection does is identify that. It separates the normal from the not-so-normal, including the unknowns associated with that. The other capability in artificial intelligence, which is also a subset, is what’s called suspicious behavior monitoring. Now, everyone’s heard of transaction monitoring and suspicious activity monitoring, where you’re basically looking at transactions. But I want to stress the behavior aspect of this, because that’s really what this is about. This is not rule-based transaction monitoring looking at suspicious transactions, where you’re looking at the known factor.
What suspicious behavior monitoring does is focus on what we call known labels, such as fraudulent behavior. We label something as fraudulent behavior, meaning that we understand what fraud is and we know what the behaviors are within our own systems. I’m sure everyone on this call has gone through that: “Hey, we know what the use cases are associated with it.” So basically, what you’re doing in this case is identifying certain scenarios around fraudulent behavior and employing these types of suspicious behavior monitoring capabilities.
So the next capability, if you haven’t heard of it yet, is what we call fourth-generation artificial intelligence systems. What this is really leveraging is what are called virtual assistants. You’ve seen them today on your mobile device: you’ve got Google Assistant, you’ve got Siri, there are so many out there being deployed in the market.
What separates cognitive capabilities, if you will, is that they’re based on context, right? Such as cognitive virtual agents that leverage natural language processing to provide compliance [inaudible 00:16:40.335] insight into changes in regulatory requirements. That’s the context we’re talking about: we are also training these virtual assistants to better understand regulatory requirements and provide you, the SMEs and the leaders in the space, with answers. You ask these virtual assistants a question and get an answer back within the context of the question you’re asking. And that’s really important.
That’s where the separation is between, say, a Google Assistant or Siri. The fourth capability, again a fourth-generation capability, is robotic process automation, or RPA, as you may have heard it called in the industry. What it does is automate repetitive manual activities such as remediation and workflow processes. Now, a lot of people ask, “Well, we already have automated processes and whatnot.”
The difference here is that this has an artificial intelligence capability behind the scenes, where it’s also learning, and it’s leveraging machine learning to do so. All of these capabilities have two things in common, two dimensions in common: one of them is machine learning, as you’ve heard throughout the webinar, and the other is data. You need those two to be able to deploy these types of artificial intelligence capabilities. What I’m going to do right now is hand it off to Josip, our resident data scientist, who will talk about how you can deploy a lot of these capabilities and explain in a little more detail some use cases that we’ve deployed at certain customers. Josip, I’ll hand it off to you. Thank you, everyone.
Our Machine Learning Approach
Josip: Okay, thank you very much. Thank you, Kal. My name is Josip Psihistal, and I am the data scientist here at CaseWare. And today, I will try to explain to you sort of the mumbo jumbo behind all of this AI and machine learning stuff. The theory behind it is actually not that difficult. The actual implementation has a lot of small nuts and bolts behind it, but we will not be getting into any of that.
Hopefully, what you can take away from this is just, you know, if you’re in a conversation about machine learning, you are able to listen, and you are able to follow the conversation. You can even add a little bit. So as long as you just know generally what’s going on, it’s going to help your knowledge and, you know, your ability to participate a lot. So let’s get into this.
The first thing we’re going to look at is the entire process of a machine learning model being implemented. What’s very important here is that this part right here is the actual model. What’s important to notice is that there are actually a lot of steps, and a lot of these require human intervention. It’s not just an out-of-the-box approach where you buy a piece of software, plug it in, and there, it’s magic.
A lot of work goes into this: maintainability and everything. So we’ll go a little bit step by step, and this part we’ll go into in greater detail later. For the exploratory analysis, we will get a big dump of data from our client. And even before we start considering anything to do with machine learning, we’re going to do what’s called exploratory analysis. Basically, we’re going to be looking at the data and seeing if there are any patterns already there. And obviously, for us as data scientists, there’s a bit of a knowledge gap here. We can see numbers, and we can see charts and everything, but it’s very important for us to work with the client one on one and ask questions: are these supposed to be here, are these graphs supposed to be this steep, and little things like that.
The next thing we’re going to do is what’s called data cleansing. And this is actually a two-fold process, so to speak. The first part, very simply: if the data has, as a quick example, a bunch of null values, it’s just not going to work. Another one: machine learning models actually only work with numbers. They don’t work with text or anything, so those fields have to be converted into a usable format. And, yeah, that’s pretty straightforward, I would say.
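As a rough illustration of those two cleansing steps, here is a minimal sketch in Python using pandas. The column names, the sample values, and the median fill strategy are all invented for illustration, not taken from any real pipeline:

```python
import pandas as pd

# Hypothetical raw transaction data showing the two issues described:
# a missing value and a text column a model cannot consume directly.
raw = pd.DataFrame({
    "amount": [120.0, None, 75.5],
    "channel": ["atm", "online", "atm"],
})

clean = raw.copy()

# Step 1: deal with nulls. Here we fill the missing amount with the median;
# in practice the right strategy depends on the field and the client's data.
clean["amount"] = clean["amount"].fillna(clean["amount"].median())

# Step 2: convert text to numbers via one-hot encoding.
clean = pd.get_dummies(clean, columns=["channel"])

print(clean.isna().sum().sum())   # 0 -- no nulls remain
print(sorted(clean.columns))      # ['amount', 'channel_atm', 'channel_online']
```

Every column the model sees is now numeric, which is the whole point of this step.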
The second part, the really important part here, is what’s called feature engineering. When you’re training the model and teaching it to do what it does, it’s really only looking at what you’re giving it. And sometimes, what seems very, very obvious to humans is not going to be as obvious to a machine looking at numbers. A perfect example would be, let’s say, dates.
You have a bunch of transactions with some timestamps. So the machine learning model will look and say, “Oh, okay, well, here are some times where this occurred.” But a human would say, “Hey, why are these transactions three seconds apart and those transactions two days apart?” To a human that would be extremely obvious, but maybe not so much to a machine.
So what we would do is feature engineer: we would add an extra column of data for, let’s say, just the time between transactions. And that just makes it really obvious to the machine. I mean, the numbers are already there the entire time, but sometimes we just have to make it a little more obvious to make sure it’s picking up on it.
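That extra column can be sketched in a few lines of pandas. The account IDs and timestamps below are made up; the point is the engineered `secs_since_prev` feature that makes the three-second gap explicit:

```python
import pandas as pd

# Hypothetical transaction log; account and timestamp values are invented.
tx = pd.DataFrame({
    "account": ["A", "A", "A"],
    "timestamp": pd.to_datetime([
        "2021-03-01 10:00:00",
        "2021-03-01 10:00:03",   # 3 seconds later
        "2021-03-03 10:00:03",   # 2 days later
    ]),
})

# Engineered feature: seconds since the previous transaction on the same
# account -- the gap a human spots instantly, handed to the model as a number.
tx["secs_since_prev"] = (
    tx.sort_values("timestamp")
      .groupby("account")["timestamp"]
      .diff()
      .dt.total_seconds()
)
print(tx["secs_since_prev"].tolist())   # [nan, 3.0, 172800.0]
```

The numbers were always recoverable from the raw timestamps, but now the gap is a single value the model can learn from directly.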
And the third part, which is actually the most important, is the actual model, the ground truth model. Because this is so important, I will spend the next slide talking a lot more about it. But basically what we’re doing is creating a baseline, and this is the model; this is what’s supposed to be happening. Future numbers are fed through here after it’s been trained, and then you get result sets back.
And since I will be talking a lot more about that later, we won’t go into too much detail here. The next one would be applying various models. Very rarely do you have one machine learning model doing everything. It’s usually a suite, an ensemble of a bunch of them. Either you have, say, five models and you take the average score, or it’s sequential: one model actually teaches another model, which teaches another, and so on.
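The averaging flavor of an ensemble can be sketched in plain Python. The three scoring functions below are invented stand-ins for trained models, and the field names and cutoffs are made up for illustration:

```python
# Each "model" maps a transaction to an anomaly score in [0, 1].
# These are toy heuristics standing in for trained models.

def score_by_amount(tx):
    return min(tx["amount"] / 10_000, 1.0)            # large amounts score higher

def score_by_hour(tx):
    return 1.0 if tx["hour"] < 6 else 0.1             # overnight activity scores higher

def score_by_gap(tx):
    return 1.0 if tx["secs_since_prev"] < 5 else 0.2  # rapid-fire activity scores higher

ENSEMBLE = [score_by_amount, score_by_hour, score_by_gap]

def ensemble_score(tx):
    # The final score is simply the average of the members' scores.
    return sum(model(tx) for model in ENSEMBLE) / len(ENSEMBLE)

tx = {"amount": 9_000, "hour": 3, "secs_since_prev": 2}
print(round(ensemble_score(tx), 2))   # 0.97
```

Averaging is the simplest combination rule; real deployments often weight the members or chain them, as described above.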
Basically, it’s not just one. Usually, these are very custom for the actual solution, to get where you want to go. And the last one, obviously, is results analysis. Because all these AI models are really doing is looking at numbers. Really, all it is is simple math. It’s so high-dimensional that when humans look at it, it’s very difficult to comprehend. But if you actually took everything and flattened it out, it is actually readable.
So when people call it a black box, so to speak: “Oh, it’s just magic. You put data in, results come out, and nobody knows what’s going on.” That’s actually not true. You can flatten it out and expand it and see what it is. The problem is there are so many numbers and so many dimensions to it that it’s going to overload a human’s brain. So what we have to do is take these numbers and make them human-readable, so to speak. And once you get all these results and number sets into human-readable form, the process is complete and you can start deployment.
Another big one here in results analysis is the degradation of a model. People say, “Oh, the model is broken. The model is old.” In fact, I mean, [inaudible 00:24:33.634] against, but we won’t get into that. A model, basically, once it’s complete, it’s complete. It’s just going to sit there. And until you actually retrain it, it’s not going to change.
What is actually happening is that the data being fed to it is changing. So, let’s say you created a model 20 years ago and you want to use it today. While the model would still work, the problem is the actual data coming into it has changed immensely. Just little things, you know: inflation, where these transactions are taking place, there’s a lot more in rural environments.
So, let’s say it’s looking at numbers from 20 years ago, and just simply due to inflation, it’s like, “Whoa, all these numbers are spiking off the chart. Why are all these transactions so high?” So that’s what we mean by data degradation. And it’s actually not very difficult to counteract; again, I’ll go into that in a later slide.
Basically, you just retrain it. It’s the same as a child in school. Grade three math will get you so far, but they have to learn something more advanced, so in grade seven, grade eight, grade nine, you just teach them more math. And you don’t get rid of the base knowledge that you have. So that’s basically the entire solution, and this is how it’s being deployed. And now, what we’ll do is we’ll go into sort of the ground truth and some actual examples of how this is actually working.
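That degradation-and-retrain idea can be illustrated with a deliberately tiny sketch: a z-score “model” fit on old transaction amounts flags ordinary present-day amounts once prices inflate, and refitting on recent data restores sensible behavior. All the numbers and the z cutoff are invented:

```python
import statistics

def fit(amounts):
    # "Training": just remember the mean and spread of what we saw.
    return statistics.mean(amounts), statistics.stdev(amounts)

def is_anomaly(amount, model, z_cut=3.0):
    # Flag anything more than z_cut standard deviations from the mean.
    mean, std = model
    return abs(amount - mean) / std > z_cut

old_amounts = [100, 110, 90, 105, 95]     # "20 years ago"
new_amounts = [200, 220, 180, 210, 190]   # same behavior, inflated prices

stale = fit(old_amounts)
print(is_anomaly(210, stale))       # True  -- normal spending looks anomalous

retrained = fit(new_amounts)        # "teach it more math": refit on recent data
print(is_anomaly(210, retrained))   # False -- back to normal
```

The model itself never changed; only the data did, and refitting on a recent window is the counteraction.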
So the first one is what’s called the ground truth. The reason this is a slide by itself is because for all the future slides that are coming up, this is a very, very important term. Some of you may know it, some of you may not. So I’m going to spend a little bit of time here just so we all know what’s actually going on. What’s happening, we’ll use, you know, fraud and AML as an example here. But one of the big challenges that we as data scientists face is, usually when you’re training a normal model, it’s, you know, what animal is this?
When you’re training it, you give it 50 pictures of a dog and 50 pictures of a cat, you balance them out, and it sort of learns them. And then when you show it one, it’ll give you a [inaudible 00:26:37.123] “Oh, I think this is a dog. I think this is a cat.” Well, that’s all fine when you have pictures of each. The problem we’re getting into with AML and fraud, for example, is that the percentage of actual fraud versus normal is so low that we can’t really use the normal techniques anymore.
So, let’s say you have a million transactions; maybe about four of them are fraud. You’re dealing with hundredths of a percentage point here. So we use a different technique, and that’s called the ground truth. What it’s doing is assuming that all the data it’s seeing is actual normal transactions. So it’s going to see them all and group them here in a little group: “Okay, everything I see is going to be normal.”
And on the previous slide, I talked about data cleansing. In this phase, it’s very, very important that you remove all sorts of anomalous behavior or actual fraud, because with the method we’re using here, the ground truth, the model is zeroing in on what a normal transaction is. If you’re feeding it vast amounts of actual fraudulent transactions, the model is slowly going to start to learn that fraudulent transactions are normal. And that’s going to be a big problem. So it’s very important.
We’re working with our client to get everything there. But what’s basically going to happen is it’s going to group all of these into a nice little clump here, and it’s going to give each one a score once it comes out. We have this normal ground truth. This is actually the training data here: you just feed it normal tabular data, and this is how it learns.
But once the model is complete, you can either feed it, you know, hundreds at a time, or you can feed them one by one. And each one will get a score. And what it’s doing here is it’s comparing it to this and how comfortable it is that, “Oh, you know what, this is a normal transaction.” And if it does end up, you know, popping out here, so to speak, then it will get a higher score. And this is what we’re talking about, the ground truth.
It’s basically the basis. And it’s very important to realize, you know, these are actual normal transactions that it’s trying to memorize. So we’ll get into some examples here, something a little bit more concrete and interesting.
So the first one we’ll do is just a very simple anomaly detection model. And so, here we have the ground truth right here. And what’s popping up here is sort of an outlier. And this is what we’re trying to look for.
And as I said before, when we’re training, we have to actually tell it what’s going on. Kal went into anomaly detection and actual AML detection before. What’s important to understand about anomaly detection is that you’re using what’s called unsupervised learning. And in the machine learning world, the AI world, these terms are very, very important.
There are really only two ways to train a model: one would be supervised and one would be unsupervised. The very simple difference is that with supervised, you know exactly what you’re looking for. As I said before with the cats and the dogs: every picture of a cat you show it, you say, “You know what, this is a cat,” and you actually write down the word cat, and when a picture of a dog is shown, “You know, this is a dog.” On the other hand, what we have here is what’s called unsupervised.
This is actually a very powerful tool because what it does is you’re not telling the algorithm what to look for. You’re saying, “You know what, why don’t you look at this and tell me what you see.” And it’s sort of, you know, going back to the education, sort of what I was saying before is supervised would be normal high school. There’s a set curriculum and you do this, you do this. And sort of unsupervised would be the, I don’t know, Montessori way of doing it. You let the child sort of figure out what they want to figure out. And what’s going to happen here is one of the methods we use in the unsupervised is what’s called an autoencoder.
It’s not vital that you understand this deeply, but I’ll just give you a brief introduction to what’s happening. So you give it a bunch of rows of data here, and it scrunches them down into a compressed, readable format. Here, two dimensions are being shown. It could be 10; these rows can have hundreds of values, thousands sometimes, and it’s reducing them down here. One of the reasons we actually use two is so it’s human-readable afterwards. A lot of people are worried about the “black box,” the explainability of the system. So we use either two or three, so they’re very easily graphable.
If you have 10 dimensions, as I said before, a human can’t really understand it. And so it’s going to go out here, and it’s going to expand and try to reconstruct what it knows. That path it takes, and all these numbers and weights, is what the actual model is, and that’s what’s being predicted. So what’s going to happen is you’re going to feed it data, and it’s going to give each row a score.
The further away it is from here, and the more difficulty the model is having understanding this transaction, the greater the score will be. As we’re training it, these numbers are actually arbitrary. You can say 0 to 100, but they’re really quite arbitrary. What is very important is that you’re going to set what we call thresholds. As you can see here, about 98% of them are clumped very neatly down here. And then the little ones: maybe they were done a little earlier than the usual time, maybe they were a little larger, and they still get scores. These are just irregular; you wouldn’t go so far as to call them anomalies. And then you would set a line, a borderline, here: anything above it, let’s flag it as an anomaly and take action on it.
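A hedged, minimal stand-in for the scoring just described: compress each row to 2 dimensions, reconstruct it, and treat the reconstruction error as the score, with a percentile threshold. For simplicity this sketch uses PCA via numpy’s SVD, which behaves like a two-unit linear autoencoder; real deployments would use a trained nonlinear autoencoder, and all data here is synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Ground truth": 500 normal rows of 10 engineered features each (synthetic).
normal = rng.normal(0, 1, size=(500, 10))

# "Train" the compression on normal data only, as described above.
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
encoder = vt[:2].T                                # 10 dimensions squeezed to 2

def score(rows):
    z = (rows - mean) @ encoder                   # scrunch down
    recon = z @ encoder.T + mean                  # expand back out
    return np.square(rows - recon).sum(axis=1)    # reconstruction error

# Threshold: flag roughly the top 2% of scores seen on normal data.
baseline = score(normal)
threshold = np.percentile(baseline, 98)

outlier = np.full((1, 10), 8.0)                   # one clearly odd row
print(bool(score(outlier)[0] > threshold))        # True -- it gets flagged
```

The absolute scores are arbitrary, exactly as noted above; only their position relative to the threshold matters.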
And here’s an actual example of what we were going through. Using certain techniques, like, let’s see here, if this is the date, you can actually go into each one and break it down and figure out more or less why the model is deciding what it decides. So, in this case, we had a transaction and the model was predicting it: “Oh, this transaction took place in April or May-ish,” with a little bit of November, December, but it was pretty confident it was here.
The problem was the actual transaction had taken place in December. And so this error gap, this is what’s going to be coming up here. What most likely happened here is simple Christmas shopping: it was probably a company where most of the transactions occurred around tax time, so the model wasn’t expecting this sort of user, so to speak, to be making transactions in December.
So again, this is not fraud per se; it’s just anomalous behavior. If it was fraud, maybe one of the guys working there took the credit cards out: “I’m gonna buy my kids some presents.” Maybe it was just Christmas bonuses and the amounts were larger than usual. But what you can see here is that the model was having difficulty relating to this. And now that we have this model that can sort of understand what’s going on, and what’s supposed to be going on, we can actually start targeting specific actions.
The next thing we're going to do is specifically target fraud. And now we're moving back into supervised learning. One advantage I mentioned before is that models can be stacked on top of one another, so the basis for this will be the unsupervised model we had before: you have your model A, which is the unsupervised autoencoder, the anomaly detector. And we're going to use a technique called transfer learning, which is one of the big buzzwords in AI right now. What that means is that instead of taking model A, taking its results, analyzing them, and then running them through a second model, you're actually adding on to model A's knowledge.
So here, you would have an unsupervised model based on all the past data, and then you would simply add labeled fraudulent cases and treat this phase as supervised. You'd be plugging in actual cases of fraud, going through more or less the same data, and using all the knowledge the model has already accumulated up to this point.
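As a rough illustration of this stacking idea, here is a toy Python sketch — not a real autoencoder. "Model A" learns a profile of normal behavior from unlabeled history, and "model B" reuses A's learned representation, fitting only a decision cutoff from a few labeled fraud cases. All class names and data are invented for illustration:

```python
class ModelA:
    """Unsupervised: learns the 'normal' profile of each feature."""
    def fit(self, rows):
        n = len(rows)
        self.means = [sum(col) / n for col in zip(*rows)]
        self.stds = [max((sum((x - m) ** 2 for x in col) / n) ** 0.5, 1e-9)
                     for col, m in zip(zip(*rows), self.means)]
        return self

    def anomaly_score(self, row):
        # How far this row sits from normal, summed over features
        return sum(abs(x - m) / s
                   for x, m, s in zip(row, self.means, self.stds))

class ModelB:
    """Supervised head: reuses Model A's representation, fits only a cutoff."""
    def __init__(self, base):
        self.base = base  # transfer: keep A's knowledge, don't relearn it

    def fit(self, rows, labels):
        scores = [self.base.anomaly_score(r) for r in rows]
        fraud = [s for s, y in zip(scores, labels) if y == 1]
        legit = [s for s, y in zip(scores, labels) if y == 0]
        self.cutoff = (max(legit) + min(fraud)) / 2  # midpoint separator
        return self

    def predict(self, row):
        return 1 if self.base.anomaly_score(row) > self.cutoff else 0

history = [[100, 2], [110, 2], [95, 3], [105, 2], [98, 3]]  # unlabeled past data
a = ModelA().fit(history)
labeled = [[102, 2], [97, 3], [500, 9], [450, 8]]           # a few known cases
b = ModelB(a).fit(labeled, [0, 0, 1, 1])
```

The point of the design is that model B never re-learns what "normal" looks like — it inherits that from model A and only needs a handful of labeled examples on top.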
You can create a new model that is able to target specific things. This is an example from one of the models we made before — the unsupervised anomaly detection model, run over a series of time. This is after about 10 minutes of learning, and this is a couple of hours after.
What's important here is that every red dot was an actual transaction and every blue dot was an actual fraudulent transaction. These were superimposed afterwards — that matters, because as it was learning, the model didn't know which were the fraudulent cases. These are cash-ins and cash-outs, and it formed its own groupings. But you can see the unsupervised model was already able to divide them very cleanly into groups that it felt comfortable with.
There is a little group down here — this is, I believe, transfers to accounts. And the actual fraudulent cases here are nice and neat. The model noticed that these transactions all seemed very, very similar. It didn't know they were fraud, but it grouped them naturally, here and here as well. These it had a bit of trouble with, and this whole chunk here it was having trouble with.
So now that the model knows all this, we would teach it that these blue dots are actually what we're looking for. It would segregate them, and then you'd be able to pick up actual fraud quite easily using this model. This is one of the ones we went through, and you can see here the confidence factor. It's very important to understand that with these machine learning algorithms, you're not getting zero or one — it's not binary, "this is fraud, this is not fraud." It's a scale, from 0 to 100 or 0 to 1,000, and you have to choose a line and say, "Okay, anything over this, we should take action on." This one, as it happened, the model was very, very confident about and knew what it was.
Using transfer learning and all the models we built before, we now have a model that can predict this. And when these are put into production, they run in real time. Once the model is running, you leave it sitting there and keep asking it questions, feeding it one transaction at a time: "What about this one? What do you think?" and it gives you an answer back. A lot of these answers come back in milliseconds, so it's very real-time capable. And if one does come out as, "Oh, this is a big one," you would then put it through the rest of your regulatory process and decide what to do with it. The model's job is basically done at that point; it gave you your scoring. You can run it through another model, you can have a human check it, you can apply some rules to see what was happening. There's a whole range of processes that can follow.
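A minimal sketch of this serve-and-route loop might look like the following. The scoring function here is a stand-in for the trained model, and the field names, threshold, and data are assumptions for illustration:

```python
def score(txn):
    # Stand-in model: in production this would be the trained detector
    return min(txn["amount"] / 1000.0, 1.0)

def route(txn, threshold=0.8):
    """Score one transaction and decide what the pipeline does with it."""
    s = score(txn)
    if s > threshold:
        return ("review", s)  # hand off: second model, human, or rule check
    return ("pass", s)

# Transactions arrive one at a time and each gets an answer back
stream = [{"id": 1, "amount": 120},
          {"id": 2, "amount": 950},
          {"id": 3, "amount": 40}]
decisions = [route(t) for t in stream]
```

The model only scores; everything downstream of the "review" branch — a second model, a human analyst, a rule base — is a separate process, exactly as described above.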
The next one is user behavior. Okay, we're going to go into user behavior here. What we're going to do now is push the boundaries of AI and machine learning. These are very recent techniques that have come from other fields — online shopping was a big one.
A lot of engineers were just playing around and seeing what happens, but these techniques are slowly being fed into the AML and fraud space. This is what we'd consider predictive analysis. On the past slides, transactions were analyzed in a vacuum, so to speak. What we're going to do now is extend this: take these models and target very specific users. This particular person — what was he actually doing?
So you can take his transactions and apply the model to them, and you'll get these predictive graphs. These are his transactions over time, and this, basically, is what he should be doing a couple of months down the road.
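The "expected future behavior" idea can be sketched with a plain least-squares trend — far simpler than the models described here, but the same shape of reasoning. The deposit data and the 50% tolerance are illustrative assumptions:

```python
def fit_trend(values):
    """Least-squares line through (0, v0), (1, v1), ...; returns (slope, intercept)."""
    n = len(values)
    xs = range(n)
    mx = sum(xs) / n
    my = sum(values) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, values))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def is_unexpected(history, t, observed, tolerance=0.5):
    """Flag if an observed value deviates from the projection by more than 50%."""
    slope, intercept = fit_trend(history)
    expected = slope * t + intercept
    return abs(observed - expected) > tolerance * expected

deposits = [100, 110, 120, 130, 140]  # steady monthly growth
# The month-5 projection is 150: a 155 deposit fits the trend, a 600 spike does not
```

The win over a rules-only approach is reaction time: the deviation is flagged the moment it arrives, rather than months later in a periodic review.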
So take, say, withdrawals from a bank account — or deposits in this case. He's slowly depositing at this rate, at this rate, at this rate. Well, if you know what he's supposed to be doing, what happens if it suddenly spikes, or you get a transaction that does not fit this graph? Normally, you'd find out six months later and say, "Oh, it looks like he did something six months ago that maybe we should have caught." With the predictive approach, if there is a slight jump, it's going to show up right here, and the reaction time is going to be a lot quicker than it would have been otherwise. The other technique I want to talk about — the one that's really pushing the boundary — is what's called a GAN, a generative adversarial network.
This is more or less the future of rule-based analytics. Normally, you have a specific rule for an action taking place — say, over 100,000: people who withdraw large amounts of money should be flagged. The problem is that people are picking and deciding these rules, whereas the criminals are looking for loopholes — gaps that no existing rule has ever covered.
In the GAN setup, you have one model that's the normal predictive model — it detects fraud — and you train it against a second model whose job is to create fraud. You have these two models battling each other: the first model is trying to stop fraud, and the second is trying to create it. They teach each other as they go, because as the second model slowly learns techniques to break the first one, you feed that back in and say, "Hey, this is how you were discovered." What this is doing is teaching us fraudulent techniques that have never existed before. Will they ever come into existence? We don't really know. But because that loophole exists right now, there's a good chance it can be caught.
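A real GAN pits two neural networks against each other; the toy loop below is only a caricature of that adversarial feedback, using a single threshold rule in place of networks. Every number in it is an illustrative assumption:

```python
import random

random.seed(0)

detector_threshold = 100_000  # starting rule: flag amounts at or above 100k
generator_ceiling = 100_000   # the "generator" learns to stay below detection

for _ in range(20):
    # Generator: propose a fraudulent amount it hopes will slip through
    fraud_amount = random.uniform(0.5, 0.99) * generator_ceiling
    caught = fraud_amount >= detector_threshold
    if caught:
        # Feedback to the generator: "this is how you were discovered" — aim lower
        generator_ceiling = fraud_amount
    else:
        # Feedback to the detector: a loophole exists below the rule — tighten it
        detector_threshold = fraud_amount

# After the loop, the detector's threshold has been pushed well below the
# original 100k rule, covering loopholes no analyst explicitly wrote down.
```

The point of the back-and-forth is exactly the one made above: each side's failures become the other side's training data, surfacing attack patterns before a human has ever seen them in the wild.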
So instead of reacting to everything — "Here's an action taking place, let's create a rule for it" — we now have the ability to create models that discover rules for the future: patterns that will probably show up later, so we can create safeguards against them now. That's something we really haven't had until now. So those are basically the three phases — past, present, and future, so to speak. I hope this explanation helped you understand a little bit. I know a couple of these went into detail, but this is where the field is going, and this is what the data is actually doing to your models.
Kal: Okay, thanks. Thanks, Josip. It's Kal back again. We're going to talk a little bit about some takeaways from the webinar. The first point I want to make is that no artificial intelligence will take over the world — at least not now, and not for the foreseeable future. It still requires a lot of human interaction, at least within our domain around AML.
The second point is: yes, implement AI and machine learning technologies to stay ahead. It's important to do so today, because the criminals are. If you're not employing these systems, you're going to be left behind, and it's going to impact your bottom line, impact your brand, and potentially bring a lot of fines.
The third point: we talked initially about rule-based technologies and taking a reactive, defensive approach to fighting financial crime. That's not to say you should stop using rule-based technologies — you still need to leverage them.
So if anyone on the call is asking, "Okay, should we just go to AI and machine learning?" — no. You need an end-to-end solution where you leverage rule-based analytics and also augment them with artificial intelligence systems, so you get a holistic view of all the "known" information you're looking for as well as the unknown information, and it comes together to give you better insights and intelligence. Rule-based alone is really a rearview-mirror kind of look.
You're always looking in your rearview mirror because you're trying to catch up. The last point is to take a proactive, preventative approach — go on the offensive by using AI to fight financial crime. We also talked about the predictive capabilities. But even if you're not leveraging those predictive capabilities yet and are only using the anomaly detection and some of the fraud detection capabilities, artificial intelligence will allow you to employ sophisticated preventative techniques to stop people from infiltrating your systems. I want to thank everyone for joining the webinar. It was a pleasure talking about AI and how it's impacting AML.