Case Study - AML BFSI

Reduction of False Positives in SAS AML Data

Executive Summary

One of the leading banks in the UAE uses SAS, a post facto platform, to handle Anti-Money Laundering (AML). Alerts are currently rule based: transactions made by a customer can be flagged as suspicious based on various rules, scenarios and parameters. Flagged alerts are manually checked through a maker-checker analysis for validity before the ticket is closed with appropriate comments.

The Bank's compliance team defines the rule scenarios, and the scenario parameters are periodically reviewed and updated by the same team.

For example, SAS maintains a watch list of countries; if there is any transaction involving one of these countries, an alert is generated. If the compliance team later wants to add or delete a country from the existing watch list, it can do so by altering the scenario parameters.
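As an illustration of how such a scenario works, a watch-list rule can be thought of as a parameterised check applied to each transaction. The sketch below is a simplified Python analogue; the country codes, field names and function name are illustrative assumptions, not SAS internals.

```python
# Simplified illustration of a parameterised watch-list scenario.
# Country codes, field names and the alert structure are illustrative
# assumptions, not the actual SAS configuration.

WATCHLIST_COUNTRIES = {"XX", "YY", "ZZ"}  # maintained by the compliance team

def watchlist_scenario(transaction):
    """Return an alert record if the transaction involves a watch-listed country."""
    country = transaction.get("counterparty_country")
    if country in WATCHLIST_COUNTRIES:
        return {
            "scenario_id": "WATCHLIST_COUNTRY",
            "transaction_id": transaction["transaction_id"],
            "reason": f"Counterparty country {country} is on the watch list",
        }
    return None

# Adding or deleting a country is simply a change to the scenario parameter:
WATCHLIST_COUNTRIES.add("WW")
```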

Currently SAS generates numerous alerts flagging transactions as suspicious (potentially illegal); however, post review, 90%+ of these alerts are found to be normal (legal), i.e. FALSE POSITIVES. As a result, the organization suffers a high operational cost.

Our proof-of-concept work is to automate, by effectively using Artificial Intelligence, the work of the operations people (makers) who manually check these alerts.

The following tables were reviewed, with comments and observations on each:

FSK_ALERT

• ALERT_ID – primary key
• The important corresponding columns were SCENARIO_ID, MONEY_LAUNDERING_RISK_SCORE, TERRORIST_FINANCING_RISK_SCORE, ALERTED_ENTITY
• Other columns were DATE_KEY, TIME_KEY, etc.

FSK_ALERT_TRANSACTION

• Important columns were TRANSACTION_ID, ALERT_ID, AMOUNT

FSK_ALERT_EVENT

• Important columns were ALERT_ID, EVENT_DESCRIPTION
• The event description tells whether an alert is a FALSE POSITIVE or not. To find the TRUE POSITIVE labels we need to manually go through the comments from FSK_COMMENT, which holds ALERT_IDs and the corresponding descriptions for the alerts.

FSK_SCENARIO

• Total of 1,024 scenarios
• SAS generates alerts based on these scenarios
• These scenarios are subject to change over time

Approaches

Classification of Scenarios and Hierarchical segmentation

Using this approach, scenarios would be classified into two groups.

High-risk scenario group: scenarios whose alerts have historically turned out to be True Positives.
Low-risk scenario group: scenarios whose alerts have predominantly been False Positives.
A risk scoring function is employed to give a risk score to each of the rules. We will restructure the rules (scenarios) into a hierarchical order by assigning the priority score obtained from the above-mentioned risk scoring function. When an alert is generated it will be checked against its risk group and can be auto-closed accordingly.
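A minimal sketch of such a risk scoring function, derived from historical alert outcomes, is shown below. The column names (SCENARIO_ID, IS_TRUE_POSITIVE) and the 0.05 threshold are illustrative assumptions, not values from the actual engagement.

```python
import pandas as pd

def score_scenarios(alerts: pd.DataFrame) -> pd.DataFrame:
    """Compute a per-scenario risk score from the historical true-positive rate."""
    grouped = alerts.groupby("SCENARIO_ID")["IS_TRUE_POSITIVE"]
    scores = grouped.mean().rename("risk_score").reset_index()
    # Scenarios with a very low true-positive rate mostly produce false positives
    # and are candidates for auto-closure; the rest are treated as high risk.
    scores["risk_group"] = scores["risk_score"].apply(
        lambda s: "high" if s >= 0.05 else "low"
    )
    return scores.sort_values("risk_score", ascending=False)

def route_alert(scenario_id: str, scores: pd.DataFrame) -> str:
    """Auto-close alerts from low-risk scenarios; escalate the rest to a maker."""
    group = scores.set_index("SCENARIO_ID").loc[scenario_id, "risk_group"]
    return "auto-close" if group == "low" else "escalate"
```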

Transaction level scenario predictive modeling

Using this approach, all the scenarios triggered for a given transaction would be clubbed together.
Transaction information would be added as well, and the model would then predict whether a scenario is a True Positive (TP) or a False Positive (FP).

To achieve this, we will collect as many transactions as possible and club together the scenarios alerted for those transactions. We will also take the scenario outcomes (TP or FP) for the respective transactions from previous records. Grouping all this together, we will use Machine Learning to find the patterns in the data that make a scenario FP or TP, and store them in a model.

Once the model is ready, for any new transaction that is alerted under different scenarios, the model will be able to predict the nature of those scenarios.
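A minimal sketch of this transaction-level model is shown below, assuming a prepared tabular extract with one row per alerted transaction, one-hot columns for the triggered scenarios, the amount, and a historical TP/FP label. The file name, column names and choice of classifier are illustrative assumptions.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Minimal sketch of transaction-level TP/FP prediction.
# "transactions.csv" is assumed to hold one row per alerted transaction with
# AMOUNT, one-hot scenario columns (SCN_*) and a label IS_TRUE_POSITIVE.

data = pd.read_csv("transactions.csv")
feature_cols = [c for c in data.columns if c.startswith("SCN_")] + ["AMOUNT"]
X, y = data[feature_cols], data["IS_TRUE_POSITIVE"]

model = RandomForestClassifier(n_estimators=200, class_weight="balanced", random_state=0)
scores = cross_val_score(model, X, y, cv=10, scoring="f1")
print(f"10-fold F1: {scores.mean():.3f}")

model.fit(X, y)  # final model used to score newly alerted transactions
```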

Customer level scenario predictive modeling 

This is a more advanced approach: here we take the most frequently occurring scenarios among TPs as well as FPs.

Assuming a top 50 in each case out of the 1,024 scenarios, the training data should contain customers' transaction data plus customer information, with labels indicating whether each scenario is TP or FP. The intention is to be able to perform multi-label classification.

Each data point should have the following features:

• Transaction ID
• Customer information such as age, gender, geography, high-risk or low-risk customer, amount transferred, account type, transfer mode, beneficiary account details, etc.
• Scenarios for the alerts, as labels indicating whether each is TP or FP.

Likewise, we will have all transactions made by customers and the scenario details for each transaction. We will predict how likely each scenario is to be TP or FP based on the customer information and transaction details.
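A minimal sketch of this multi-label setup is shown below, assuming an extract with customer and transaction attributes as features and one binary label column per top scenario (1 = TP, 0 = FP for that transaction). File names, column names and the classifier are illustrative assumptions.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.multioutput import MultiOutputClassifier
from sklearn.model_selection import train_test_split

# Minimal sketch of customer-level multi-label classification.
# "customer_transactions.csv" is assumed to hold customer/transaction features
# and one label column per top scenario (LABEL_SCN_*).

data = pd.read_csv("customer_transactions.csv")
label_cols = [c for c in data.columns if c.startswith("LABEL_SCN_")]
feature_cols = ["AGE", "RISK_RATING", "AMOUNT", "ACCOUNT_TYPE_CODE", "TRANSFER_MODE_CODE"]

X_train, X_test, y_train, y_test = train_test_split(
    data[feature_cols], data[label_cols], test_size=0.2, random_state=0
)

clf = MultiOutputClassifier(RandomForestClassifier(n_estimators=200, random_state=0))
clf.fit(X_train, y_train)

# One fitted estimator per scenario label; each can give a TP probability.
per_label_probabilities = [est.predict_proba(X_test) for est in clf.estimators_]
print("Subset accuracy:", clf.score(X_test, y_test))
```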

Constraints with the current data

After analysis of the collected data, each alert was passed through an NLP model built to analyze text. In addition, with minimal manual effort, the True Positive labels were identified.
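As an illustration of this labelling step, a minimal sketch is shown below: a small manually labelled sample of comments trains a simple TF-IDF text classifier, which then labels the remaining FSK_COMMENT text. The file names, column names and model choice are assumptions; they are not the actual NLP model that was built.

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Minimal sketch of deriving TP/FP labels from analyst comment text.
# "seed_labels.csv" holds a small manually labelled sample (COMMENT_TEXT,
# IS_TRUE_POSITIVE); "fsk_comment.csv" holds ALERT_ID and COMMENT_TEXT.

seed = pd.read_csv("seed_labels.csv")
comments = pd.read_csv("fsk_comment.csv")

text_clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=2),
    LogisticRegression(max_iter=1000, class_weight="balanced"),
)
text_clf.fit(seed["COMMENT_TEXT"], seed["IS_TRUE_POSITIVE"])

comments["IS_TRUE_POSITIVE"] = text_clf.predict(comments["COMMENT_TEXT"])
labels = comments[["ALERT_ID", "IS_TRUE_POSITIVE"]]  # alert-level TP/FP labels
```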

The data contained 52,768 False Positive alerts.

The approach was to map each alert to its transaction, customer ID, event and scenarios.

Data Problem

• There are no common alerts between the above two files.

Later, the SAS team informed MQI that they maintain another table, FSK_TRANSACTION_ALERT, with transaction details.

Since our approach mainly revolves around customer information and scenarios, data pertaining to customer information is required to proceed further.
We recommend mapping transactions by TRANSACTION_ID and linking the remaining customer information by CUSTOMER_ID for the results to be more effective.
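A minimal sketch of the recommended mapping is shown below, assuming flat extracts of the relevant tables; the file names and any columns beyond ALERT_ID, TRANSACTION_ID and CUSTOMER_ID are illustrative assumptions.

```python
import pandas as pd

# Minimal sketch of mapping alerts to transactions and customer information.
# Extract file names and the customer master table are assumptions.

alerts = pd.read_csv("fsk_alert.csv")                    # ALERT_ID, SCENARIO_ID, risk scores
alert_txn = pd.read_csv("fsk_alert_transaction.csv")     # ALERT_ID, TRANSACTION_ID, AMOUNT
transactions = pd.read_csv("fsk_transaction_alert.csv")  # TRANSACTION_ID, CUSTOMER_ID, ...
customers = pd.read_csv("customer_master.csv")           # CUSTOMER_ID, age, risk rating, ...

mapped = (
    alerts
    .merge(alert_txn, on="ALERT_ID", how="inner")
    .merge(transactions, on="TRANSACTION_ID", how="left")
    .merge(customers, on="CUSTOMER_ID", how="left")
)
```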

Work Done with Available Data

We have done predictive modeling to classify alerts as TP or FP using the risk scores below (generated by SAS):

• MONEY_LAUNDERING_RISK_SCORE
• TERRORIST_FINANCING_RISK_SCORE

This is done at the alert level, modeled as a binary classification problem with one class as TP and the other as FP.

We used a dataset of 97,432 alerts with 10-fold cross-validation.
The model achieved an overall accuracy of 98.14% and an F1-score of 0.98172. With the current data and the developed model, we are able to close the 97,432 alerts within 15 minutes.
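A minimal sketch of this alert-level model is shown below, using only the two SAS-generated risk scores as features and 10-fold cross-validation. The file name, label column and choice of classifier are illustrative assumptions, not the exact model that produced the figures above.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Minimal sketch of the alert-level TP/FP model built on the two SAS risk
# scores, evaluated with 10-fold cross-validation.

alerts = pd.read_csv("labelled_alerts.csv")   # the labelled alert dataset
X = alerts[["MONEY_LAUNDERING_RISK_SCORE", "TERRORIST_FINANCING_RISK_SCORE"]]
y = alerts["IS_TRUE_POSITIVE"]

clf = GradientBoostingClassifier(random_state=0)
accuracy = cross_val_score(clf, X, y, cv=10, scoring="accuracy")
f1 = cross_val_score(clf, X, y, cv=10, scoring="f1")
print(f"accuracy={accuracy.mean():.4f}  f1={f1.mean():.4f}")

clf.fit(X, y)  # final model used to bulk-score and close incoming alerts
```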

Current Status

The results obtained from alert-level modelling were promising and an indication that the approach is correct.

As the next step, we applied these models to actual, live customer information and data.

Further analysis was done on the transaction details and customer-specific data, and an enhanced view of the suspicious transactions was derived.

Achievements

40% Reduction in alerts!

Lowered false positives from the existing AML transaction monitoring system by applying additional layers of advanced analytics. The number of alerts generated was reduced by 40 percent.
 
Before:
On average, 6 lakh (600,000) false alerts were generated every 15 days.
After:
On average, 3.6 lakh (360,000) false alerts are passed ahead every 15 days.

30% increase in Disclosure rate of SAR!

The reduction in false positives improved the disclosure rate of Suspicious Activity Reports (SARs), leading to quicker and more efficient detection of real cases. The SAR disclosure rate increased by 30 percent.
 
Before: Out of all transactions that triggered alerts, 5 percent were actual illegal transactions; the disclosure rate was therefore 5 percent.
After: The disclosure rate became 8 percent.

50% reduction in rework!

Minimized the need to retrace investigation steps by having the NLP module automatically collect comprehensive information. This reduced the manual work done while investigating each of these cases, thus reducing the overall investigation time spent by makers.
 
Before:
Investigation time = 10 minutes
After:
Investigation time = 2-5 minutes using the NLP engine.

Business Impacts

  1. Efficient use of human expertise: By providing greater insight and eliminating duplicates, humans can focus more on the truly suspicious behaviors and quickly resolve low-risk alerts.
  2. Increased customer visibility: With a consolidated view of customer accounts and KYC data, the bank has improved its understanding of customer behavior and risk.
  3. Effective prioritization of risks: By clubbing the contextual information generated by the analytics engine with the experience of analysts, risks can be prioritized and resolved more quickly.
  4. Reduced operational cost: Operational cost is reduced by automating manual effort.

Future Work

Auditing of the rules happens once a year. It takes around 2+ months of effort from the entire team to audit and analyze the rules. Based on this audit, new rules are added and existing rules are changed or removed.

The recommendation module would analyze the performance of the AML engine and generate reports. It would have two components:

• Auto audit module
• Rules validation and recommendation module

Sub-module 1: Auto Audit Module
The proposed module will audit the AML engine based upon its learned weights. It will function as a feedback system and generate performance reports on the AML solution with the help of the NLP module.

Sub-module 2: Rules Validation and Recommendation Module
The proposed module will validate each rule and its parameters based upon the reports generated by the audit module. It will assign priority scores to different rules and recommend which rules should be added, changed or removed.
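A minimal sketch of what such a rule validation report could compute is shown below, using per-scenario alert outcomes; the column names and thresholds are illustrative assumptions.

```python
import pandas as pd

# Minimal sketch of a rule validation / recommendation report: per scenario,
# compute alert volume and false-positive rate, then attach a recommendation.

def rule_report(alerts: pd.DataFrame) -> pd.DataFrame:
    report = (
        alerts.groupby("SCENARIO_ID")
        .agg(alert_count=("ALERT_ID", "count"),
             tp_rate=("IS_TRUE_POSITIVE", "mean"))
        .reset_index()
    )
    report["fp_rate"] = 1 - report["tp_rate"]

    def recommend(row):
        if row.alert_count < 10:
            return "review: too few alerts to judge"
        if row.fp_rate > 0.99:
            return "candidate for parameter change or removal"
        if row.tp_rate > 0.5:
            return "keep: productive rule"
        return "keep: monitor"

    report["recommendation"] = report.apply(recommend, axis=1)
    return report.sort_values("fp_rate", ascending=False)
```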

Thank You