Digital transformation has for years seemed like nothing but a buzzword. More recently, however, it has begun yielding the hoped-for results as banks, retailers and other businesses supplement traditional customer service channels with digital services.
All the while, an army of agile start-ups, FinTechs and other data-driven organisations have disrupted the financial services and retail landscapes. Through powerful business applications, they’ve brought a series of new and innovative services to the market that deliver on rising consumer expectations.
This has led to a digital-first mentality among customers, who now expect online services as a default option – and younger generations in particular will have no problem looking elsewhere if this cannot be provided. While digital services no doubt allow for a more seamless journey, they also make us more vulnerable to fraudsters by increasing the number of potential avenues for attack.
Speeding up the drive to digital
In the early months of 2020, we saw a boom in digital services, while the traditional physical economy slowed to a crawl. To stay in business, many companies have been forced to move services online faster than they had planned. In the rush to get these new digital services to market, there’s a significant risk that development teams will make mistakes and overlook the usual security checks. Unfortunately, the likely result is that fraudsters will have a field day as they find and exploit these new gaps in their victims’ armor.
Agility in fraud prevention
In a highly dynamic environment where fraudsters are discovering new attack vectors every day, it’s critical for fraud prevention teams to be able to detect threats and respond quickly. Artificial intelligence and machine learning (AI/ML) approaches can help by spotting patterns in previous fraud cases and using them to detect suspicious behavior by customers, employees or systems.
AI/ML is a vast and highly technical field, and it can be difficult for fraud teams to choose the best way to start their adoption journey. Nevertheless, at SAS we’re already seeing banks and other organisations put a variety of interesting AI/ML-powered anti-fraud solutions into production. For example:
1. Computer vision
Digital banks such as Monzo are using smartphone cameras with facial recognition technology to prevent unauthorized users from gaining access to customers’ accounts via their mobile apps. Today’s powerful facial recognition solutions are built using machine learning models that can tell the difference between a customer’s face and a photo or mask.
They can even detect when a person is sleeping or unaware that the camera is being used, potentially making them a much more powerful access control measure than traditional password-based login methods.
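The core of such a face-match check can be sketched in a few lines, assuming an upstream model (not shown) has already converted each camera frame into a numeric embedding. The vectors, threshold and function names below are purely illustrative, not any bank's actual implementation:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_same_person(enrolled, candidate, threshold=0.8):
    """Accept the login only if the candidate embedding is close
    enough to the embedding captured at enrolment."""
    return cosine_similarity(enrolled, candidate) >= threshold

# Toy 4-dimensional embeddings (real models emit hundreds of dimensions)
enrolled = [0.9, 0.1, 0.3, 0.2]
genuine = [0.88, 0.12, 0.31, 0.19]   # same face, slightly different photo
impostor = [0.1, 0.9, 0.2, 0.7]      # a different face entirely

print(is_same_person(enrolled, genuine))   # True
print(is_same_person(enrolled, impostor))  # False
```

In production the interesting work happens inside the embedding model itself, including the liveness checks described above; the comparison step stays this simple.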
Banks are also using image recognition to streamline processes such as paying in cheques, where customers simply take a photo of the cheque and upload it via their banking app. Banks already use machine learning models to identify whether the image is a genuine cheque and extract the key information from it. It will be a natural progression to analyse signatures and detect more types of potential cheque fraud.
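As one illustration of the kind of check this enables, the sketch below cross-checks a cheque's courtesy amount (the numerals) against its legal amount (the words), a mismatch being a classic sign of alteration. It assumes an upstream OCR step has already extracted both strings; the word parser covers only a small vocabulary and everything here is invented for illustration:

```python
# Cross-check a cheque's courtesy amount (numerals) against its legal
# amount (words). Assumes upstream OCR has extracted both strings.
UNITS = {"zero": 0, "one": 1, "two": 2, "three": 3, "four": 4,
         "five": 5, "six": 6, "seven": 7, "eight": 8, "nine": 9,
         "ten": 10, "eleven": 11, "twelve": 12, "thirteen": 13,
         "fourteen": 14, "fifteen": 15, "sixteen": 16,
         "seventeen": 17, "eighteen": 18, "nineteen": 19}
TENS = {"twenty": 20, "thirty": 30, "forty": 40, "fifty": 50,
        "sixty": 60, "seventy": 70, "eighty": 80, "ninety": 90}

def words_to_amount(text):
    """Parse a legal amount written in words (limited vocabulary)."""
    total, current = 0, 0
    for word in text.lower().replace("-", " ").replace(" and ", " ").split():
        if word in UNITS:
            current += UNITS[word]
        elif word in TENS:
            current += TENS[word]
        elif word == "hundred":
            current *= 100
        elif word == "thousand":
            total += current * 1000
            current = 0
    return total + current

def amounts_match(courtesy, legal_words):
    return int(courtesy) == words_to_amount(legal_words)

print(amounts_match(1250, "one thousand two hundred and fifty"))  # True
print(amounts_match(9250, "one thousand two hundred and fifty"))  # False
```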
2. Natural language processing
Natural language processing and text analytics can help companies handle larger volumes of internal and external communications – such as phone calls, emails, SMS and instant messenger/chatbot interactions – while still maintaining robust anti-fraud measures. For example, in a banking context, many institutions already record the phone calls of their traders and other employees to provide evidence in cases of insider trading and other financial crimes.
By using natural language processing techniques, organisations can automatically transcribe these audio files into text. Then AI/ML models can recognize relevant keywords and topics, analyse tone and sentiment, and raise alerts to the fraud team when suspicious behavior rises above a given threshold.
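A minimal sketch of this kind of alerting, using hard-coded watch-phrases and weights in place of a trained model (every phrase, weight and threshold here is invented for illustration; a real deployment would learn these from labelled historical cases):

```python
# Score a transcribed call against weighted watch-phrases and raise an
# alert when the cumulative score crosses a threshold.
WATCHLIST = {
    "off the record": 3.0,
    "delete this": 3.0,
    "before the announcement": 2.0,
    "guaranteed return": 2.0,
    "keep it between us": 2.5,
}

def score_transcript(transcript, watchlist=WATCHLIST):
    """Sum the weights of every watch-phrase found in the transcript."""
    text = transcript.lower()
    return sum(weight for phrase, weight in watchlist.items()
               if phrase in text)

def should_alert(transcript, threshold=4.0):
    return score_transcript(transcript) >= threshold

call = ("Let's keep it between us - buy before the announcement, "
        "it's a guaranteed return.")
print(score_transcript(call))  # 6.5
print(should_alert(call))      # True
```

Sentiment and tone analysis would add further signals on top of this simple keyword score.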
3. Minimizing false positives
False positives are the bane of fraud investigators’ existence, diverting expert resources away from the true criminals and alienating innocent customers and employees. You can use AI/ML techniques to build models that can analyse previous cases and separate out the behavior patterns that are truly suspicious from the purely superficial anomalies.
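One simple, concrete version of this is tuning the alert threshold on historical labelled alerts: for example, choosing the highest score threshold that still catches a required share of confirmed fraud, so every point above it costs as few false positives as possible. A sketch with made-up data:

```python
# Pick the alert threshold that minimises false positives while still
# catching a required fraction of confirmed fraud, using historical
# labelled alerts as (score, was_fraud) pairs. Data is made up.
def tune_threshold(history, min_recall=0.9):
    frauds = [s for s, fraud in history if fraud]
    for candidate in sorted({s for s, _ in history}, reverse=True):
        caught = sum(1 for s in frauds if s >= candidate)
        if caught / len(frauds) >= min_recall:
            return candidate
    return None

history = [(0.95, True), (0.90, True), (0.85, True), (0.80, False),
           (0.75, True), (0.60, False), (0.55, False), (0.40, False)]

threshold = tune_threshold(history, min_recall=0.75)
false_pos = sum(1 for s, fraud in history if s >= threshold and not fraud)
print(threshold, false_pos)  # 0.85 0
```

An ML model built on past cases plays the same role with far richer inputs: it produces the score itself, not just the cut-off.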
4. Improving rule-based methodologies
Many current fraud detection systems use a defined set of business rules to assess the likelihood that a given case requires investigation. You can use AI/ML models to supplement and test these rule sets. This provides insight into the relationships between rules and the relative predictive power of each, and can even suggest new rules to add that increase the accuracy of the results.
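One basic way to test a rule set against history is to measure each rule's hit count and precision on labelled past cases, so weak rules can be retired and strong ones weighted up. The rules and cases below are invented for illustration:

```python
# Measure each business rule's hit count and precision against
# historical labelled cases. Rules are simple predicates over a case.
RULES = {
    "high_value": lambda c: c["amount"] > 10_000,
    "new_payee": lambda c: c["payee_age_days"] < 1,
    "odd_hours": lambda c: c["hour"] < 6,
}

def rule_report(cases, rules=RULES):
    report = {}
    for name, rule in rules.items():
        hits = [c for c in cases if rule(c)]
        true_hits = sum(1 for c in hits if c["was_fraud"])
        precision = true_hits / len(hits) if hits else 0.0
        report[name] = {"hits": len(hits), "precision": precision}
    return report

cases = [
    {"amount": 15_000, "payee_age_days": 0, "hour": 3, "was_fraud": True},
    {"amount": 12_000, "payee_age_days": 400, "hour": 14, "was_fraud": False},
    {"amount": 200, "payee_age_days": 0, "hour": 2, "was_fraud": True},
    {"amount": 50, "payee_age_days": 900, "hour": 4, "was_fraud": False},
]

for name, stats in rule_report(cases).items():
    print(name, stats)
```

A trained model goes further by also weighting combinations of rules, but even this plain tally shows which rules pull their weight.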
5. Uncovering collusion
One of the most powerful tools in an investigator’s toolkit is network analysis, which helps visualize and understand the relationships between the people, places and events surrounding a case under investigation. Just like human investigators, AI/ML models can be trained to interpret these complex networks, and they can often identify patterns and relationships that traditional approaches might miss.
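A simple form of this can be sketched as a graph problem: accounts become nodes, sharing an identifying detail (a device, address or phone number) creates an edge, and unusually large connected components get flagged for review as possible collusion rings. All account data below is illustrative:

```python
# Flag clusters of accounts that share identifying details - a common
# sign of collusion. Connected components of the sharing graph that
# reach a minimum size are returned for investigation.
from collections import defaultdict

def collusion_clusters(accounts, min_size=3):
    # Link accounts through every shared attribute value.
    by_value = defaultdict(list)
    for acct, attrs in accounts.items():
        for value in attrs:
            by_value[value].append(acct)
    graph = defaultdict(set)
    for members in by_value.values():
        for a in members:
            graph[a].update(m for m in members if m != a)
    # Find connected components with a simple traversal.
    seen, clusters = set(), []
    for start in accounts:
        if start in seen:
            continue
        component, queue = set(), [start]
        while queue:
            node = queue.pop()
            if node in component:
                continue
            component.add(node)
            queue.extend(graph[node] - component)
        seen |= component
        if len(component) >= min_size:
            clusters.append(sorted(component))
    return clusters

accounts = {
    "A": {"device-1", "addr-9"},
    "B": {"device-1"},          # shares a device with A
    "C": {"addr-9", "phone-4"}, # shares an address with A
    "D": {"phone-7"},           # unconnected
}
print(collusion_clusters(accounts))  # [['A', 'B', 'C']]
```

Trained models layer scoring on top of structures like this, weighting edges by how suspicious each shared detail is.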
6. Monitoring network logs
The move towards providing digital services for customers and remote working capabilities for employees poses new problems for network security teams, who can no longer count on all sensitive activity taking place behind the corporate firewall. However, you can also use AI/ML solutions to process vast quantities of network logs and identify suspicious events at a speed and scale far beyond the capabilities of human network administrators.
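As a toy illustration of log-scale anomaly detection, the sketch below flags source IPs whose failed-login counts are statistical outliers within a log window, using a robust median-based score rather than any specific vendor's technique. Field names and figures are invented:

```python
# Flag source IPs whose failed-login counts are outliers relative to
# the rest of the log window, using a median/MAD score that is robust
# to the very outliers we are hunting for.
from statistics import median

def anomalous_sources(failed_logins, cutoff=3.5):
    counts = list(failed_logins.values())
    med = median(counts)
    mad = median(abs(n - med) for n in counts)  # median absolute deviation
    if mad == 0:
        return []
    # 0.6745 rescales MAD to match the standard deviation scale.
    return [ip for ip, n in failed_logins.items()
            if 0.6745 * (n - med) / mad > cutoff]

failed_logins = {
    "10.0.0.5": 3, "10.0.0.8": 2, "10.0.0.9": 4,
    "10.0.0.11": 3, "203.0.113.7": 250,   # brute-force candidate
}
print(anomalous_sources(failed_logins))  # ['203.0.113.7']
```

Production systems apply the same idea across millions of events per hour and many signals at once, which is where the speed and scale advantage over human administrators comes from.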
Putting a platform into practice
Open source tools tend to be where most organisations begin their journey with AI and ML, and this works well for small-scale deployments. However, as businesses scale up to enterprise-grade deployments, the process becomes more complex and a robust strategy is required.
Taking a centralized approach is one way to drive success, whereby organisations deploy an analytics platform capable of supporting both orthodox statistical approaches and AI/ML techniques. Beyond the platform itself, businesses also require governance to ensure information is used appropriately and that model testing is carried out effectively, as well as ongoing monitoring to minimize the risk of model performance degrading over time.