
Banks and financial institutions that are not using AI to detect fraud are going to be outgunned because those trying to commit fraud are using AI to try to infiltrate their systems.
The warning came from David Asermely, the Global Lead of Model Risk and AI Governance at SAS, a leader in data and AI solutions for banks, financial institutions and insurers.
During a recent visit to Australia to talk with SAS customers, Asermely, who is based at SAS’ headquarters in North Carolina, said banks were more conservative when it came to AI because of what was at stake.
“Banks have a lot of data – personal data – so there should be controls. I am all for a cautious approach, but if you are not using AI to help identify fraud while those trying to commit fraud against you are using it, are you going to be outgunned? Can you compete with them, or are they going to find holes using these tools, which you have not enabled in your organisation to fight back?
“It is an arms race in the world of fraud and cyber,” he warned. “It is forcing organisations – even if they want to be conservative – to build their arsenal of these tools so that they can be successful.”
Asermely said he believes there will be failures in AI risk management because of a lack of imagination.
“What I mean by that is that if you are moving an AI system into production, you need to ask: what are the risks if that AI goes wrong? Who is going to create this list? Is it the developer? Is it the risk team? My view is that it should be a combination of different people within the organisation. If personal data was used to inform that AI, then there is a data governance risk of personal, private or sensitive data being leaked out of that AI system.”
Asermely said businesses also needed to be cautious about the secrets they tell AI systems, and what would happen if those secrets were stolen by malicious users. “Those malicious users now have the blueprint on how to navigate around your fraud protection or your cyber security infrastructure.
“So, when you look at the risks associated with AI, you need to get the right people involved in the process of defining the risks, being imaginative about what could go wrong, and doing that in a way that allows you to put in controls to make sure those things don’t happen.”
Asermely said having governance around AI is just part of the cost of doing business. “It is not just checking the box and ensuring a company is doing it in a trustworthy way. What’s actually happening is that those governance principles are being used to improve AI by providing feedback. Through the governance process you identify gaps; you identify things that are not working well, and that can then be used as information to go back and improve the AI through that feedback loop.
“One of the interesting parts of AI is that it can learn over time, but the learning will be dictated by the feedback you can provide as an organisation on how well it is doing and where it is doing poorly, and then using that to drive the quality of the AI.”