Saxo Bank’s own AI explorations are still at a relatively early stage, but we already have clear evidence that AI can provide differentiated, bespoke and high-quality services to clients in a way that humans cannot, even when supported by large-scale processing power. In large part, this is because AI-based tools can anticipate and respond to fast-changing client needs and preferences that their creators could not have predicted. Their ability to learn on the go also means that they are not only faster, more flexible and more accurate, but also available on a 24/7/365 basis, without breaks for training or ‘downtime’.
A widespread example of this is chatbots used to automate simple customer service queries, which are rapidly evolving into ‘voicebots’ as natural language processing capabilities improve. At Saxo, we have piloted initiatives to send more targeted content to clients and to screen sales leads using AI-based programmes. In both cases, the initial roll-outs have been encouraging: click-through rates are considerably higher for individualised content and recommendations, and sales teams are now spending more time on value-added follow-up tasks rather than administrative processes.
A quality problem
But this ability to anticipate and adapt should be treated with caution by those looking to expand their AI programmes into new areas. Because AI applications evolve and change as they consume data, firms need to monitor that consumption and its consequences. The data on which AI-based tools feed must be relevant and clean, which can be a major challenge both when sourcing data internally from multiple legacy systems and when drawing on newer third-party providers.
This problem of data quality is universal and complex at a time when innovation is placing myriad vast new data sources at our disposal, a challenge driven partly by their sheer quantity but also by their largely unstructured nature.
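To make this concrete, the sketch below shows the kind of simple quality gate that such monitoring implies: incoming records are checked for completeness, plausibility and freshness before an AI-based tool is allowed to consume them. The field names and thresholds are illustrative assumptions only, not a description of Saxo’s actual systems.

```python
# Illustrative data-quality gate for records flowing into an AI-based tool.
# Field names and thresholds are hypothetical, not Saxo's actual schema.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import List, Optional, Tuple

@dataclass
class Record:
    source: str              # internal legacy system or third-party provider
    instrument: str          # identifier of the instrument the record refers to
    price: Optional[float]   # may be missing in a dirty feed
    timestamp: datetime      # when the record was produced

def is_clean(record: Record, max_age: timedelta = timedelta(minutes=15)) -> bool:
    """Reject records that are incomplete, implausible or stale."""
    if record.price is None or record.price <= 0:
        return False   # missing or implausible value
    if not record.instrument:
        return False   # missing identifier
    if datetime.now(timezone.utc) - record.timestamp > max_age:
        return False   # stale data, e.g. from a slow legacy system
    return True

def filter_feed(records: List[Record]) -> Tuple[List[Record], List[Record]]:
    """Split a feed into usable records and rejects to be monitored."""
    accepted = [r for r in records if is_clean(r)]
    rejected = [r for r in records if not is_clean(r)]
    return accepted, rejected
```

Tracking the size of the rejected pile over time is one straightforward way to monitor the consumption described above, and to spot when a legacy system or external provider starts degrading.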
The AI challenge lies not just in data quality but also in setting appropriate boundaries. Trading algorithms, for example, have typically gone awry when their parameters were insufficiently robust. For this reason, Saxo only allows its AI-driven content feed to choose from carefully curated, credible information sources.
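As a hedged illustration of how such a boundary can be enforced in practice, the sketch below applies an allow-list that the recommendation logic cannot step outside of. The source names and the ranking function are placeholders, not Saxo’s actual configuration.

```python
# Illustrative allow-list gate for an AI-driven content feed.
# Source names and the ranking function are hypothetical placeholders.
from typing import Callable, Dict, List

CURATED_SOURCES = {"internal-research", "exchange-notices", "licensed-newswire"}

def select_content(candidates: List[Dict], rank: Callable[[Dict], float],
                   limit: int = 5) -> List[Dict]:
    """Rank and surface only items that come from curated, credible sources."""
    eligible = [item for item in candidates if item.get("source") in CURATED_SOURCES]
    return sorted(eligible, key=rank, reverse=True)[:limit]

# 'rank' would be the model's relevance score for a given client;
# anything from an unvetted source never reaches the ranking step.
```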
Into the unknown?
Further, the constant learning and adaptation that makes AI programmes so valuable also makes it hard to explain their decisions or recommendations by working backwards from the output – a process known as reverse engineering – and this opacity may limit their usage. Regulators are increasingly alert to the application of AI, aware that its use can run counter to requirements for banks to demonstrate oversight and understanding of their tools and processes, notably in the realms of financial crime compliance and cyber security.
While it is acceptable for a service provider in the travel or entertainment sector not to fully understand how its recommendation algorithm comes up with relevant suggestions based on past browsing history, it is less acceptable if a bank cannot explain why its AI-enabled surveillance system picks up on some data flow anomalies and not others, even if its hit rate is high.
As such, regulators are working together with regulated firms on defining the scope of AI-based decision making, in order for all parties to become more comfortable with the potential risks and benefits. This is an important step towards the fuller exploitation of AI’s benefits in the finance sector within an appropriately clear but flexible legislative framework.
Over time, AI can help banks, their customers and regulators benefit from a clearer audit trail, including time-stamped details of the information communicated between counterparties, as well as more consistent service levels that are less reliant on human understanding and interpretation. Already, regulators are exploring opportunities to cut compliance costs via machine-readable rules, which could also reduce banks’ legal bills.
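As a rough illustration of what such an audit trail could look like at the level of a single model-driven decision, the sketch below appends time-stamped records of the counterparty involved, what was decided and which inputs the model saw. The field names are assumptions for the purpose of illustration; a real system would write to durable, tamper-evident storage.

```python
# Illustrative time-stamped audit trail for model-driven decisions.
# Field names are assumptions; a real system would write to durable storage.
import json
from datetime import datetime, timezone
from typing import Dict, List

def log_decision(audit_log: List[Dict], counterparty: str,
                 decision: str, inputs: Dict) -> None:
    """Append a time-stamped record of what was decided and on what basis."""
    audit_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "counterparty": counterparty,
        "decision": decision,
        "inputs": inputs,   # the data the model saw at decision time
    })

audit_log: List[Dict] = []
log_decision(audit_log, counterparty="client-123",
             decision="content-recommendation", inputs={"topic": "FX options"})
print(json.dumps(audit_log, indent=2))
```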
Adapting business models
But the biggest obstacle to leveraging AI is the attitude of the C-suite, not regulators. Firms must embrace AI from the top down, ensuring that their business model makes AI truly integral, not hidden away within the technology stack. To this end, pertinent questions include the following: which business cases have we identified for the adoption of AI, and which should we prioritise? To what extent does our data management strategy support the use of AI? Which AI-related resources should we source from partners, and which should we develop in-house? How are we going to compete for the talent needed to run a wide range of AI initiatives?
Critically, the answers to these questions must be informed by client focus. Over the next two to three years, we will increasingly see AI-based solutions allow for faster, more scalable and more accurate customisation. At last, banking services will evolve to meet individual client needs, rather than clients working hard to understand which of the products on offer actually suit their situation. Banks have spent the best part of a decade working to regain customer trust; AI can play a significant role in rebuilding that relationship.
Christian Hededal is Head of Data Science at Saxo Bank Group