Deloitte and UBS hosted a roundtable on Artificial Intelligence (AI) at the recent Innovate Finance Global Summit 2018 (IFGS18). We had representatives from across the FinTech ecosystem covering incumbents, start-ups, scale-ups, consultants and other service providers.
AI is clearly a hot topic and there are a number of challenges and opportunities to explore. We chose four key themes, crowdsourced from experts in the area:
- Navigating the hype
- Bias and transparency
- Collaboration between market participants
- Role of the regulator
After a lively discussion, we used a voting system to identify the top messages by theme. The messages that earned the highest number of votes are summarised below.
Themes and key messages from the roundtable
How to navigate the hype
There is significant hype around AI, and some firms are trying to use AI for AI's sake. Equally, many AI-driven solutions are being pushed in the market and a plethora of use cases are being trialled by firms, all of which inflates expectations of AI and its potential benefits.
To navigate effectively through this hype, participants believed that firms should start with defining the business problem and the outcomes they want to achieve. Secondly, they should consider whether AI is the best technology to achieve these outcomes. Often, it may not be.
If firms go on to choose an AI solution, then to derive the most benefit, the design and implementation teams will require knowledge of both the technology and the business to which it is being applied. The technologists in the room strongly believed that the role of business representatives was critical. Equally important is the need for firms to build knowledge and familiarity with the spectrum of AI solutions more generally, and specifically at a use case level, across their business. Participants emphasised that AI cannot be applied generically across use cases. Its use has to be considered on a case-by-case basis and is specific to business models, products and services.
How to manage bias and (lack of) transparency in decision-making algorithms
Participants agreed on the need to standardise the vocabulary used to discuss bias in AI solutions. A standardised and common vocabulary to describe what constitutes bias and the different types of possible bias, for example, would help firms identify, manage and monitor unfair outcomes throughout the lifecycle of an AI solution.
In the absence of such a common vocabulary, at a minimum, firms should test for any possible discrimination against persons with legally protected characteristics, such as gender or disability, as part of their AI development lifecycle. In addition, the importance of effective controls to identify and manage bias and the application of multiple scenarios during testing were seen as key measures to avoid unfair outcomes.
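As a purely illustrative sketch of what such testing might look like in practice (this example is ours, not the roundtable's: the record structure, the "approved" outcome and the group labels are all hypothetical), a minimal check could compare outcome rates across a protected characteristic and flag large gaps for human review.

```python
# Illustrative sketch only: a minimal demographic-parity check comparing
# approval rates across groups defined by a protected characteristic.
# The data model ("group", "approved") and threshold are hypothetical.

def approval_rate(records, group):
    """Share of approved outcomes among records belonging to the given group."""
    subset = [r for r in records if r["group"] == group]
    if not subset:
        return 0.0
    return sum(1 for r in subset if r["approved"]) / len(subset)

def parity_gap(records, group_a, group_b):
    """Absolute difference in approval rates between two groups."""
    return abs(approval_rate(records, group_a) - approval_rate(records, group_b))

# Hypothetical model decisions, labelled by group.
decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

THRESHOLD = 0.2  # an arbitrary review threshold for this sketch
gap = parity_gap(decisions, "A", "B")
if gap > THRESHOLD:
    print(f"Parity gap {gap:.2f} exceeds threshold; flag for human review")
```

A check like this is deliberately crude; in practice firms would test multiple metrics and scenarios, in line with the multi-scenario testing participants described.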
Finally, clarity and transparency in the design and decision drivers of AI algorithms were also identified as key to minimising bias. And until an acceptable level of transparency of complex algorithms can be achieved, the role of human judgement was seen as vital. In fact, participants believed that AI should, for the time being, be used to inform and supplement human business decisions, rather than “make” them - especially when such decisions may have a significant impact on consumers. This reflects the fact that AI applications in financial services have a long way to go to match the human cognitive skills and judgement required in a highly regulated environment.
Diversity of talent, both in terms of gender and experience, involved in the various stages of the AI lifecycle was seen as critical to spotting and minimising unconscious bias.
How can market participants collaborate?
Participants saw the next stage in the evolution of collaboration as finding areas of mutual benefit or aligned objectives between collaborators. Some felt that the power still resides with incumbents in a collaborative relationship, with the incumbent often dictating terms or holding an equity stake and influencing the direction of travel of the start-up’s strategy. True collaboration, according to the participants, happens when business problems are identified and openly discussed and innovators, at both incumbents and start-ups, work hand in hand, and on equal terms, to solve the problem. To this end, there was a call for the more established market players to hold “open days” where business problems can be shared across the FinTech community.
Participants felt that non-competitive use cases, such as the creation of a shared utility, are the most likely candidates for effective collaboration, as they explore uncharted territory and the whole market would benefit from a successful outcome. Subject to the necessary safeguards in relation to data protection and privacy, the sharing of market data was also seen as a possible avenue of collaboration and a way to reduce one of the key barriers to AI, namely the lack of large sets of high-quality data.
Participants also saw the testing of use cases in the public domain as an important channel to improve public understanding, counter negative perceptions and ultimately help with the acceptance of AI. This in turn would increase transparency in the market around the solutions, as well as the challenges AI poses.
Most interestingly though, the majority of participants saw the regulator playing a pivotal role in this area, which is a nice segue into our next question.
What should the role of regulators be?
Participants overwhelmingly believed that regulators were best positioned to act as neutral “conveners” to facilitate ongoing industry dialogue and collaboration and drive consensus in relation to definitions and common standards. This echoes our own thought leadership on AI, where we identified a potential role for the regulator to define the issues to be addressed, and then call on the industry to develop the relevant AI standards and codes of conduct. However, it was interesting to note how strongly the participants believed that the regulator was the right catalyst for greater collaboration.
Participants also believed that the regulator should collaborate with industry to identify areas that were real or perceived regulatory barriers to AI development and provide clarification.
In terms of the areas regulators should address, participants highlighted the need for regulators to develop further the concept of accountability in an AI environment, in particular at the point of handover from man to machine, and if and when man ever stops being accountable for the actions of the machine. However, our own view is that the direction of travel of regulators, specifically in the UK, is one of increasing individual accountability. A relevant case in point is the Prudential Regulation Authority’s proposed expectations regarding firms’ governance and risk management of algorithmic trading, according to which each algorithm will be required “to have assigned owners, who are accountable for the algorithm’s use and performance”.
Participants stressed the need for regulators to better understand the effect of greater AI adoption on the workforce. Reskilling workers whose jobs may be replaced by AI solutions, the new skills required to manage and control AI risks, and changes to the operating model and the related impact on succession planning were some of the areas explored.
The discussion also touched upon how regulators and supervisors should use AI for their own work. As firms work to embed AI in their businesses, the extent to which regulators themselves will innovate will have important implications for the effectiveness of their supervision and their ability to formulate regulations that are fit-for-purpose.
Overall, the discussion at the roundtable brought forth many interesting views on some of the key issues and opportunities in relation to the use of AI in financial services. Many of these echo the messages in our recent paper on AI and Risk management: Innovating with confidence.
The consensus was that humans have a critical role to play in both the design of an AI application and also in any final decision taken in relation to the relevant business outcome. This in turn emphasises that, for now at least, AI will most often serve as an input into a decision made by a human, rather than a substitute for it. There are many areas to tackle to increase the acceptance and adoption of AI - bias and transparency being key ones.
The role proposed for the regulator as an accelerator and facilitator of collaboration and AI adoption, as well as a rule-maker, is interesting. Only time will tell whether regulators have the mandate and appetite to rise to this challenge.
Suchitra Nair, Director, EMEA Centre for Regulatory Strategy, and Peter Stephens, Head of UK Group Innovation, UBS, co-chaired the session.