Using Artificial Intelligence for Regulatory Investigations – The Benefits, Boundaries and Beyond
By Richard Jeens, Partner, and Natalie Osafo, Associate, at Slaughter and May
When a regulator starts using new technology, it is important to take heed. One current trend is the increasing use of artificial intelligence (AI) by regulators and corporates to conduct investigations more efficiently.
AI has a broad meaning: put simply, it is the capability of computers to imitate human intelligence. However, this can take many forms. In this post, we consider and share our views on the opportunities, limitations and future uses of AI for investigations.
How is AI used for investigations?
Investigations are fundamentally about establishing what happened and why. Conventionally, this has involved searching for and manually reviewing data. However, date ranges and keywords cannot always reduce large datasets to a manageable size for review, and not all relevant material can be defined or caught by keywords.
AI has diversified the possibilities available. In the legal industry, ‘AI’ has manifested itself as pattern-finding algorithms, supervised and unsupervised machine learning, natural language processing and network analytics.
An example of this is the contract review tool Luminance, which is used by Slaughter and May’s lawyers to conduct M&A due diligence, amongst other things. In investigations, AI software which includes components of unsupervised and supervised machine learning can be utilised to facilitate faster and more effective document review.
Unsupervised machine learning is where an algorithm uses pattern recognition to cluster similar documents together, without additional input from a human. This is often the first step taken by AI software: a lawyer can then review a small number of documents in each cluster and quickly apply the same conclusion to other similar documents.
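The clustering step described above can be sketched in a few lines. This is a minimal illustration using scikit-learn's general-purpose TF-IDF vectoriser and k-means algorithm; the e-discovery platforms discussed in this post use their own proprietary pipelines, and the sample documents below are invented for the example.

```python
# Minimal sketch: cluster similar documents with no human labelling.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

documents = [
    "Invoice attached for Q3 consulting services.",
    "Please find the Q3 invoice for consulting attached.",
    "Board meeting moved to Thursday at 10am.",
    "Reminder: the board meeting is now on Thursday.",
]

# Convert raw text to TF-IDF vectors, then group similar documents
# together purely by pattern recognition.
vectors = TfidfVectorizer().fit_transform(documents)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

# Documents sharing a cluster label can be reviewed together: a lawyer
# checks one exemplar and applies the conclusion to its neighbours.
```

Here the two invoice emails fall into one cluster and the two meeting notices into the other, so a reviewer need only open one document from each group.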
Supervised machine learning involves someone teaching the algorithm how to identify certain data (e.g. documents, clauses or phrases) by attributing values to the data. The underlying algorithms are initially trained on a sample of documents (known as a ‘seed set’); the software then applies what it has learnt to a larger pool of documents and tags them accordingly. In the case of investigations, the teaching is often binary, such as “relevant” / “not relevant”, but it can be more complicated.
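In code, the seed-set workflow looks like the sketch below. It is illustrative only, using scikit-learn's TF-IDF vectoriser and logistic regression rather than any vendor's proprietary system, and the documents and binary “relevant” / “not relevant” labels are invented for the example.

```python
# Minimal sketch: train on a lawyer-labelled 'seed set', then tag
# the larger unreviewed pool automatically.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Seed set: documents a lawyer has already reviewed and labelled.
seed_docs = [
    "Payment routed via the agent's offshore account.",
    "Commission to intermediary exceeds the approved rate.",
    "Lunch menu for the office party next week.",
    "IT maintenance window scheduled for Sunday.",
]
seed_labels = ["relevant", "relevant", "not relevant", "not relevant"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(seed_docs, seed_labels)

# The trained model applies what it has learnt to new documents.
pool = ["Agent requests commission paid to offshore account."]
print(model.predict(pool))
```

In practice the seed set would be far larger, and the lawyer's corrections to the model's tags would be fed back in to retrain it, the continuous feedback loop discussed later in this post.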
In the UK, the Serious Fraud Office (SFO) is a leading user of AI for investigations. The first speech of the new SFO Director, Lisa Osofsky, signalled that the SFO is going to expand its technological capabilities into other areas, such as identifying proactive lines of investigation, indicating that this trend will continue.
The SFO currently uses RAVN ACE both to identify privileged documents (which can then be set aside) and to identify documents relevant to its own investigations. Its first use of ‘ACE’ in a criminal case was in 2016, for its investigation into Rolls-Royce (advised by Slaughter and May), where it reviewed 5.04 million documents for privilege and identified 2.35 million documents for review by the case team, in one month instead of the more usual timeframe of two years.
Benefits and boundaries of using investigations AI
AI can help you investigate Big Data efficiently
Training AI to identify key pieces of data, as explained above, is useful for quickly prioritising what to review (and not to review) in massive datasets. AI can also save time and expense by providing insights which facilitate more efficient planning of investigation data reviews.
For instance, AI tools can assist with structuring a review so that senior lawyers are allocated documents with the highest probability of relevance to review, or by cross-referencing different datasets (such as structured trading data and unstructured messenger chats between traders).
AI can boost the defensibility of investigations
In the Pyrrho case, the High Court recognised that AI-assisted reviews can be more consistent than, and just as accurate as, human reviews. AI’s consistency derives primarily from applying the same ‘learning’ to the whole dataset.
Furthermore, collaborating with regulators using AI to identify privileged material was relevant to the Court’s assessment of Rolls-Royce’s “full and extensive” cooperation in the SFO investigation.
Evaluating the results of AI and defining what “accurate” means is nevertheless paramount – AI algorithms are not homogenous and are only as accurate as the humans who train them and the data they are trained with.
They also cannot react to the unexpected in the same way as humans – hence the need for a continuous feedback loop in any review. This symbiotic relationship necessitates an augmentative role for AI in investigations alongside a team of subject matter experts, but it also helps to bridge the gap between what computers and humans can do. Key considerations include understanding the system’s capabilities and limits, and how, by whom and on which seed set the AI will be trained.
Complex issues in investigations can be missed by AI
AI is still in its infancy and is not very good at detecting complicated matters or predicting the unexpected aspects of an investigation. If what you are looking for is not clear-cut or requires ‘unteachable’ aspects of human judgment, AI may struggle.
For example, only recently has AI developed the potential to read between the lines of a document to pick up on coded phrases in employee communications that could be used to conceal impropriety. Yet the ability to analyse opaque evidence is a hallmark of investigation lawyers. The time required to supervise what AI cannot do in complex investigations must be weighed against any potential efficiency gains from using AI.
Unclear interaction with data protection law
The General Data Protection Regulation (GDPR) and UK Data Protection Act 2018 both regulate, and provide qualified rights for data subjects in respect of, the automated processing of personal data – a type of processing which many AI systems use.
Current EU and UK guidance on automated processing focuses on commercial situations such as automating credit application decisions, but the impact on investigations will no doubt be felt as individual rights under the GDPR are exercised more fully.
Beyond – AI going forward
Clearly AI is a game changer. AI can be a fast, precise and cost-effective way to plan investigations, but it is not a panacea. At the moment, significant resource can be required to compensate for AI’s limitations, which are amplified by the unpredictable nature of investigations and the evolving data protection position.
Nevertheless, future uses of AI for investigations are significant – both in the initial review work and in the response to what those investigations find. For instance:
- using AI to evaluate AI algorithms being used by those who are subject to investigations. For instance, AI could be used to assess whether high-frequency trading algorithms work as advertised or whether insurance algorithms are discriminatory;
- using AI to join up and facilitate the analysis of unstructured data (e.g. emails, Word documents, chat messages) with structured data; and
- combining document analytics with document automation so that clauses in contracts and policies that do not comply with regulatory changes are flagged and re-drafting suggestions provided automatically. Using AI in this way could enable corporates to seek legal advice on compliance irregularities earlier, even before they arise or result in an investigation.
Richard Jeens is a partner and a member of the Innovation Committee at Slaughter and May and Natalie Osafo is an associate at Slaughter and May and President of the Junior London Solicitors Litigation Association.
Speech of Lisa Osofsky, Director, at the Cambridge International Symposium on Economic Crime 2018, Jesus College, Cambridge, dated 3 September 2018: https://www.sfo.gov.uk/2018/09/03/lisa-osofsky-making-the-uk-a-high-risk-country-for-fraud-bribery-and-corruption/
Speech of Camilla de Silva at ABC Minds Financial Services, dated 16 March 2018: https://www.sfo.gov.uk/2018/03/16/camilla-de-silva-at-abc-minds-financial-services/
Article 29 Data Protection Working Party ‘Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679’, Working Paper number 215 (revision 1) and the Information Commissioner’s Office guide on ‘Big data, artificial intelligence, machine learning and data protection’ (version 2.2).