Legal AI Firm Evisort Responds to ‘Database Exposure’ Security Claims

In an unusual series of events, an anonymous tipster claimed to have found an ‘exposed database’ belonging to US-based legal AI company Evisort which contained some ‘customer data’, and then passed their findings on to news site TechCrunch. Here, Evisort co-founder and CEO Jerry Ting explains to Artificial Lawyer what has happened.

Statement to Artificial Lawyer:

Earlier today we learned that some information Evisort stored in an Elasticsearch database that is part of our internal development environment may have been accessible to external users. The database is not part of our production environment which we use for clients, but rather it is used internally by our engineers for development and testing purposes.

Almost all of the information in the database was test data not associated with any of our customers, and we have no reason to believe that any information was accessed by malicious persons or misused.

Upon learning of this we promptly commenced an internal investigation and made certain that the database was no longer accessible. We have retained a top outside forensic firm to assist in our investigation into this matter.

As part of our internal investigation, we are actively reviewing the documents contained in the development database, along with any available logging data, to determine what information may have been affected.

Although our investigation is ongoing, the vast majority of information contained in the development database was placeholder or benign information used for testing purposes, including publicly available contracts obtained from sources like the SEC Edgar database.

However, it appears that there may be a very small handful of legitimate documents in this environment, many of them our own documents that are immaterial like NDAs or proposals. We are happy to schedule a call to discuss the files that we have identified to date and will be proactively reaching out to customers who we believe may have been affected. We have no reason to believe there was a malicious data breach.

We take the protection of our customers’ information seriously and sincerely regret that this occurred as well as any concern it may cause to our customers.

We sincerely apologize for this and will be making significant changes in both process and personnel to ensure that we prioritize security as the number one most important part of Evisort. We will provide additional information once it is available from the investigation.

What does all of this mean?

This is highly unusual in several ways. If you read the TechCrunch (TC) piece, you can see that the anonymous tipster appears to have gone first to the US news site. Why do that? Why not go to the AI company and help them? And how did the tipster know this internal database even existed, or where to find it? Evisort is still a relatively small company and legal AI doc review remains a niche area.

Naturally, this is not going to be good for Evisort, which recently bagged $4.5m in funding to help grow its platform (see story here). But does the exposure of a single database used for internal development create a major challenge? It’s hard to say right now, but legal AI companies have been at great pains to show clients that they are guarding client data, especially given that many of them are working in areas such as M&A due diligence for listed companies.

However, as Ting says in his statement, the material that relates to clients was ‘a very small handful of legitimate documents in this environment, many of them our own documents that are immaterial like NDAs or proposals’. In other words, this does not appear to have been the crown jewels of their clients’ data.

Clearly, even an internal development database should not have been visible to anyone outside the company, but it would be overly harsh to see this as an exposure of large volumes of truly sensitive client data, at least as far as can be seen right now.

Artificial Lawyer asked Nick Watson, managing director of virtual data room Ruby Datum, about security issues of this type, something his company spends a lot of time guarding against, and he said the following: ‘Unfortunately many security attacks aren’t targeted attempts, rather they are scripts/bots that scan public IP addresses for known vulnerabilities such as an open MySQL database, common code exploits, or in this case an open ElasticSearch database. We see these attacks on our systems almost daily.’
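For readers curious about what such a scan actually looks for, below is a minimal illustrative sketch in Python (using the requests library; the host name is hypothetical and has nothing to do with Evisort or Ruby Datum) of the kind of unauthenticated probe a bot sends to the default Elasticsearch port:

    import requests

    # Hypothetical host for illustration only - not a real Evisort or Ruby Datum address.
    HOST = "dev-server.example.com"

    try:
        # Elasticsearch listens on port 9200 by default. An unsecured node will
        # return its cluster metadata to any unauthenticated GET request.
        resp = requests.get(f"http://{HOST}:9200", timeout=5)
        if resp.ok and "cluster_name" in resp.text:
            print("Open Elasticsearch node found: it answers without authentication.")
        else:
            print("Port responded, but no open Elasticsearch node detected.")
    except requests.RequestException:
        print("No response on port 9200 (firewalled, closed, or not running).")

Automated scanners simply repeat this sort of probe across large ranges of public IP addresses, which is why even an obscure, internet-facing development database can be discovered quickly once it becomes reachable.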

Artificial Lawyer will keep you updated and bring you news of what the investigation finds.