Meet Counself – Addressing Legal AI System Risk & Compliance

Meet Counself, a company focused on helping legal departments and law firms reduce vendor risk and understand legal tech provider compliance issues, including those raised by legal AI solutions.

They have developed a methodology that streamlines the ‘arduous and manual process’ of legal tech risk analysis and compliance. InfiniGlobe, the California-based group behind the platform, was started in 2012, and the Counself platform itself launched in 2017.

The company provides compliance teams, whether in law firms or in-house, with a ‘secure, collaborative, and automated solution’. This includes a library of legal industry best practice forms, questionnaires, documents, and request templates to speed up compliance gathering and monitoring processes.

For example, its Counself Risk Module helps teams to keep up to date with financial and regulatory rules. It also helps with due diligence collection, risk review and monitoring processes. This in turn provides a verifiable audit history for compliance and certifications.
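At its simplest, an audit history of this kind is just an append-only log of compliance events. The sketch below is a hypothetical illustration in Python of that idea; the event fields and function names are assumptions for illustration, not Counself’s actual implementation.

    from datetime import datetime, timezone
    from typing import Dict, List

    # A minimal, append-only audit trail: each compliance event is recorded
    # with a timestamp and never modified, so the history can be verified later.
    audit_log: List[Dict[str, str]] = []

    def record_event(vendor: str, action: str, detail: str) -> None:
        """Append one immutable compliance event to the audit history."""
        audit_log.append({
            "at": datetime.now(timezone.utc).isoformat(),
            "vendor": vendor,
            "action": action,
            "detail": detail,
        })

    record_event("Acme AI", "certification_received", "ISO 27001 uploaded")
    record_event("Acme AI", "risk_review", "Annual review completed, no findings")

    for event in audit_log:
        print(event["at"], event["vendor"], event["action"])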

One thing that comes up again and again when talking to legal tech vendors, especially when it involves data moving off-site in any way – which is very often the case with legal AI systems – is security, risk and compliance.

Clients are concerned, but the protocols to handle this issue are not always in place. This slows things down. And that is bad for the vendors of AI systems, bad for the law firms, and bad for the clients, who in fact just want the benefits of the technology – but realise they also need to cover off any risk issues beforehand.

The recent launch of Reynen Court, a platform that provides third party legal tech and AI solutions to the market, and also helps address security and risk issues as part of its offering, is a sign of the times.

Clearly then, this area of vendor risk mitigation is a growth market. So, when Artificial Lawyer met up with Counself by chance recently, it seemed a good moment to ask some questions on this theme.

– Do you see legal AI systems as a risk to lawyers? 

We help clients manage the various risks involved when outsourcing work to vendors and outside counsel. These risk considerations also apply when a company uses AI technology from a vendor and incorporates it into its services or operations.

For instance, a particular risk associated with AI systems that should be considered is the veracity of AI results: artificial intelligence algorithms can sometimes generate biased or unexpected outcomes, which have the potential to damage corporate reputations and can be difficult for humans to explain.

In any situation in which there is responsibility, there is risk, and delegating uniquely intelligent tasks to AI systems comes with a proportionate risk that should be taken into account.

– How can lawyers use your system to help manage AI risk?

Counself helps Legal and Compliance Departments perform thorough and pointed due diligence assessments and monitoring of all vendors, including technology providers such as those who utilize AI, to make sure they are compliant with information handling, safeguarding, and security standards, protecting client data and reputations. AI is a valuable tool, but it isn’t an exception to security.

Just like anything else, AI systems have to be taken into consideration during risk assessments and when audited during regulatory compliance reviews.

On the other side, if a lawyer or firm uses AI technology internally and takes on a project with a client that doesn’t use that technology, they can use Counself to submit all the appropriate documentation to the client, identify risks, and pass the due diligence assessment to become an approved vendor.
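To make the due diligence idea concrete, here is a minimal sketch of how such an assessment could be represented and checked programmatically. This is a hypothetical illustration in Python; the data model, field names, and pass criteria are all assumptions, not Counself’s actual design.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class DueDiligenceItem:
        """One requirement in a vendor due diligence assessment."""
        question: str           # e.g. "Is client data encrypted at rest?"
        required: bool = True   # must be answered for the vendor to pass
        answer: Optional[str] = None
        evidence: List[str] = field(default_factory=list)  # e.g. report filenames

    def assessment_gaps(items: List[DueDiligenceItem]) -> List[str]:
        """Return the required questions a vendor has not yet answered."""
        return [i.question for i in items if i.required and not i.answer]

    items = [
        DueDiligenceItem("Is client data encrypted at rest?",
                         answer="Yes, AES-256", evidence=["SOC2-report.pdf"]),
        DueDiligenceItem("Do you subcontract data processing?"),
    ]
    print(assessment_gaps(items))  # ['Do you subcontract data processing?']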

– Do you work directly with insurance companies? 

We have worked directly with several large insurance companies, but currently have no active projects with them.

Artificial Lawyer also had to ask: are lawyers actually LESS risky if they use AI? To which the company’s spokesperson replied: ‘Hard to answer. Pass.’

So, there you go. Is this the answer to all risk problems related to legal tech and AI systems? Not exactly, but it at least gives users of such systems some clarity, along with a package of responses that meets their obligations and can be shared with clients and other stakeholders.

For example, as the company says:

‘Responding to RFx and security assessments clearly and thoroughly is a must to retain client trust. With Counself, you can create a library of answers, certifications, and documents to be used by anybody in your firm who participates in the response process.’

And,

‘There aren’t enough hours in the day to respond to multiple client requests, but instead of wasting your time looking through emails or prior responses for answers to the same questions, Counself helps you to pre-populate clients’ questionnaire templates with your firm’s pre-approved answers.’

And finally,

‘Maintain a centralised database of certifications and contracts. Set automated alerts and reminders in advance of expiration and renewal, and communicate directly with clients regarding specific documentation without tons of emails, calls, and questions.’
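Taken together, those three quotes describe an answer library, questionnaire pre-population, and expiry alerts. The sketch below shows, in plain Python, one way such features could work. Everything here, from the data structures to the reminder window, is an assumption made for illustration; it is not Counself’s actual implementation.

    from datetime import date, timedelta
    from typing import Dict, List

    # A pre-approved answer library, keyed by a normalised question string.
    ANSWER_LIBRARY: Dict[str, str] = {
        "do you hold iso 27001 certification?": "Yes, certified since 2016.",
        "where is client data stored?": "In US-based data centres only.",
    }

    def prefill(questionnaire: List[str]) -> Dict[str, str]:
        """Pre-populate a client questionnaire from the firm's approved answers."""
        return {q: ANSWER_LIBRARY.get(q.strip().lower(), "[needs manual answer]")
                for q in questionnaire}

    # Certifications with expiry dates, used for automated renewal reminders.
    certifications = {"ISO 27001": date(2019, 6, 30),
                      "SOC 2 Type II": date(2019, 3, 1)}

    def expiring_soon(certs: Dict[str, date], today: date,
                      window_days: int = 60) -> List[str]:
        """Flag certifications that expire within the reminder window."""
        cutoff = today + timedelta(days=window_days)
        return [name for name, expiry in certs.items() if expiry <= cutoff]

    print(prefill(["Where is client data stored?"]))
    print(expiring_soon(certifications, today=date(2019, 2, 1)))  # ['SOC 2 Type II']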

Although it’s tempting to ignore risk and compliance because, well, frankly it’s not a fun area for most people, it still has to be addressed. Anything that helps speed up the compliance process has got to be good for the legal tech and AI sector, as it will accelerate user uptake of new technology.

2 Comments

  1. Well, this can also be seen as snake oil. Having a firm try to approve and audit a model’s selection, and the path to that selection, when it uses deep learning and hidden layers is not something that is going to be productive. Even with SVMs and linear regression it’s hard to be deterministic. So it kind of depends on the algorithm used, and on whether they have access to all the data AND all the parameters. And no legal AI company is going to give that, as it’s our years of hard work.
    So I would then ask: how will they work out bias? How will they audit a system’s outcome? Very hard, and in some cases with DL, impossible.

    Kevin Gidney, CTO and Founder, Seal Software

    • Regulatory bodies such as the OCC and FFIEC, and supervisory guidance such as SR 13-19 / CA 13-21, provide guidance on and requirements for Managing Outsourcing Risk, while Counself Risk™ is the tool companies or firms can use to streamline their vendor risk processes. We don’t take part in setting the guidelines or requirements against which vendors are evaluated; these are usually either internal or regulatory. That being said, I recognize your points: walking the line between information security and intellectual property can be a challenge for software companies. I’d like to share my two cents.

      Normally we develop trust in people and things over time, when they predictably and reliably behave as we expect them to. AI is not an exception to this practice and has successfully built consumer trust in many areas, such as weather forecasting, navigation, and recommended songs or movies.

      AI can also fail to suggest the best option: it is, after all, constantly learning. Netflix recommends movies based on a complex algorithm and many data points, but if its algorithm “fails” and I don’t like the suggestion, I can just watch something else.

      Making the decision to accept “the risk” of using AI technology comes down to an assessment of what can be gained, what is at stake, what the impact could be, and how risk can be mitigated. In a simple example: planning an important dinner solely based on Siri’s suggestion could be risky, so to mitigate the risk, before making your final decision, you may decide to read a few reviews and call the restaurant yourself. Either way, you will have to deal with the food poisoning, not Siri.

      In the modern B2B world, the relationship between client and vendor is more of a partnership, and wide-reaching regulations such as the GDPR (Article 22, on automated decision-making, including profiling) put everyone in the same boat. I think future technologies handling data should be designed not only to think intelligently, but also so that they can be securely explained, to ensure full transparency.
