Clearbrief Launches Anti-Hallucination Feature

Litigation platform Clearbrief has launched a new feature to help lawyers navigate the AI hallucination minefield. The Cite Check Report lists ‘all factual and legal citations identified in [a] Word doc and flags a range of potential issues’. It is aimed at removing genAI-created hallucinations from legal work, as well as improving accuracy more generally.

Flagged issues include missing cases or sources, formatting errors, and low semantic scores indicating that a source does not support the writer’s assertion, the company explained. The report also provides hyperlinks to each citation ‘that allow the person signing the brief to review each flagged issue in the context of the filing’.

Interestingly, Clearbrief highlighted that it uses ‘classic AI rather than generative’, i.e. machine learning and NLP rather than LLMs, to do this checking.
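Clearbrief has not disclosed how its semantic scores are computed. As a minimal illustration of the general idea only, a score between a writer’s assertion and the cited passage can be sketched with bag-of-words cosine similarity (a production NLP system would likely use trained embeddings instead); the function name and threshold here are hypothetical:

```python
import math
import re
from collections import Counter


def _vector(text: str) -> Counter:
    """Tokenise into lowercase words and count term frequencies."""
    return Counter(re.findall(r"[a-z']+", text.lower()))


def semantic_score(assertion: str, source_passage: str) -> float:
    """Cosine similarity between bag-of-words vectors, in [0, 1].

    A low score suggests the cited passage may not support the
    assertion, so the citation is flagged for human review.
    """
    a, b = _vector(assertion), _vector(source_passage)
    overlap = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return overlap / norm if norm else 0.0


# Illustrative review threshold: scores below it get flagged.
REVIEW_THRESHOLD = 0.3
```

On this sketch, an assertion that closely paraphrases its source scores near 1, while an unrelated (or fabricated) citation scores near 0 and would be flagged.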

As the company notes – and as the market knows well – plenty of lawyers have run into problems by using fake, or hallucinated, cases or other facts in their work after tapping LLMs.

Jacqueline Schafer, Founder and CEO of Clearbrief, commented: ‘Partners are being sanctioned and suffering reputational damage for citation errors they didn’t personally make. We built the Cite Check Report to give them what courts are demanding: documented proof that they satisfied their ethical obligations before signing that pleading.’

Thomson Reuters’ Westlaw launched a broadly similar capability back in September, at least in relation to case law – see AL article here – which not only provides greater clarity around citations during one’s own work on the platform, but can also surface citations in others’ work product that cannot be verified.

