Wolters Kluwer’s Human Solution to GenAI Inaccuracy

Wolters Kluwer in Australia has a novel solution to genAI hallucinations and errors: don’t entirely trust it on its own. Instead, the company is providing ‘editorially curated question and answer results’, which are ‘genAI-enabled’, i.e. there is a ‘human in the loop’ when it comes to knowledge retrieval.

Or, as the company’s Asia-Pacific team explained: ‘Wolters Kluwer now offers an enhanced search experience with the Questions and Answers feature in CCH iKnowConnect. Combining the power of generative AI and deep domain expertise, the solution can harness an extensive AI-generated Questions and Answers repository.

‘Users will be presented with a relevant, editorially curated question and answer result, with answers that have been reviewed by Wolters Kluwer experts, with the goal of ensuring the quality and reliability of the provided answers.

‘For added transparency and depth, each answer includes direct links to the source material within CCH iKnowConnect, empowering users to verify information and explore topics further.’

Now, you could say that this defeats the purpose of using genAI, i.e. that you’re meant to jump in, tap the LLM’s capabilities directly against a body of knowledge, and get your answer there and then – perhaps with some retrieval-augmented generation (RAG) included to ground the answer in source documents and keep it accurate, or at least you hope so.
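To make the contrast concrete, here is a minimal sketch of what that direct, on-the-fly approach typically looks like: retrieve the most relevant passages, then have the model draft an answer grounded in them. Every name and object below is hypothetical and purely for illustration – it is not WK’s actual stack.

```python
# Illustrative sketch of a typical retrieval-augmented generation (RAG) flow.
# All names here are hypothetical -- this is NOT Wolters Kluwer's implementation.

from dataclasses import dataclass


@dataclass
class Passage:
    source_id: str  # e.g. a reference back to a document in the knowledge base
    text: str


def retrieve(query: str, index, top_k: int = 5) -> list[Passage]:
    """Pull the most relevant passages from the knowledge base.
    (Vector search, keyword search, or a hybrid -- details vary by vendor;
    'index' is an assumed object with a .search() method.)"""
    return index.search(query, limit=top_k)


def answer_with_rag(query: str, index, llm) -> str:
    """Generate an answer grounded in retrieved passages, citing its sources."""
    passages = retrieve(query, index)
    context = "\n\n".join(f"[{p.source_id}] {p.text}" for p in passages)
    prompt = (
        "Answer the question using only the passages below, and cite the "
        f"source IDs you relied on.\n\nPassages:\n{context}\n\nQuestion: {query}"
    )
    # The model drafts the answer at query time -- no human reviews it first.
    return llm.complete(prompt)
```

The key point of the sketch is the last line: in the standard RAG pattern, the generated answer goes straight to the user, with grounding in retrieved sources as the main safeguard.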

It appears, though, that WK has decided not to leave anything to chance here. Rather than trusting the genAI to handle the whole thing, it has insisted on a ‘human in the loop’ – in this case a WK subject matter expert, who curates the answers to legal knowledge questions.

Or, as Megan Mulia, Managing Director, TAA APAC, explained: ‘Our priority is to deliver the most accurate and reliable information to our customers possible. By employing GenAI as a tool to support our editorial team, we can enhance efficiency while maintaining the highest standards of quality control through expert review.’

And the key phrase here is ‘employing GenAI as a tool to support our editorial team’.

It’s a very pragmatic approach, albeit one that is perhaps not in tune with how companies such as OpenAI see the use of genAI by knowledge workers developing. It’s also not an approach that works for every use case.

For example, it’s fine for more ‘static’ Q&As, such as ‘How much maternity leave are employees in Australia entitled to?’ That question has a relatively clear answer that will be of use to many people. The answer can be drafted in advance, with genAI perhaps leveraged to improve it, checked by experts, and then provided as a complete Q&A item that can be searched for.
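By way of illustration, that curated model looks less like live generation and more like searching a repository of pre-approved items. A rough sketch, again with invented names and no claim to reflect WK’s actual system:

```python
# Illustrative sketch of an editorially curated Q&A repository.
# Names and structure are assumptions for illustration, NOT WK's actual system.

from dataclasses import dataclass


@dataclass
class CuratedQA:
    question: str
    answer: str              # drafted with genAI assistance, then expert-reviewed
    source_links: list[str]  # links back to the underlying source material
    reviewed_by: str         # the subject matter expert who signed off


def publish(repository: list[CuratedQA], draft: CuratedQA) -> None:
    """Only expert-reviewed items ever reach the searchable repository."""
    if not draft.reviewed_by:
        raise ValueError("Answer has not been reviewed by a subject matter expert")
    repository.append(draft)


def search(repository: list[CuratedQA], query: str) -> list[CuratedQA]:
    """Users search pre-approved Q&As; nothing is generated at query time."""
    terms = query.lower().split()
    return [qa for qa in repository if any(t in qa.question.lower() for t in terms)]
```

The trade-off is exactly the one described here: every answer a user sees has been signed off by an expert, but only questions someone anticipated and curated in advance can be answered.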

But a lawyer searching for related case law on a live client matter needs a very customised response that is relevant just to their search. Plus, they need the answer right here and now – there is no time for a subject matter expert at the vendor to get involved. And there is little chance pre-made Q&As will have covered that query, as it’s very specific to the client’s matter. So you just have to take what the genAI tool gives you.

But, at least in the first example, users of the WK knowledge tool can have a very high level of confidence in the results. However, as noted, this approach does have its limitations.

Interestingly, the global company, which is based in the Netherlands, added an extra statement about ‘Responsible AI’. Here is what it says:

‘Wolters Kluwer adheres to development standards and processes that promote responsibility and accountability for AI systems and their outcomes. The company addresses risk management and issue remediation during design, development and after deployment. In 2023, the company released its AI Principles, which support the development of secure, explainable AI that is rooted in its high-quality, expert-curated, domain content.’