The law firm referral network Lex Mundi, in conjunction with US-based Morrison & Foerster (a Lex Mundi member), Microsoft Europe and Cambrian Futures, has produced a new AI Readiness Checklist for Corporate Legal Functions.
The aim of this practical checklist, they explained, is to help GCs identify key issues when navigating the integration of AI within both the business and the regulatory environment.
The AI readiness checklist is segmented into five key areas, with targeted questions addressing significant considerations for GCs, from preparing for cyber/data breaches through to testing machine learning models. The areas include:
- Test, Audit, and Evolve
- Institution Building
Lex Mundi’s Head of North America, Global Markets, Jenny Karlsson, who led the project, said: ‘Without question AI is an essential part of how businesses will continue to adapt in response to the post-Coronavirus economic conditions, particularly as they structure new types of relationships with employees, suppliers, customers, investors, regulatory bodies, the public, and other key stakeholders.
‘The exogenous shock of COVID-19 therefore makes this checklist timelier than ever for business leaders, c-suites, and General Counsel.’
More broadly, is this guide of any use? It’s mainly aimed at the kinds of tools corporates may use to do things like filter job applications (which have caused a ruckus in New York because of fears over bias), rather than at assessing how to use legal AI doc review tools for M&A projects, for example.
Much of it looks at the legal responsibility around AI. That said (and we don’t have room to get into that debate now), many of the tools used by corporations to sort data are not really machine learning systems but fixed algorithms, which is not really AI if the term is to have any meaning.
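To make that distinction concrete, here is a minimal, purely illustrative sketch: in a fixed algorithm the decision rule is written by hand and never changes unless a developer edits it, whereas in a (toy) learned system the decision rule is derived from past data. All function names, thresholds, and data below are invented for the example, not taken from any real screening tool.

```python
def rule_based_screen(years_experience, has_degree):
    """A fixed algorithm: the decision logic is hard-coded.
    It behaves identically no matter what data it has seen."""
    return years_experience >= 3 and has_degree


def fit_threshold(examples):
    """A toy 'learning' step: derive a cutoff from labelled history.
    examples: list of (years_experience, was_hired) pairs.
    Returns the lowest experience level among past hires."""
    hired = sorted(x for x, label in examples if label)
    return hired[0] if hired else float("inf")


# Hypothetical historical data: the decision boundary comes from
# this data, not from a rule anyone wrote down.
past_data = [(1, False), (2, False), (4, True), (6, True)]
learned_cutoff = fit_threshold(past_data)


def learned_screen(years_experience):
    """The learned filter: change the data and the rule changes too."""
    return years_experience >= learned_cutoff
```

The practical point for governance is that the second kind of system inherits whatever patterns (including biases) sit in its training data, which is why checklists like this one single out machine learning for extra scrutiny.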
An example of part of the checklist is below:
‘With respect to [your] AI Legal Committee [in your business], have members received training on the application of law to AI in relevant business units, in order to advise the organization accordingly?
Some areas of law to be covered might include:
- Employment laws for AI tools in HR processes, such as background screening
- Anti-discrimination laws for automated processes with respect to employees, suppliers and consumers
- Tort liability for new product development, such as autonomous vehicles
- Disclosure laws for bots
- Data privacy for workplace / employee monitoring, such as GPS
- Professional misconduct for permitting AI bias in the practice of law
- Self-identification of automated software agents
- Intellectual property law in connection with co-development of new products and data
- Does the Board of Directors have an explicit mandate (statutory or otherwise) to oversee AI ethics and governance across the organization?
- Has the Board of Directors defined AI and data ethics for their organization?
- Are there members of the Board of Directors with competence to understand the potential issues that arise with the use of AI across products, services, operations, and relationships with customers, suppliers, employees and other stakeholders?
- Does the Board of Directors have training and guidance on how to deal with cyber and data breaches, including a cyber breach preparedness checklist or action plan?
- Does the Board of Directors have a protocol for obtaining reporting on the AI and data health of the organization?
All in all, useful stuff for GCs. And as AL explored last week, not all automation (nor, for that matter, all use of AI tools) is necessarily a good thing, especially if people ignore the ethical implications of taking humans out of the loop in key processes. So, a helpful resource.
You can download it here.