In what may be a first for the legal AI industry, Legal Robot has announced that it has committed to making its use of algorithms transparent and accountable.
As founder Dan Rubins explains, the move follows recommendations from the US-based Association for Computing Machinery (ACM) that AI companies ensure their algorithms and their impact are well understood and transparent to users. The aim is to remove potential bias, prevent errors being built into decision-making systems, and allay fears among users and the wider public.
The following is a statement Dan Rubins published this week, along with a Q&A with Artificial Lawyer conducted today, plus the new ACM guidelines.
We at Legal Robot were glad to see the Association for Computing Machinery (ACM) release a statement on Algorithmic Transparency and Accountability. As a young AI company, we feel very strongly about this and believe it will become increasingly important as algorithms grow in their influence. ACM’s statement outlines seven principles: Awareness; Access and Redress; Accountability; Explanation; Data Provenance; Auditability; and Validation & Testing.
Algorithms of all kinds already impact our daily lives. We all know that credit scores are calculated with an algorithm, and thanks to recent news cycles, many more people know that social media feeds use algorithms to display posts that you will have a stronger emotional response to.
However, as algorithms become more advanced and more complicated, they can be harder to explain. Is it easy to explain (without using calculus) how Legal Robot uses backpropagation to train its neural networks? No, but we don’t need to get all high and mighty about it either.
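Still, the gist can be shown without a single derivative on the page. The toy sketch below trains a tiny two-layer network with backpropagation on a textbook problem; it is a generic illustration of the technique in plain NumPy, not Legal Robot’s actual model or data.

```python
# Toy illustration only: a tiny two-layer network learning XOR with plain NumPy.
# This is a generic sketch of backpropagation, not Legal Robot's actual system.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # targets (XOR)

W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))  # input -> hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))  # hidden -> output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5  # learning rate

for step in range(10000):
    # Forward pass: compute the network's current predictions.
    h = sigmoid(X @ W1 + b1)
    pred = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the prediction error back through each layer.
    grad_out = (pred - y) * pred * (1 - pred)    # output-layer error signal
    grad_hid = (grad_out @ W2.T) * h * (1 - h)   # hidden-layer error signal

    # Gradient descent: nudge weights and biases to reduce the error.
    W2 -= lr * (h.T @ grad_out)
    b2 -= lr * grad_out.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ grad_hid)
    b1 -= lr * grad_hid.sum(axis=0, keepdims=True)

print(pred.round(2))  # predictions should move toward [[0], [1], [1], [0]]
```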
For us, committing to transparency and accountability in the algorithms we create means providing explanations not only of the algorithms themselves but also of their inputs, building checks and balances on those algorithms, and offering a way to question the results and make things right again.
Legal Robot now publicly makes these commitments, which we believe uphold ACM’s well-crafted principles.
Furthermore, in each quarter’s transparency report, we will report on our progress towards implementation.
This will not be an overnight change; in fact, we are committing to a whole lot of effort here, but this is important. We will also share our experiences with other companies and institutions so they may do the same.
Finally, we call upon our fellow machine learning and legal tech companies to think about these principles and take action of their own.
There now follows an interview with Dan Rubins by Artificial Lawyer on this subject.
How do we risk creating bias in contract review?
Well, contract review isn’t the only thing on our roadmap. Even in that limited scope, Legal Robot uses machine learning algorithms to translate legal language into simpler text. While most native English speakers don’t think about this daily, many other languages have gendered nouns. For example, if we’re simplifying or translating a legal text into a language like Spanish, we could accidentally introduce or reinforce gender bias. Even the default translation of ‘lawyer’ is male-gendered.
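To make that concrete, here is a deliberately simplified sketch of how a translation step can silently default to the masculine form unless it is built to surface both. The lookup table below is a hypothetical stand-in, not the output of any real translation model.

```python
# Toy illustration of gender defaults in translation; the dictionary below is
# a hypothetical stand-in, not output from any real machine-translation model.
TOY_EN_TO_ES = {
    "lawyer": {"masculine": "abogado", "feminine": "abogada"},
    "judge":  {"masculine": "juez",    "feminine": "jueza"},
}

def translate_naive(word: str) -> str:
    """Mimics a system that silently falls back to the masculine form."""
    return TOY_EN_TO_ES[word]["masculine"]

def translate_transparent(word: str) -> str:
    """Surfaces both forms instead of hiding the choice from the user."""
    forms = TOY_EN_TO_ES[word]
    return f'{forms["masculine"]} / {forms["feminine"]}'

print(translate_naive("lawyer"))        # abogado  (bias introduced silently)
print(translate_transparent("lawyer"))  # abogado / abogada
```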
Perhaps a more relevant example: consider an investor evaluating potential real estate deals. If an AI company sells that investor contract review and analysis software with an opaque “Investability” rating, and the underlying model includes socio-economic factors, those effects can easily be magnified. But there are myriad other situations with less obvious rules and more sinister impacts. The simultaneously wonderful and terrible thing about AI: it’s an amplifier.
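A toy sketch of that amplifier effect, with entirely made-up feature names and weights, could look like the following: an opaque score that quietly leans on a socio-economic proxy shows a gap between neighborhoods that has nothing to do with deal quality.

```python
# Toy illustration of how an opaque "Investability" score can amplify a
# socio-economic proxy; every feature name and weight here is hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
neighborhood = rng.integers(0, 2, size=n)        # two areas, labelled 0 and 1
# A socio-economic feature that happens to correlate with the neighborhood.
median_income = 40_000 + 30_000 * neighborhood + rng.normal(0, 5_000, n)
property_condition = rng.uniform(0, 1, size=n)   # genuinely deal-relevant

# An opaque score that quietly leans on the socio-economic proxy.
investability = 0.7 * (median_income / 70_000) + 0.3 * property_condition

for area in (0, 1):
    scores = investability[neighborhood == area]
    print(f"area {area}: mean Investability {scores.mean():.2f}")
# The gap between the two areas reflects income, not deal quality, and an
# unexplained rating gives the investor no way to see that.
```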
Latent biases that exist in our daily lives may seem trivial, but can become extremely damaging when magnified and ingrained into society’s institutions. What we’re doing here is drawing a line in the sand for our company – we fully expect things will go wrong, but putting these principles in place will help us make things right when they do.
What inspired this move?
As the founder of an AI company, I’ve been conscious of the increasing effect of algorithms for some time, but two recent events really demanded action. First, an automated system used by a state agency in Michigan wrongfully accused unemployment claimants of fraud, with roughly 93% of its automated fraud determinations later found to be erroneous (see here).
More recently, established tech companies have been under fire for promoting social media stories that are optimized for an emotional response (emotional response -> higher engagement -> heavier weight in feeds -> more views -> ad revenue). News organizations have known this for ages.
So, without passing judgment on the large tech companies, it is clearly easier to course-correct a small company, and we decided it was important to set the tone of our company before growing too large. ACM put a lot of effort into the principles they published yesterday, and they closely mirror what we’ve been thinking about for a while.
What about IP risk?
As a startup, you’re going to have a rough time if your main IP is just secret algorithms; newer and better algorithms are constantly being developed and published by researchers. For us, it’s a non-issue to tell people how we get to a particular decision or even specifics about an algorithm (hey everybody, we use LSTM neural nets!). I think a larger part of these principles is in things like having procedures to trace where training data comes from, testing it for whatever biases we (and other smart people) can come up with, and then publishing the results.
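As a rough sketch of what that might look like in practice, the snippet below keeps a provenance record for each training dataset and runs a simple spread check on model outcomes; the records, field names, and threshold are invented for illustration and are not Legal Robot’s actual tooling.

```python
# Minimal sketch of a data-provenance record plus a simple bias check; the
# records, fields, and thresholds are invented for illustration only.
from dataclasses import dataclass, asdict
import json

@dataclass
class ProvenanceRecord:
    dataset: str
    source: str          # where the training data came from
    collected: str       # when it was gathered
    known_caveats: str   # biases or gaps already identified

records = [
    ProvenanceRecord(
        dataset="contracts-v1",
        source="publicly filed contracts",
        collected="2016-11",
        known_caveats="skews toward large US companies",
    ),
]

def outcomes_within_tolerance(group_scores: dict, tolerance: float = 0.1) -> bool:
    """Flag the model if average scores differ across groups by more than `tolerance`."""
    spread = max(group_scores.values()) - min(group_scores.values())
    return spread <= tolerance

# Hypothetical audit results of the kind that could go into a quarterly
# transparency report.
report = {
    "provenance": [asdict(r) for r in records],
    "bias_check_passed": outcomes_within_tolerance({"group_a": 0.62, "group_b": 0.58}),
}
print(json.dumps(report, indent=2))
```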
Will other AI companies do the same?
I certainly hope so… and we’re here to help. The truth is, it’s going to be quite a lot of work and for many companies it’s just not going to be a priority unless their competitors are doing it, or their customers or regulators demand it.
—-
Statement on Algorithmic Transparency and Accountability [Abridged] by ACM
….An algorithm is a self-contained step-by-step set of operations that computers and other ‘smart’ devices carry out to perform calculation, data processing, and automated reasoning tasks. Increasingly, algorithms implement institutional decision-making based on analytics, which involves the discovery, interpretation, and communication of meaningful patterns in data…
Computational models can be distorted as a result of biases contained in their input data and/or their algorithms. Decisions made by predictive algorithms can be opaque because of many factors, including technical (the algorithm may not lend itself to easy explanation), economic (the cost of providing transparency may be excessive, including the compromise of trade secrets), and social (revealing input may violate privacy expectations). Even well-engineered computer systems can result in unexplained outcomes or errors, either because they contain bugs or because the conditions of their use change, invalidating assumptions on which the original analytics were based.
The use of algorithms for automated decision-making about individuals can result in harmful discrimination. Policymakers should hold institutions using analytics to the same standards as institutions where humans have traditionally made decisions, and developers should plan and architect analytical systems to adhere to those standards when algorithms are used to make automated decisions or as input to decisions made by people.
This set of principles, consistent with the ACM Code of Ethics, is intended to support the benefits of algorithmic decision-making while addressing these concerns:
Principles for Algorithmic Transparency and Accountability
1. Awareness: Owners, designers, builders, users, and other stakeholders of analytic systems should be aware of the possible biases involved in their design, implementation, and use and the potential harm that biases can cause to individuals and society.
2. Access and redress: Regulators should encourage the adoption of mechanisms that enable questioning and redress for individuals and groups that are adversely affected by algorithmically informed decisions.
3. Accountability: Institutions should be held responsible for decisions made by the algorithms that they use, even if it is not feasible to explain in detail how the algorithms produce their results.
4. Explanation: Systems and institutions that use algorithmic decision-making are encouraged to produce explanations regarding both the procedures followed by the algorithm and the specific decisions that are made. This is particularly important in public policy contexts.
5. Data Provenance: A description of the way in which the training data was collected should be maintained by the builders of the algorithms, accompanied by an exploration of the potential biases induced by the human or algorithmic data-gathering process. Public scrutiny of the data provides maximum opportunity for corrections. However, concerns over privacy, protecting trade secrets, or revelation of analytics that might allow malicious actors to game the system can justify restricting access to qualified and authorized individuals.
6. Auditability: Models, algorithms, data, and decisions should be recorded so that they can be audited in cases where harm is suspected.
7. Validation and Testing: Institutions should use rigorous methods to validate their models and document those methods and results. In particular, they should routinely perform tests to assess and determine whether the model generates discriminatory harm. Institutions are encouraged to make the results of such tests public.