Guest Post, Dan Rubins: Can AI Systems Really Behave Badly?

‘AI behaving badly’ is a growing theme that is hard to avoid if one reads the general press. But is this all nonsense? Are people simply missing the point that this is humans misusing a tool, not the AI software itself being at fault?

Artificial Lawyer caught up with Dan Rubins, founder of legal AI doc review company Legal Robot, to ask him about his involvement with the US Public Policy Council of the Association for Computing Machinery (USACM), which is developing new policies for ‘algorithmic transparency’ that, it is hoped, will ensure AI is not misused and its output not misinterpreted.


Thanks, Dan, for sharing your views. Let’s go straight to the big question: Can AI really be ‘racist’, ‘sexist’ etc? Or are we getting confused? Isn’t it just the way humans pick the dataset, or create simplistic algorithms, that creates the problems?

Dan Rubins, Founder, Legal Robot

Right, in the way we understand AI today, it really can’t be racist or sexist on its own.

Those are human qualities applied to human value systems. In that sense, the crowd that says “AI is just math” is correct.

However, that argument breaks down rather quickly against the ethical challenges of our very complex and very real world.

The problem is that when humans build these systems, many of our innate biases creep into their designs – either through explicit selection by creators working toward a specific outcome, or through implicit inclusion via poor experiment design, dataset creation, model creation, or any number of other inputs.

I think of it this way: much of today’s AI is “just math” in the same way that the law is “just paper.” The design of the institutions and processes around the basic mechanism of action, be it math or paper, matters quite a lot.
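To make the ‘implicit inclusion’ point above concrete, here is a minimal, self-contained sketch in Python – synthetic data, hypothetical feature names, and emphatically not Legal Robot’s code – showing how bias in a training set becomes bias in a model, even when the protected attribute is never given to the model as an input:

```python
# A minimal sketch (synthetic data, hypothetical features) of how dataset
# bias becomes model bias. Nothing here is Legal Robot's actual code.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical protected attribute (e.g., group 0 vs. group 1).
group = rng.integers(0, 2, size=n)

# "zip_code_score" is a proxy feature: correlated with group membership,
# not with the true outcome we care about.
zip_code_score = group + rng.normal(0, 0.5, size=n)

# The true outcome is independent of group...
true_merit = rng.normal(0, 1, size=n)
y = (true_merit > 0).astype(int)

# ...but suppose historical labels were skewed against group 1, as often
# happens when training data encodes past human decisions.
historical_label = y.copy()
flip = (group == 1) & (rng.random(n) < 0.3)
historical_label[flip] = 0

# Train only on observable features: the proxy plus a noisy merit signal.
# The protected attribute itself is never an input.
X = np.column_stack([zip_code_score, true_merit + rng.normal(0, 1, size=n)])
model = LogisticRegression().fit(X, historical_label)

# The model now penalizes the proxy feature, and therefore the group,
# even though "group" never appeared in X.
pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: positive rate = {pred[group == g].mean():.2f}")
```

Nothing in the algorithm itself is ‘racist’; the skewed historical labels and the proxy feature do the damage, which is exactly why dataset creation and experiment design deserve as much scrutiny as the math.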

Isn’t it unfair to say ‘it’s the AI’s fault’?

Perhaps; I would rather say it’s the AI creator’s fault for not designing thoughtfully enough, for not researching adverse scenarios, for omitting sufficient controls, for letting human biases slip in, and so on.

Also, isn’t there a big gap between data-driven AI looking for patterns and someone simply hand-crafting an algorithm to sort data however they decided it should be sorted?

Yes, there is a huge gap, but even entirely data-driven systems can still cause great harm when constructed poorly.

In the last week alone, we’ve seen two shocking examples of how algorithms can go wrong – and yet, in neither case was the algorithm itself the focus. First, we had an incredibly damaging failure of governance and basic security hygiene at Equifax.

Second, we have a less reported but slightly scarier failure of imagination in computer vision research, where researchers published work on an algorithm that could allegedly determine sexual orientation from a photo; it’s really surprising how many problems there are with that paper, beyond the obvious one: we shouldn’t do this at all.

What is Legal Robot, as an AI company, doing about this?

We started down the path of Algorithmic Transparency in 2016, when there were some very visible problems with automated investigations of unemployment insurance fraud in Michigan. Soon after, there were reports of problems with algorithmic bail decisions.

As founders of a company building a lot of algorithms, we said to ourselves: we need to be sure we’re on the right side of history. So we worked to define our values and the kind of company we wanted to see in the world. I really don’t see many AI companies making that effort to define their values, much less trying to guide the culture of their company and of the industry they are in.

It’s great how easy it is to build incredibly powerful algorithms, but there are downsides. Unlike biomedical research, this industry doesn’t have Institutional Review Boards, or the long-standing institutions of ethical oversight that most professions have. As developers, we can learn a lot from the institutions of science. Reproducibility is another key pillar.

Our methods as humans evolve when we enable auditing, test our premises, and question the reasoning that leads us to a decision. It seems a clearly rational economic decision to build a company that uses experimentation to evolve faster than others. For these reasons, we are investing in Algorithmic Transparency.
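As an illustration of what ‘enabling auditing’ can mean in practice, here is a minimal sketch of an append-only decision log; the schema, function names, and example values are hypothetical, not a description of Legal Robot’s actual system:

```python
# A minimal sketch of one ingredient of algorithmic transparency: an
# append-only audit log recording enough about each automated decision
# that it can be reproduced and questioned later. Hypothetical schema.
import hashlib
import json
import time

AUDIT_LOG = "decisions.log"  # in production: tamper-evident storage

def audit_decision(model_id: str, model_version: str,
                   features: dict, decision: str, score: float) -> None:
    """Record an automated decision with the context needed to audit it."""
    record = {
        "timestamp": time.time(),
        "model_id": model_id,
        "model_version": model_version,  # ties the decision to exact code/weights
        "features": features,            # the inputs actually used
        "input_hash": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
        "decision": decision,
        "score": score,
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")

# Example usage: log a hypothetical contract-review decision.
audit_decision(
    model_id="clause-risk-classifier",
    model_version="2017.09.1",
    features={"clause_type": "indemnification", "length_tokens": 182},
    decision="flag_for_review",
    score=0.87,
)
```

The design choice that matters is capturing the model version and the exact inputs alongside each output, so that any individual decision can be reproduced, audited, and challenged after the fact.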