In this piece written for Artificial Lawyer, Kenneth Grady, the Lean Law Evangelist for US law firm Seyfarth Shaw LLP and an Adjunct Professor and member of the LegalRnD faculty at Michigan State University College of Law, explores where the Internet of Things (IoT) currently stands and how it will change the legal world.
The IoT is going to have an impact on the legal world, just as AI will. The challenges are different, however. For now the expected issues are not so much about how the production of legal work will change. They are instead about how core legal concepts will change, such as what constitutes a ‘legal person’ and who bears responsibility for the actions of machines, in what Grady terms the new ‘hybrid society’. However, this too may have a bearing on AI legal systems that provide clients and lawyers with guidance and advice.
It is the summer of 2016 and Uber is launching its first driverless cars in Pittsburgh. Riders in the community will be able to summon Volvo XC90 sport utility vehicles, which will pick them up and haul them to their destination without a driver, sort of.
While they will be autonomous vehicles, they also will have drivers just in case someone needs to take over the wheel. We all knew this was coming, but I bet that doesn’t make Pittsburgh Uber drivers feel any better.
This shot by Uber also launches another stage in the IoT era for lawyers, especially those in Pittsburgh. Driverless cars are a major step forward, but what will make them special is their ability to communicate with the world around them through IoT. Right now, autonomous cars avoid people, light posts, cars, and the other driving obstacles by using sensors to see them and sophisticated algorithms to avoid them. With the IoT, the car will talk to those obstacles and navigate in part through the conversations.
As you enter the crosswalk in front of an autonomous vehicle, its sensor array captures your image. Computers match what it sees to what it has learned and in an instant it decides ‘person – stop the car’. You wandered into the crosswalk because you were talking on your cell phone and not paying attention to traffic. The ever-vigilant autonomous car saves you, reacting faster than any human driver could.
In the IoT world, the car would have sensed your phone, captured its GPS location, and known that you were walking into the crosswalk. At the same time, the traffic lights would have known what you were doing and warned other cars approaching the intersection about the danger (i.e. you).
The slowdown in traffic would have been communicated by the cars to the traffic control sensors embedded in the pavement. They notified the traffic control computers about the issue in the intersection and the drag on traffic around the intersection, and the computers would have adjusted the timing of lights in surrounding intersections.
The fire engine screaming out of the station nearby would have known about the traffic congestion on its main route and selected an alternative route to avoid the congestion. The GPS systems in cars heading into the area would have gotten an update and adjusted their routes to avoid the problem.
Computers at the traffic control office recorded all these events and, along with data from other similar incidents, updated many algorithms including one that alerts people when they are about to walk into active intersections. In the future, your phone will issue an alarm if you try to cross against the light. And all of this happened because you wanted to talk on your phone and your phone is connected to everything through the IoT.
While this sounds wonderful or creepy, it also marks the point that lawyers must become more than tech comfortable. Product liability, personal injury, privacy, commercial, government, and other lawyers all have roles to play in the IoT story. When an Uber car hits another car, who will be responsible? Who does the personal injury lawyer file suit against? What are the standards? Was too little or too much information shared? Was there a breach of contract somewhere? What regulations were in place? Who gets the ticket?
These and many other questions all raise open legal issues. One group argues that we need a new area of law to address them, while another argues that they simply raise new questions that need to be addressed under existing law. The questions also raise issues that will challenge our global network of laws. Instead of laws being written somewhere and interpreted by humans given a set of facts, laws are embedded into the things in the form of hard-wired or constantly evolving algorithms. We have not established how these laws will work as they or their actions cross jurisdictions.
As autonomous cars roll out, we could be looking at a new era, much like what happened around the beginning of the 20th century when cars were replacing horses. Would there be a new law of cars or were the general laws, ones we had developed over centuries and applied to horses, sufficient?
Years ago, Frank H. Easterbrook, then and now a judge on the U.S. Court of Appeals for the Seventh Circuit in the United States, and Lawrence Lessig, then a professor at Stanford Law School and now Harvard Law School, debated what was called the ‘Law and…’ question.
Law schools had expanded their curricula over decades by adding what were called ‘Law and …’ courses (e.g., Law and Society). They combined the study of law with the study of some other field and, in Judge Easterbrook’s words, this led to ‘multidisciplinary dilettantism’.
Judge Easterbrook gave a presentation at a cyberlaw conference at the University of Chicago Law School, re-published as a short article, titled, Cyberspace and the Law of the Horse. The ‘law of the horse’ phrase was first put forth by Karl Llewellyn, a distinguished professor at Columbia University and University of Chicago law schools, and had been picked up by Dean Gerhard Casper at the University of Chicago Law School. When Dean Casper used the phrase he meant, among other things, that ‘the best way to learn the law applicable to specialized endeavors is to study general rules’ rather than rules specific to one thing.
In his presentation, Judge Easterbrook argued that we should not attempt to develop a set of laws focused on the internet and call it the ‘law of cyberspace,’ any more than we should attempt to pull together all laws dealing with horses and teach them as the ‘law of the horse.’ Instead, we should focus on studying and improving general rules of law and then apply those rules to questions arising in cyberspace.
Professor Lessig responded in a long article titled, The Law of the Horse: What Cyberlaw Might Teach. He did not dispute Judge Easterbrook’s basic contention; indeed, he agreed that we did not need a law of the horse. But, he argued, studying cyberspace law would give us new insights into general laws because it presented many general law issues in unique ways. By studying cyberlaw we would better understand and improve general law.
The law of the horse question is alive and kicking. Ryan Calo, a law professor at the University of Washington School of Law and leading scholar in the area of robots and law (not ‘law and robots’) has suggested that the issues raised by robots may require a new body of law. He also has called for the formation of a Federal Robotics Commission in the U.S. Jack Balkin, a law professor at Yale Law School and director of Yale’s Information Society Project responded with the ‘law of the horse’ view: our general laws will be sufficient, if we continue to develop our understanding of them. Meanwhile, the European Union also has called for a body of law on robotics and the formation of a European agency to oversee robots and artificial intelligence.
IoT Brings Us Into the Hybrid Society
IoT requires new ways of thinking about our legal system. Until now, our world has been divided, if not neatly at least understandably, into two camps. In the first camp sit people. We have been in control for a long time and with control came responsibility for our actions and the devices we used.
In the second camp, sit the devices. Devices have not been responsible for the actions of people who control them. We don’t sue cell phones, car brakes, or microwaves even though they use sophisticated computers. These things are not persons.
But with IoT, that distinction is fading. This year, the National Highway Traffic Safety Administration (NHTSA), a regulatory agency in the U.S. Federal Government, responded to a request by Google to issue an interpretation of NHTSA’s Federal Motor Vehicle Safety Standards. Those standards were written for cars operated by people: a ‘driver’ sits in a seat before a steering wheel, gas pedal, and brake pedal. As Google pointed out, autonomous vehicles do not need steering wheels, gas pedals, or brake pedals. The computer (the ‘self-driving system’) steers and operates the gas and brake.
NHTSA saw three ways out. First, interpret the term ‘driver’ as not applying to self-driving vehicles (SDV). Second, interpret driver to mean the self-driving system (SDS). Third, interpret where the driver must be in the SDV to include the SDS’ location. NHTSA chose the second path, and in that moment equated a computer with a person. At least for part of U.S. federal regulatory law, computers are people, joining corporations and animals, which other U.S. laws also treat as people. This is the beginning of what I call the ‘hybrid society’, a world where people and computers have overlapping status under the law. To sum up, this summer in Pittsburgh we have computers, which are for some purposes ‘people’, driving cars where the real people are not in control.
More IoT Devices than People
While this all may sound very American and academic, it is neither. Every country must decide for itself, and then in coordination with other countries and even the United Nations (which also regulates cross-jurisdictional vehicle issues), how the emerging technology of autonomous vehicles will work, whether under specialized or general law. The questions and answers will diffuse through many areas of law in each jurisdiction and across jurisdictions. Not everyone will feel the same way about computers becoming persons.
The story of cars, IoT, and law is one of many stories we will face as IoT grows. Today, there are merely hundreds of millions of devices connecting through the IoT. By 2020, the size of the market will reach 20 billion to 30 billion units (according to McKinsey & Company).
The number of interactions each day among 20 billion IoT devices, which can communicate many times a second, dwarfs the number of communications more than 7.5 billion humans can have during the same period. If the computer of a car is a ‘person’ for certain purposes, which of these other IoT devices, including robots, will be ‘persons’ and for what purposes? As Professor Calo has said: ‘Robotics blurs the very line between people and instrument.’
Where are the Lawyers?
Lawyers have not been key players in discussions about IoT and its implications for society at large. Many may agree with Judge Easterbrook’s comment at the conference that: ‘Beliefs lawyers hold about computers and predictions they make about new technology are highly likely to be false. … The blind are not good trailblazers.’
But this statement, and the idea that lawyers should sit back and let technology blaze ahead and then come in later to untangle the mess, comes from a type of laissez-faire approach to law that may have worked well in the 18th century, but will not suffice in the 21st century.
Societies have become far more complex and dynamic in the past 150 years. The costs and other risks of using court systems to resolve disputes have resulted in more parties using out-of-court mechanisms (before or while lawsuits are pending) to resolve disputes. While this helps relieve the burden on courts, it limits the development of common law, which means more legal issues are left open for longer periods. Government lawyers should not be left to bear the brunt of developing the regulatory framework for fast-emerging technologies without assistance from colleagues in the practicing bar and academia.
Recently, the State of California delegated to lawyers in the Department of Motor Vehicles the job of writing the laws to govern autonomous vehicles. The draft laws came out long after the deadline, were heavily criticized by all interested parties, and were out-of-date when released as technology had superseded them. This is not the approach to law making that will yield the best results.
Lawyers should recognize that technology is more than something to be reckoned with; technology will lead how society evolves. If lawyers fail to step up to the challenge of understanding technology and getting in front of the development of laws, they will leave a vacuum that will be (and already is being) filled by technologists. IoT will be regulated by codes built into devices, which will then evolve in ways that we can’t imagine.
Alternatively, lawyers can assist in designing into devices protections against many of the risks. While it helps if the lawyer has some familiarity with coding, the lawyer is not there to be the technologist. The lawyer brings to the technologist an understanding of the legal system, how law works, the potential pitfalls should the device fail to operate as planned, and ideas about how to limit exposure. Together, they will be able to design systems and data capture mechanisms that safeguard people and the organizations deploying the devices.
Law has been a reactive profession. But in the 21st century, lawyers must become proactive or they will be replaced by those who are. The IoT is an emerging force that will affect all of us and shape our daily lives. It presents a tremendous opportunity for lawyers to step forward, demonstrate they too can be technology aware, and that they can help clients and society by being at the cutting edge of where law and technology meet.