There is a growing debate at the moment as to whether lawyers need to learn to code. The question tends to lead to many other questions, such as: coding with what goal? What computer language would they use? How many lawyers really would benefit from this? How would a client benefit? And that’s just to start with.
Artificial Lawyer’s view is that it will remain a specialist area, i.e. some lawyers will learn to code, but the vast majority won’t, at least for now. However, that doesn’t stop ‘computable contracts’ and a hybrid code/legalese language from being developed, nor from becoming a very important part of the legal ecosystem.
Computable contract language becomes more valuable to the legal sector once we start using ‘smart contracts’ that are self-executing, whether or not they are connected to a blockchain. There is already some interesting work in this area, notably by Singapore-based Legalese.com.
Now a new voice has thrown its hat into the ring: that of pioneering UK scientist Stephen Wolfram, the man behind the Wolfram Alpha computational knowledge engine, launched back in 2009. In prosaic terms, it is a kind of science-based version of Google’s search engine, with the added ability to perform complex calculations on the site.
Wolfram says the ‘long-term goal is to make all systematic knowledge immediately computable and accessible to everyone. We aim to collect and curate all objective data; implement every known model, method, and algorithm; and make it possible to compute whatever can be computed about anything.’
And thus law cannot be excluded from this goal. If law is to be made computable, then the world needs two things: lawyers who can code, and a legal computer language that improves on today’s legalese. (That is, if you go along with this line of logic.)
To spread this new legal tech gospel (or heresy, depending on one’s point of view), Wolfram’s team reached out to Artificial Lawyer and asked if readers here would be interested. I hope you are; it’s certainly a well-argued case by Wolfram.
In this case, the epic blog post by Wolfram has been edited down a bit. If you’d like to read the whole thing, it’s here, entitled ‘Computational Law, Symbolic Discourse and the AI Constitution’.
—
Writing Computational Contracts
One might think that writing everything as code, rather than natural-language legalese, would be a burden. But my guess is that it will actually be a great benefit. And it’s not just because it will let contracts operate more easily. It’s also that it will help lawyers think better about contracts. It’s an old claim (the Sapir–Whorf hypothesis) that the language one uses affects the way one thinks. And this is no doubt somewhat true for natural languages.
But in my experience it’s dramatically true for computer languages. And indeed I’ve been amazed over the years at how my thinking has changed as we’ve added more to the Wolfram Language. When I didn’t have a way to express something, it didn’t enter my thinking. But once I had a way to express it, I could think in terms of it.
And so it will be, I believe, for legal thinking. When there’s a precise symbolic discourse language, it’ll become possible to think more clearly about all sorts of things.
Of course, in practice it’ll help that there’ll no doubt be all sorts of automated annotation: “if you add that clause, it’ll imply X, Y and Z”, etc. It’ll also help that it’ll routinely be possible to take some contract and simulate its consequences for a range of inputs. Sometimes one will want statistical results (“is this biased?”). Sometimes one will want to hunt for particular “bugs” that will only be found by trying lots of inputs.
Yes, one can read a contract in natural language, like one can read a math paper. But if one really wants to know its implications one needs it in computational form, so one can run it and see what it implies—and also so one can give it to a computer to implement.
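To make that concrete, here is a deliberately toy sketch (written in Python purely for illustration; it is not the Wolfram Language, not any real contract system, and not drawn from the original post) of what “running” a contract over a range of inputs might look like:

```python
from dataclasses import dataclass
import random

# Hypothetical example: a single contract clause expressed as code, so its
# consequences can be computed and explored rather than only read.

@dataclass
class Shipment:
    days_late: int      # how late the goods arrived
    defect_rate: float  # fraction of goods found defective

def penalty(shipment: Shipment) -> float:
    """Toy clause: late-delivery fee plus a quality fee, capped at 10,000."""
    late_fee = 250.0 * max(0, shipment.days_late)
    quality_fee = 50_000.0 * shipment.defect_rate if shipment.defect_rate > 0.02 else 0.0
    return min(late_fee + quality_fee, 10_000.0)

# "Run" the contract over a spread of inputs to see what it actually implies.
random.seed(0)
for _ in range(5):
    s = Shipment(days_late=random.randint(0, 60), defect_rate=random.random() * 0.1)
    print(s, "->", penalty(s))
```

Even at this tiny scale, sweeping the inputs can surface behaviour a reader might miss in prose, for instance that the cap makes very different breaches indistinguishable.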
The World with Computational Contracts
Back in ancient Babylon it was a pretty big deal when there started to be written laws like the Code of Hammurabi. Of course, with very few people able to read, there was all sorts of clunkiness at first—like having people recite the laws in order from memory. Over the centuries things got more streamlined, and then about 500 years ago, with the advent of widespread literacy, laws and contracts started to be able to get more complex (which among other things allowed them to be more nuanced, and to cover more situations).
In recent decades the trend has accelerated, particularly now that it’s so easy to copy and edit documents of any length. But things are still limited by the fact that humans are in the loop, authoring and interpreting the documents. Back 50 years ago, pretty much the only way to define a procedure for anything was to write it down, and have humans implement it. But then along came computers, and programming. And very soon it started to be possible to define vastly more complex procedures—to be implemented not by humans, but instead by computers.
And so, I think, it will be with law. Once computational law becomes established, the complexity of what can be done will increase rapidly. Typically a contract defines some model of the world, and specifies what should happen in different situations. Today the logical and algorithmic structure of models defined by contracts still tends to be fairly simple. But with computational contracts it’ll be feasible for them to be much more complex—so that they can for example more faithfully capture how the world works.
Of course, that just makes defining what should happen even more complex—and before long it might feel a bit like constructing an operating system for a computer, that tries to cover all the different situations the computer might find itself in.
In the end, though, one’s going to have to say what one wants. One might be able to get a certain distance by just giving specific examples. But ultimately I think one’s going to have to use a symbolic discourse language that can express a higher level of abstraction.
Sometimes one will be able to just write everything in the symbolic discourse language. But often, I suspect, one will use the symbolic discourse language to define what amount to goals, and then one will have to use machine-learning kinds of methods to fill in how to define a contract that actually achieves them.
And as soon as there’s computational irreducibility involved, it’ll typically be impossible to know for sure that there are no bugs, or “unintended consequences”. Yes, one can do all kinds of automated tests. But in the end it’s theoretically impossible to have any finite procedure that can guarantee to check all possibilities.
Today there are plenty of legal situations that are too complex to handle without expert lawyers. And in a world where computational law is common, it won’t just be convenient to have computers involved, it’ll be necessary.
In a sense it’s similar to what’s already happened in many areas of engineering. Back when humans had to design everything themselves, humans could typically understand the structures that were being built. But once computers are involved in design it becomes inevitable that they’re needed in figuring out how things work too.
Today a fairly complex contract might involve a hundred pages of legalese. But once there’s computational law—and particularly contracts constructed automatically from goals—the lengths are likely to increase rapidly. At some level it won’t matter, though—just as it doesn’t really matter how long the code of a program one’s using is. Because the contract will in effect just be run automatically by computer.
Leibniz saw computation as a simplifying element in the practice of law. And, yes, some things will become simpler and better defined. But a vast ocean of complexity will also open up.
What Does It Mean for AIs?
How should one tell an AI what to do? Well, you have to have some form of communication that both humans and AIs can understand—and that is rich enough to describe what one wants. And as I’ve described elsewhere, what I think this basically means is that one has to have a knowledge-based computer language—which is precisely what the Wolfram Language is—and ultimately one needs a full symbolic discourse language.
But, OK, so one tells an AI to do something, like “go get some cookies from the store”. But what one says inevitably won’t be complete. The AI has to operate within some model of the world, and with some code of conduct. Maybe it can figure out how to steal the cookies, but it’s not supposed to do that; presumably one wants it to follow the law, or a certain code of conduct.
And this is where computational law gets really important: because it gives us a way to provide that code of conduct in a way that AIs can readily make use of.
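To illustrate the idea (a hypothetical sketch in Python, not the Wolfram Language and not anything proposed in the original post), a machine-readable code of conduct might amount to a set of computable rules that an agent checks before committing to an action:

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical example: conduct rules as predicates over candidate actions.

@dataclass
class Action:
    description: str
    takes_property_without_consent: bool
    cost: float

Rule = Callable[[Action], bool]   # returns True if the action is permitted

def no_stealing(action: Action) -> bool:
    return not action.takes_property_without_consent

def within_budget(limit: float) -> Rule:
    return lambda action: action.cost <= limit

CODE_OF_CONDUCT: List[Rule] = [no_stealing, within_budget(20.0)]

def permitted(action: Action) -> bool:
    return all(rule(action) for rule in CODE_OF_CONDUCT)

# Two candidate ways to "get some cookies from the store":
buy = Action("buy cookies", takes_property_without_consent=False, cost=4.0)
steal = Action("steal cookies", takes_property_without_consent=True, cost=0.0)
print(permitted(buy), permitted(steal))   # True False
```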
In principle, we could have AIs ingest the complete corpus of laws and historical cases and so on, and try to learn from these examples. But as AIs become more and more important in our society, it’s going to be necessary to define all sorts of new laws, and many of these are likely to be “born computational”, not least, I suspect, because they’ll be too algorithmically complex to be usefully described in traditional natural language.
There’s another problem too: we really don’t just want AIs to follow the letter of the law (in whatever venue they happen to be), we want them to behave ethically too, whatever that may mean. Even if it’s within the law, we probably don’t want our AIs lying and cheating; we want them somehow to enhance our society along the lines of whatever ethical principles we follow.
Well, one might think, why not just teach AIs ethics like we could teach them laws? In practice, it’s not so simple. Because whereas laws have been somewhat decently codified, the same can’t be said for ethics. Yes, there are philosophical and religious texts that talk about ethics. But it’s a lot vaguer and less extensive than what exists for law.
Still, if our symbolic discourse language is sufficiently complete, it certainly should be able to describe ethics too. And in effect we should be able to set up a system of computational laws that defines a whole code of conduct for AIs.
But what should it say? One might have a few immediate ideas. Perhaps one could combine all the ethical systems of the world. Obviously hopeless. Perhaps one could have the AIs just watch what humans do and learn their system of ethics from it. Similarly hopeless. Perhaps one could try something more local, where the AIs switch their behavior based on geography, cultural context, etc. (think “protocol droid”). Perhaps useful in practice, but hardly a complete solution.
So what can one do? Well, perhaps there are a few principles one might agree on. For example, at least the way we think about things today, most of us don’t want humans to go extinct (of course, maybe in the future, having mortal beings will be thought too disruptive, or whatever). And actually, while most people think there are all sorts of things wrong with our current society and civilization, people usually don’t want it to change too much, and they definitely don’t want change forced upon them.
So what should we tell the AIs? It would be wonderful if we could just give the AIs some simple set of almost axiomatic principles that would make them always do what we want. Maybe they could be based on Asimov’s Three Laws of Robotics. Maybe they could be something seemingly more modern based on some kind of global optimization. But I don’t think it’s going to be that easy.
The world is a complicated place; if nothing else, that’s basically guaranteed by the phenomenon of computational irreducibility. And it’s pretty much inevitable that there’s not going to be any finite procedure that’ll force everything to “come out the way one wants” (whatever that may be).
Let me take a somewhat abstruse, but well-defined, example from mathematics. We think we know what integers are. But to really be able to answer all questions about integers (including about infinite collections of them, etc.) we need to set up axioms that define how integers work. And that’s what Giuseppe Peano tried to do in the late 1800s. For a while it looked good, but then in 1931 Kurt Gödel surprised the world with his Incompleteness Theorem, which implied, among other things, that, try as one might, there was never going to be a finite set of axioms that would define the integers as we expect them to be, and nothing else.
In some sense, Peano’s original axioms actually got quite close to defining just the integers we want. But Gödel showed that they also allow bizarre non-standard integers, where for example the operation of addition isn’t finitely computable.
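For readers who want to see what such an axiomatization looks like, a standard first-order formulation of the Peano axioms (a textbook formulation, quoted here for concreteness rather than taken from the original post) runs roughly as follows:

```latex
\begin{align*}
&\forall x.\ S(x) \neq 0 \\
&\forall x\, y.\ S(x) = S(y) \rightarrow x = y \\
&\forall x.\ x + 0 = x \qquad \forall x\, y.\ x + S(y) = S(x + y) \\
&\forall x.\ x \cdot 0 = 0 \qquad \forall x\, y.\ x \cdot S(y) = x \cdot y + x \\
&\bigl(\varphi(0) \land \forall x.\ (\varphi(x) \rightarrow \varphi(S(x)))\bigr) \rightarrow \forall x.\ \varphi(x)
\quad \text{(induction, one axiom for each formula } \varphi\text{)}
\end{align*}
```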
Well, OK, that’s abstract mathematics. What about the real world? Well, one of the things that we’ve learned since Gödel’s time is that the real world can be thought of in computational terms, pretty much just like the mathematical systems Gödel considered. And in particular, one can expect the same phenomenon of computational irreducibility (which itself is closely related to Gödel’s Theorem). And the result of this is that whatever simple intuitive goal we may define, it’s pretty much inevitable we’ll have to build up what amount to an arbitrarily complicated collection of rules to try to achieve it—and whatever we do, there’ll always be at least some “unintended consequences”.
None of this should really come as much of a surprise. After all, if we look at actual legal systems as they’ve evolved over the past couple of thousand years, there always end up being a lot of laws. It’s not like there’s a single principle from which everything else can be derived; there inevitably end up being lots of different situations that have to be covered.
[The debate about whether lawyers should (…or perhaps must…) learn to code in the future will run and run. It would be great to hear your views on the subject. Artificial Lawyer also welcomes input like this from other academics, tech companies and lawyers. Please see: Contact.]