Declare Your Legal Bot! New California Law Demands Bot Transparency

California has just passed a new Bill into law demanding that anyone presenting a bot – which would include legal bots – to the public must make it very clear that it is just software, and not actually a person giving advice or information.

Although the law (see the full text below – it comes into force on 1 July 2019) has been designed primarily to prevent malicious bot use, for example swaying elections via abuse of social media platforms, its wording appears to also capture legal bots and much of the legal expert system industry.

A bot covered by the law is defined as ‘…an automated online account where all or substantially all of the actions or posts of that account are not the result of a person’.

And that would seem to mean all legal bots, plus more complex expert systems with Q&A capabilities that have pre-programmed behaviours/responses to client interactions. As mentioned, the law is mainly aimed at bots active on social media, but would a ‘static’ bot, which people could find on the web and which gave automated information, also come under this definition? Presumably so.

Now, legal design experts and access to justice/digital divide campaigners will tell you that a lot of people have challenges understanding legalese and legal forms, and that many people also have problems dealing with digital platforms. Add in bots that ‘pretend to be human’ and it looks like a tricky situation, especially for the consumer-facing legal bots that deal with a wide range of people.

Luckily, the law gives a clear way out: good design, i.e. the bot UI has to make it 100% clear that it is just a bunch of computer code and not in any way actually a real person, especially if it is called ‘Parker’ or ‘Billy’ – although, of course, making it really obvious that this is just code does spoil the fun a little…

[Image: the UK’s Billy Bot – an example of a very humanised legal bot. Would this need a redesign in California? Luckily it won’t, as it’s focused on England & Wales, but others could come under the new law’s remit.]

The law then says: ‘The disclosure [of a bot] required by this section shall be clear, conspicuous, and reasonably designed to inform persons with whom the bot communicates or interacts that it is a bot.’

This should be quite doable, although interface designers will have to be careful that everyone really does understand what’s going on. That could add some cost to bot design, as well as a need for lawyers to draft a special bot disclaimer stating that the system is not pretending to be human or seeking to mislead anyone.

For example, you’d think that a bot called ‘Billy Bot’ is obviously just computer code – its name gives the game away – yet what if someone using it didn’t have English as a first language, or had a learning impairment? One possible approach is sketched below.
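As a thought experiment, here is one way a consumer-facing bot might handle this: a plain-language disclosure as the very first message, with no substantive answers until the user acknowledges it. This is a minimal sketch in TypeScript, not anything taken from the Bill or an existing product – the class, the wording of the notice, and the acknowledgement step are all hypothetical illustrations of the ‘clear, conspicuous, and reasonably designed to inform’ standard.

```typescript
// Hypothetical sketch: a chat bot that always leads with a plain-language
// bot disclosure before any substantive exchange takes place.

interface ChatMessage {
  sender: "bot" | "user";
  text: string;
}

class DisclosingLegalBot {
  private acknowledged = false;
  readonly transcript: ChatMessage[] = [];

  // The disclosure is the first message of every conversation, stated in
  // plain language rather than left to the bot's name to give away.
  start(): ChatMessage {
    return this.say(
      "Hi, I'm Billy. I'm an automated program (a bot), not a human " +
        "lawyer, and nothing here is legal advice. Reply OK to continue."
    );
  }

  // No substantive answer is given until the user acknowledges the notice:
  // one possible reading of 'reasonably designed to inform'.
  reply(userText: string): ChatMessage {
    this.transcript.push({ sender: "user", text: userText });
    if (!this.acknowledged) {
      if (/^\s*ok\b/i.test(userText)) {
        this.acknowledged = true;
        return this.say("Thanks. What can I help you with today?");
      }
      return this.say(
        "Just to be clear: I'm a bot, not a person. Reply OK to continue."
      );
    }
    return this.say("(normal pre-programmed Q&A logic would run here)");
  }

  private say(text: string): ChatMessage {
    const msg: ChatMessage = { sender: "bot", text };
    this.transcript.push(msg);
    return msg;
  }
}

// Example exchange:
const bot = new DisclosingLegalBot();
console.log(bot.start().text);     // the disclosure always comes first
console.log(bot.reply("OK").text); // answers begin only after acknowledgement
```

Whether a gated acknowledgement like this is strictly necessary is debatable – the statute only requires that the disclosure be clear and conspicuous – but it is one way a designer could show the user genuinely saw it.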

Overall, this is not a major barrier to the growth of legal bots, but it is an interesting step in terms of regulation, as there is precious little regulation that anyone can point to for things like automated information systems.

Perhaps this is the shape of things to come, i.e. it’s all about transparency. We have seen the same idea mooted for NLP/ML doc review and algorithmic decision-making, in as much as some legal experts have suggested that these approaches are acceptable, but need to be totally transparent about how they work and what the software is doing.

The Bill is below. What do you think? A sensible precaution, or an unwanted barrier to the growth of legal bots and consumer/A2J-tech that can help people?

Senate Bill No. 1001
CHAPTER 892

An act to add Chapter 6 (commencing with Section 17940) to Part 3 of Division 7 of the Business and Professions Code, relating to bots.

[ Approved by Governor   September 28, 2018. Filed with Secretary of State   September 28, 2018. ]

LEGISLATIVE COUNSEL’S DIGEST

SB 1001, Hertzberg. Bots: disclosure.

Existing law regulates various businesses to, among other things, preserve and regulate competition, prohibit unfair trade practices, and regulate advertising.

This bill would, with certain exceptions, make it unlawful for any person to use a bot to communicate or interact with another person in California online with the intent to mislead the other person about its artificial identity for the purpose of knowingly deceiving the person about the content of the communication in order to incentivize a purchase or sale of goods or services in a commercial transaction or to influence a vote in an election. The bill would define various terms for these purposes. The bill would make these provisions operative on July 1, 2019.

DIGEST KEY

Vote: majority   Appropriation: no   Fiscal Committee: yes   Local Program: no

BILL TEXT

THE PEOPLE OF THE STATE OF CALIFORNIA DO ENACT AS FOLLOWS:

SECTION 1.

Chapter 6 (commencing with Section 17940) is added to Part 3 of Division 7 of the Business and Professions Code, to read:

CHAPTER  6. Bots

17940.

For purposes of this chapter:

(a) “Bot” means an automated online account where all or substantially all of the actions or posts of that account are not the result of a person.

(b) “Online” means appearing on any public-facing Internet Web site, Web application, or digital application, including a social network or publication.

(c) “Online platform” means any public-facing Internet Web site, Web application, or digital application, including a social network or publication, that has 10,000,000 or more unique monthly United States visitors or users for a majority of months during the preceding 12 months.

(d) “Person” means a natural person, corporation, limited liability company, partnership, joint venture, association, estate, trust, government, governmental subdivision or agency, or other legal entity or any combination thereof.

17941.

(a) It shall be unlawful for any person to use a bot to communicate or interact with another person in California online, with the intent to mislead the other person about its artificial identity for the purpose of knowingly deceiving the person about the content of the communication in order to incentivize a purchase or sale of goods or services in a commercial transaction or to influence a vote in an election. A person using a bot shall not be liable under this section if the person discloses that it is a bot.

(b) The disclosure required by this section shall be clear, conspicuous, and reasonably designed to inform persons with whom the bot communicates or interacts that it is a bot.

17942.

(a) The duties and obligations imposed by this chapter are cumulative with any other duties or obligations imposed under any other law.

(b) The provisions of this chapter are severable. If any provision of this chapter or its application is held invalid, that invalidity shall not affect other provisions or applications that can be given effect without the invalid provision or application.

(c) This chapter does not impose a duty on service providers of online platforms, including, but not limited to, Web hosting and Internet service providers.

17943.

This chapter shall become operative on July 1, 2019.

1 Comment

  1. Specifically regulating bots in this way is an interesting move from the California legislature. It’s not unlike suggestions in Europe to impose legal limits on how people can be led to believe that they are dealing with a human when they are in fact dealing with a machine. That said, this new law does appear more limited in scope, referring only to deceptive practices influencing commercial transactions and voting in elections, rather than resting on wider principles of transparency for its own sake.

    While we don’t have legislation specifically addressing this yet in Europe, more general rules about collecting and processing data do apply where the identity or other information about the human participant is disclosed or obtained. This includes a requirement to be transparent about the nature of the processing in advance. The human should not be surprised later about the way their data has been processed, so there may be a need to disclose up front that it’s a bot on the other side of the conversation.

    The question that isn’t addressed by the new law is when, in the course of the interaction, the disclosure should be made. I think this will be crucial for all those seeking to comply. Telling users at the end might not be compliant, but telling users at the beginning might influence the conversation (or lead to abandoned conversations), so this is something that businesses will really care about. My view is that the right approach will be very context and audience specific. Tech-savvy people might be happy dealing with a bot, and may even prefer it.
