White House AI Guidance – What the Market Thinks

The White House has set out its view of how the regulation of artificial intelligence (AI) in America should look, publishing draft guidance on the subject last month.

Artificial Lawyer has gathered some feedback on what people thought (see below). But, first, what are these regulatory suggestions?

The newly proposed standards highlight the need for a regulatory approach that ‘fosters innovation, while protecting core American values,’ the White House statement said. 

But, most importantly, it also stresses the need to reduce ‘unnecessary barriers to the development and deployment of AI’, stating that Federal agencies should avoid ‘regulatory or non-regulatory actions that needlessly hamper AI innovation and growth’ – which can be read as an endorsement of self-regulation, or a hands-off approach to government-level regulation.

However, it listed ten principles that it said should be considered when formulating regulatory and non-regulatory approaches to AI. The list includes issues such as public trust and participation, flexibility, scientific integrity, and fairness and non-discrimination.

The 10 White House AI Principles:

1. Public Trust in AI

It is important that the government’s regulatory and non-regulatory approaches to AI promote reliable, robust, and trustworthy AI applications, which will contribute to public trust in AI. The appropriate regulatory or non-regulatory response to privacy and other risks must necessarily depend on the nature of the risk presented and the appropriate mitigations.

2. Public Participation

Public participation, especially in those instances where AI uses information about individuals, will improve agency accountability and regulatory outcomes, as well as increase public trust and confidence. Agencies should provide ample opportunities for the public to provide information and participate in all stages of the rule-making process, to the extent feasible and consistent with legal requirements (including legal constraints on participation in certain situations).

3. Scientific Integrity and Information Quality

The government’s regulatory and non-regulatory approaches to AI applications should leverage scientific and technical information and processes. Agencies should hold information, whether produced by the government or acquired by the government from third parties, that is likely to have a clear and substantial influence on important public policy or private sector decisions (including those made by consumers) to a high standard of quality, transparency, and compliance.

Best practices include transparently articulating the strengths, weaknesses, intended optimizations or outcomes, bias mitigation, and appropriate uses of the AI application’s results. Agencies should also be mindful that, for AI applications to produce predictable, reliable, and optimized outcomes, data used to train the AI system must be of sufficient quality for the intended use.

4. Risk Assessment and Management

Agencies should be transparent about their evaluations of risk and re-evaluate their assumptions and conclusions at appropriate intervals so as to foster accountability. Correspondingly, the magnitude and nature of the consequences should an AI tool fail, or for that matter succeed, can help inform the level and type of regulatory effort that is appropriate to identify and mitigate risks. Specifically, agencies should follow the direction in Executive Order 12866, “Regulatory Planning and Review,” to consider the degree and nature of the risks posed by various activities within their jurisdiction. Such an approach will, where appropriate, avoid hazard-based and unnecessarily precautionary approaches to regulation that could unjustifiably inhibit innovation.

5. Benefits and Costs

When developing regulatory and non-regulatory approaches, agencies will often consider the application and deployment of AI into already-regulated industries. Presumably, such significant investments would not occur unless they offered significant economic potential. As in all technological transitions of this nature, the introduction of AI may also create unique challenges.

6. Flexibility

When developing regulatory and non-regulatory approaches, agencies should pursue performance-based and flexible approaches that can adapt to rapid changes and updates to AI applications. Rigid, design-based regulations that attempt to prescribe the technical specifications of AI applications will in most cases be impractical and ineffective, given the anticipated pace with which AI will evolve and the resulting need for agencies to react to new information and evidence.

7. Fairness and Non-Discrimination

Agencies should consider in a transparent manner the impacts that AI applications may have on discrimination. AI applications have the potential of reducing present-day discrimination caused by human subjectivity. At the same time, applications can, in some instances, introduce real-world bias that produces discriminatory outcomes or decisions that undermine public trust and confidence in AI.

When considering regulations or non-regulatory approaches related to AI applications, agencies should consider, in accordance with law, issues of fairness and non-discrimination with respect to outcomes and decisions produced by the AI application at issue, as well as whether the AI application at issue may reduce levels of unlawful, unfair, or otherwise unintended discrimination as compared to existing processes.

8. Disclosure and Transparency

Agencies should be aware that some applications of AI could increase human autonomy. Agencies should carefully consider the sufficiency of existing or evolving legal, policy, and regulatory environments before contemplating additional measures for disclosure and transparency. What constitutes appropriate disclosure and transparency is context-specific, depending on assessments of potential harms, the magnitude of those harms, the technical state of the art, and the potential benefits of the AI application.

9. Safety and Security

Agencies should promote the development of AI systems that are safe, secure, and operate as intended, and encourage the consideration of safety and security issues throughout the AI design, development, deployment, and operation process. Agencies should pay particular attention to the controls in place to ensure the confidentiality, integrity, and availability of the information processed, stored, and transmitted by AI systems.

Agencies should give additional consideration to methods for guaranteeing systemic resilience, and for preventing bad actors from exploiting AI system weaknesses, including cybersecurity risks posed by AI operation, and adversarial use of AI against a regulated entity’s AI technology. When evaluating or introducing AI policies, agencies should be mindful of any potential safety and security risks, as well as the risk of possible malicious deployment and use of AI applications.

10. Interagency Coordination

A coherent and whole-of-government approach to AI oversight requires interagency coordination. Agencies should coordinate with each other to share experiences and to ensure consistency and predictability of AI-related policies that advance American innovation and growth in AI, while appropriately protecting privacy, civil liberties, and American values and allowing for sector- and application-specific approaches when appropriate.

Artificial Lawyer gathered feedback from various individuals about the proposals:

Danny Tobey, Partner at DLA Piper: ‘They did a good job of looking at ways to balance risks and benefits in adopting AI. It’s a thoughtful approach that looks to reducing barriers to developing this technology and really focusing on whether or not to regulate where the risks really are. I think it’s going to be welcome in any highly regulated space where there [are] professional duties or regulatory oversight. I think industry needs clarity and rules of the road to adopt these tools while understanding what [the] risks are. I think it’s a good first step in the right direction for providing the rules of the road for adopting AI.’

Albert Fox Cahn, a lawyer who leads the Surveillance Technology Oversight Project (S.T.O.P.):
‘I think it’s deeply disappointing that the White House is largely giving industry a green light to police itself when it comes to some of the most impactful questions of regulation for the next 50 years. While these principles are not legally binding, they do reinforce a message to federal agencies that they should look at voluntary standards … and other non-compulsory measures as an alternative to regulation in the AI space. To me, this is a moment where we are seeing everyday news stories that document the dangers that AI can pose to the public and the ways that it can automate the worst forms of bias and bigotry.’

Andy Klein, CEO and Founder, Reynen Court: ‘It seems to me like it’s all very sensible. Exactly what it means, time will tell. The problem is that the government wants to innovate the business process, but there are public issues people want addressed, such as issues around AI’s bias. It did not seem like the publication of the regulation was terribly ground-shaking, from what I read in the little bit of conversation. But when things like this come from government with all this generality, there doesn’t seem to be black and white on the matter. I believe the guidance sets up a debate that will likely last for decades and decades.’

Aaron Rieke, Managing Director, Upturn: ‘If you read the document as a whole, especially the introduction and the [parts on] encouraging innovation and growth of AI, the story that the Trump administration tells here is: AI is this great thing and regulators should not needlessly get in the way. I think as kind of a framing document I am a little bit disappointed in that. However, this document doesn’t have a whole lot of binding effect, except to require some federal regulators to think through some of these principles when they are making regulations. So I don’t think it has a lot of teeth, but there is some kind of policy posturing in the way that it presents itself.’

Dr Nicol Turner-Lee, Fellow at the Brookings Institution, said: ‘I think that the new White House framework is at best an attempt to create a conversation around what makes for responsible AI development and use. I think the plan as a whole has good ingredients that should encourage substantive conversations. However, I do think that the details will matter going forward, especially as tech is innovating faster than the regulation. But I am not certain how the White House plan will work in an ever-changing digital environment. As we are talking right now, new technologies are being created. Will regulators be able to keep up and catch up? Overall, the framework is needed to ensure that we work toward the ethical deployment of AI.’

1 Comment

  1. This is basically a nothing document. It does not detail what can be done with any information gathered from the public, nor what data will be used, and how, in potentially unwelcome ways. It also covers its tracks with the phrase ‘What constitutes appropriate disclosure and transparency is context-specific’, which can then be used and abused: a system created to detect fake news could be biased towards one party, but that bias would never be disclosed because it is ‘not in the public interest’.
    The issue the government has is that the other powers, Russia and China, are already way ahead with the data they have and store, and they don’t have any rules governing its use or storage. So the government cannot set hard rules, as this would vastly limit the advances it can make. Instead it produces a non-document, so that ordinary people think some kind of regulation is in place – when it is not.
    Until some self-governing system is in place that lets end users protect the data they own, no amount of regulation will be effective.