The 7 areas where U.S. AI policy falls short

AI will change the way we work, the way we interact with each other, and the way our society operates at a fundamental level. And depending on how, or whether, governments regulate AI, we will feel the impact in our day-to-day lives.

In my role as Technical Director at Google Cloud, I have conversations with companies and governments around the world about AI and how to use it to achieve high-impact results. These are just my personal opinions, but I’d like to share some insights on AI regulation and how to make it useful for everybody.

The White House recently released guidelines for the use and regulation of AI by government agencies. The document is available here, and I have included a few excerpts below. In general, this is a good initial step toward incentivizing and structuring the regulation of AI, but there are several areas where it falls short.


1. It has a limited scope

Although Federal agencies currently use AI in many ways to perform their missions, government use of AI is outside the scope of this Memorandum

This is the first problem. Even though the introduction to the memorandum stresses how important it is to “[protect] American technology, economic and national security, privacy, civil liberties, and other American values”, this note completely ignores the fact that privacy and civil liberties are often threatened by the government’s own use of technology. A more comprehensive policy would have included guidance on how agencies and governments at different levels should think about their own use of AI.

2. It lacks positive encouragement

To that end, Federal agencies must avoid regulatory or non-regulatory actions that needlessly hamper AI innovation and growth

This is also a good starting point. When discussing AI applications we should be mindful of the potential risks and benefits, just as with any other technology. From that point of view, this policy does a good job of indicating that agencies should not overreach, but it fails to mention what kind of positive encouragement could be used to foster AI innovation. In short, it covers what agencies shouldn’t do, but not what they should do. For instance, it could emphasize developing regulations that promote technologies that lead to competitive advantages.

3. It doesn’t incorporate expert opinion

Agencies should provide ample opportunities for the public to provide information and participate in all stages of the rulemaking process

This is sensible, and will improve trust in AI, but it fails to take into account that AI is a relatively advanced technology with few experts relative to the overall population. We certainly don’t expect the average citizen to weigh in on federal aviation rules or nuclear policy; it’s naive to assume they can provide a well-researched opinion on AI policy.

4. It ignores transformative change

Agencies should, when consistent with law, carefully consider the full societal costs, benefits, and distributional effects before considering regulations related to the development and deployment of AI applications. Such consideration will include the potential benefits and costs of employing AI, when compared to the systems AI has been designed to complement or replace

In general, this is very solid advice. It’s reasonable to expect AI systems that replace other systems to behave very differently, with a different risk profile and different potential benefits. For example, self-driving car technology may help save tens of thousands of lives, but it may also cause accidents that simply do not happen today. Still, the guidance doesn’t go far enough: in some cases, merely limiting new regulation may not be enough. To foster truly innovative and beneficial change, it may be necessary to alter or eliminate existing rules to open the door for these new technologies.

5. It doesn’t put the right value on transparency

What constitutes appropriate disclosure and transparency is context-specific, depending on assessments of potential harms, the magnitude of those harms, the technical state of the art, and the potential benefits of the AI application

As AI systems become more ingrained in our lives, it’s going to be critical that we develop the technology and tools to make them easy to understand. We wouldn’t take advice from a doctor who can’t explain why they recommend a certain drug, and we wouldn’t accept a bank declining a loan based purely on the intuition of the loan officer. AI systems are no different. AI explainability is critical, and it should be an area of focus for regulation.
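To make the loan example concrete, here is a minimal sketch of what an “explanation” can look like. Everything in it is hypothetical (the data, the feature names, the model choice); it simply shows that a simple linear model can break its decision into per-feature contributions, which is exactly the kind of transparency that many black-box AI systems lack and that regulation could require:

```python
# Minimal, hypothetical sketch of an explainable loan decision.
# The data, feature names, and model are purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["income", "debt_ratio", "years_employed"]
X = np.array([
    [65.0, 0.20, 8],   # approved
    [30.0, 0.55, 1],   # declined
    [80.0, 0.10, 12],  # approved
    [25.0, 0.60, 2],   # declined
])
y = np.array([1, 0, 1, 0])  # 1 = loan approved

model = LogisticRegression().fit(X, y)

applicant = np.array([[40.0, 0.45, 3]])
decision = model.predict(applicant)[0]

# For a linear model, coefficient * feature value gives a
# human-readable breakdown of what drove the decision.
contributions = model.coef_[0] * applicant[0]
print("Decision:", "approved" if decision == 1 else "declined")
for name, contrib in zip(features, contributions):
    print(f"  {name}: {contrib:+.2f}")
```

The point is not that every AI system should be linear, but that a customer (or a regulator) should be able to ask "which factors drove this decision, and by how much?" and get an answer.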

6. It lacks a collaboration framework with industry

Agencies should provide clarity where a lack of regulatory clarity may impede innovation. This may also include work done in collaboration with industry, such as development of playbooks and voluntary incentive frameworks

The memorandum suggests a few potential opportunities to collaborate with industry or research communities on AI, but these seem very limited. A solid collaboration framework would cover topics such as grants and other financial incentives, non-financial support, access to experts, and even the creation of public-private partnerships or steering committees.

7. It does not include procurement

The US government, across its many agencies, is one of the largest consumers of information technology in the world, and because of that it has a tremendous impact on the industry. Encouraging agencies to buy technology that incorporates advanced AI, or that will lead to the development of AI, would be very significant. Unfortunately, the policy doesn’t include any reference to US technology purchases and how to use them to foster innovation.

What do you think? Is this policy enough? Or is it missing important elements to foster AI innovation in the USA?

Written on February 13, 2020