White House outlines 10 principles for AI development

Despite the Trump administration's insistence that it would not meddle in artificial intelligence, the White House has outlined 10 commandments for agencies creating rules and regulations.

President Trump had previously promised the White House would not implement a national AI strategy dictating how the technology is deployed. That position was contradicted by the creation of the American AI Initiative, announced in August, and now the U-turn is complete with the emergence of this draft document.

The presence of such guidelines is not necessarily a bad thing for industry or the US Government; it depends on the attitudes of the agencies. Some technophobes could use the principles to erect barriers to entry so high that the exercise becomes redundant, while on the other side of the coin, the guidelines could accelerate the introduction of AI in public services.

“The deployment of AI holds the promise to improve safety, fairness, welfare, transparency, and other social goals, and America’s maintenance of its status as a global leader in AI development is vital to preserving our economic and national security,” the document states.

“The importance of developing and deploying AI requires a regulatory approach that fosters innovation, growth, and engenders trust, while protecting core American values, through both regulatory and nonregulatory actions and reducing unnecessary barriers to the development and deployment of AI.”

The objective for the White House Office of Science and Technology Policy (OSTP) is to ensure engagement with, and education of, the general public, prevent overreach or overregulation, and promote AI which is safe and of benefit to all.

Ultimately, the White House is attempting to guide the agencies towards creating a framework that establishes some element of control. As with many such memorandums, the wording is precise enough to keep the various agencies in line, but leaves enough wiggle-room for the nuances of different industries.

The ten principles are as follows:

  1. Public trust in AI
  2. Public participation
  3. Scientific integrity and information quality
  4. Risk assessment and management
  5. Benefits and costs
  6. Flexibility
  7. Fairness and non-discrimination
  8. Disclosure and transparency
  9. Safety and security
  10. Interagency coordination

Although these principles are perfectly sensible for the pursuit of AI which benefits business and society, this is another example of the world becoming increasingly regionalised.

At CES, LG discussed the standardisation framework which it has in place for the development of AI within its own ecosystem, while numerous other players have either launched their own approaches or backed another. Governments and bureaucracies are fuelling their own programmes as another layer, paving the way for fragmentation.

Although this sounds negative, it is encouraging to see governments engage industry during the early years of development. It does appear lessons have been learned.

Traditionally, governments and regulators stay at arm's length from an embryonic technology. The industry is often given the freedom of self-regulation to accelerate development, though this frequently results in government intervention down the line to limit the negative impact of industry's flamboyance. You only have to look at recent privacy scandals for evidence of what can happen when the government gives too much slack on the leash.

An increase in bureaucracy might well slow the introduction of AI in the public sector slightly, but it is also much more likely to create a segment which is sustainable, beneficial, healthy and transparent.