Google weighs in with its own AI ethics ideas

Seemingly not happy to let the bureaucrats and legislators dictate the ethical landscape of artificial intelligence, Google has launched its own initiative to join the debate.

In the last couple of months, we have witnessed a flurry of new councils, committees and think tanks, all of which have been charged with defining reasonable and ethical behaviour surrounding the development of AI. In February, President Trump launched the American AI Initiative, while the UK unveiled the Centre for Data Ethics and Innovation last week.

There are now more than 20 countries around the world running such initiatives, and it was only ever going to be a matter of time before the private sector entered the fray.

“Last June we announced Google’s AI Principles, an ethical charter to guide the responsible development and use of AI in our research and products,” Kent Walker, SVP of Global Affairs, wrote in a blog entry announcing the group.

“To complement the internal governance structure and processes that help us implement the principles, we’ve established an Advanced Technology External Advisory Council (ATEAC). This group will consider some of Google’s most complex challenges that arise under our AI Principles, like facial recognition and fairness in machine learning, providing diverse perspectives to inform our work.”

The group does not actually contain any Googlers but is instead made up of eight experts from academia, private industry and politics. That said, you can bet the majority (if not all) are Google friendlies who won’t say anything that casts the firm in a bad light, or set in motion anything that could negatively impact the business in the long run.

This is perhaps why more of these councils and advisory groups will emerge over the coming months. Private industry will want to put forward its own ideas on how AI should develop, steering policy in a direction that benefits it, or attempting damage-limitation plays at the very least.

Before Google claims too many plaudits for its vision and philanthropic nature, be aware this is just another form of lobbying. By being proactive and demonstrating to governments around the world that it is seeking external advice on how to act responsibly in the AI-orientated era, Google makes rule makers less inclined to swing the heavy hammer of regulation. There will of course be new rules to govern this new dynamic, but the friendlier private industry presents itself, the more light-touch the regulation will be.

The world of artificial intelligence does of course promise a lot, but it will come with trade-offs. As with every industrial revolution, certain jobs become redundant as technology swallows up livelihoods. The ‘intelligent’ era will be exactly the same, though the pace of change creates more of a threat as society has less time to adapt to the new status quo.

Think of passport controls in airports. With the introduction of biometric identification, fewer border officers are needed. The same can be said of self-checkout machines in supermarkets, while some of the technology giants are already trialling stores with effectively no human staff. Autonomous vehicles threaten numerous careers, while a recent report from the Office for National Statistics estimates 1.5 million people in England are at high risk of losing their jobs to an algorithm and/or robot. Waiters are among the professions under threat.

AI is becoming highly politicised, and quite rightly so. The impact on society is going to be incredibly wide-ranging and profound. Some of it will be good, some great and some devastating to families. This is unavoidable, but it can be managed, hence the creation of groups to oversee the ethical side of the technology.

However, private industry will always feel these groups must themselves be managed, such is the pressure to ensure regulatory reform does not cut too deeply into business prospects.