Microsoft and Sony join up on AI and cloud gaming

Microsoft and Sony have signed a memorandum of understanding to jointly develop cloud systems for game and content streaming, and to integrate Microsoft’s AI with Sony’s image sensors.

This is another step on Sony’s journey to transform from a console and title seller into a game streaming service platform. Microsoft’s leadership in cloud computing, through its Azure platform, and the global footprint of its datacentres make it an ideal partner for Sony.

The collaboration will also cover semiconductors and AI. Sony has been a leader in image sensors (its clients include Apple, whose iPhones, including the latest XS Max model, use them), and the integration of Microsoft Azure AI will help improve image processing both in the cloud and on device, in what the companies called “a hybrid manner”. Microsoft’s AI will also be incorporated into Sony’s other consumer products to “provide highly intuitive and user-friendly AI experiences”, the companies said.

“Sony has always been a leader in both entertainment and technology, and the collaboration we announced today builds on this history of innovation,” said Satya Nadella, CEO of Microsoft, in a statement. “Our partnership brings the power of Azure and Azure AI to Sony to deliver new gaming and entertainment experiences for customers.”

Kenichiro Yoshida, president and CEO of Sony, agreed. “I hope that in the areas of semiconductors and AI, leveraging each company’s cutting-edge technology in a mutually complementary way will lead to the creation of new value for society,” he said.

Looking to the future of the PlayStation platform, Yoshida said, “Our mission is to seamlessly evolve this platform as one that continues to deliver the best and most immersive entertainment experiences, together with a cloud environment that ensures the best possible experience, anytime, anywhere.”

Gaming is following the trend set by video and music, moving from one-off ownership to streaming access. But gamers are more sensitive to visual quality and, above everything else, lag. To provide an experience good enough to convert gamers into long-term streaming subscribers, the platform needs to guarantee a superb connection. This is where Microsoft’s datacentre footprint and the upcoming 5G networks will fit well with the “game” plan.

Another key success factor, as in the video streaming market, is content. Gamers’ tastes can be fast-changing and fickle. That is why the companies also stressed the need to “collaborate closely with a multitude of content creators that capture the imagination of people around the world, and through our cutting-edge technology, we provide the tools to bring their dreams and vision to reality.”

No information on the size of the investment or the number of staff involved in the collaboration has been disclosed, but the companies promised to “share additional information when available”.

LG muscles in on competitive AI chip space

LG has unveiled its own artificial intelligence chip, developed in-house, in an attempt to muscle in on this increasingly competitive segment of the semiconductor market.

The AI market is proving to be rewarding for those who can prove their worth, and each day there seems to be a new ‘thought leader’ entering the fray. While there is a feeling AI could benefit application developers (Uber, Cruise, Waymo etc.) and internet companies (Amazon, Google, Microsoft etc.) more than the semiconductor giants, there will be winners and losers in this segment also.

“Our AI Chip is designed to provide optimized artificial intelligence solutions for future LG products,” said IP Park, CTO of LG Electronics. “This will further enhance the three key pillars of our artificial intelligence strategy – evolve, connect and open – and provide customers with an improved experience for a better life.”

Nvidia might have made a run at this segment in the early days, though considering its experience lies in gaming applications, whether it can mount a serious challenge remains to be seen. Graphcore is one which has attracted investment from the likes of Dell, Microsoft and Samsung, while AMD, Intel, Huawei, Google and Qualcomm (as well as numerous others) are making this a very competitive space.

As with Intel in the PC era and Qualcomm’s continued dominance in mobile, some suspect a clear leader might emerge in AI too.

LG has stated its chip will feature its proprietary LG Neural Engine to better mimic the neural network of the human brain. The aim is to distinguish space, location, objects and users, while improving the capabilities of the device by detecting physical and chemical changes in the environment. As with every AI plug, LG is also promoting the benefits of on-device processing.

Looking at LG’s approach, the team is targeting quite a niche aspect of the AI segment: the smart home. This makes sense, as while LG has a smartphone business, the brand is perhaps primarily known for its home appliance range.

During the last earnings call, LG’s mobile business continued to struggle in a sluggish and cut-throat market, reporting a 29% year-on-year revenue drop to $1.34 billion, while the home appliance business soared. Its revenues and profits hit record levels, accounting for more than 80% of the company’s total profits over the three months.

Future products, such as washing machines, refrigerators and air conditioners, will be fitted with the chips, as ‘intelligence’ and personalisation become more common themes in everyday products.

Maybe the smart toilet isn’t that far away after all.

UK Gov names members of AI Council

The UK government has named the full line-up of its AI advisory board, and it will be hoping the body proves a bit more successful than Google’s.

Bringing together experts from industry, academia and data rights organisations, the ambition is to provide a guiding light for the future of artificial intelligence. Tabitha Goldstaub, co-founder of CognitionX, will chair the council, which will feature the likes of Ocado CTO Paul Clarke, AI for Good founder Kriti Sharma and Deepmind co-founder Mustafa Suleyman.

The primary objective of the council will be to make the UK a leading name in the AI world.

Such is the promise of the technology in terms of productivity and the creation of new services that technologists will be keen to drive innovation forward, though the dangers are also high.

AI not only presents the risk of abuse through prejudice and unconscious bias; the unknown risks should be considered just as much of a danger. Such is the embryonic nature of AI that its full potential, power and influence are anyone’s guess for the moment. This is an exciting prospect, but one that should also be approached with caution.

For example, back in July 2017, a Facebook AI application managed to invent its own language to speak to other applications, meaning human overseers had no idea what was going on. This was a very simplistic and limited application so there was no real danger, but it was a lesson to the industry; more clearly defined boundaries need to be set for more complex applications in the real world.

This council will aim to create a framework to take the UK into a leadership position in the AI world, but it will be critical that the members do not lose sight of ethical and responsible development.

“Britain is already a leading authority in AI,” said Secretary of State for Digital, Culture, Media and Sport, Jeremy Wright. “We are home to some of the world’s finest academic institutions, landing record levels of investment to the sector and attracting the best global tech talent, but we must not be complacent.

“Through our AI Council we will continue this momentum by leveraging the knowledge of experts from a range of sectors to provide leadership on the best use and adoption of artificial intelligence across the economy.”

The full list of members:

  • Tabitha Goldstaub, Chair and Co-founder of CognitionX
  • Wendy Hall, Professor of Computer Science at the University of Southampton
  • Professor Adrian Smith, Institute Director and Chief Executive at the Alan Turing Institute
  • Alice Bentinck, Co-founder at Entrepreneur First
  • Alice Webb, Director for Children’s and Education at the BBC
  • Ann Cairns, Executive Vice Chair of Mastercard
  • Professor Chris Bishop, Microsoft Technical Fellow and Director of the Microsoft Research Lab in Cambridge
  • Dr Claire Craig, Chief Science Policy Officer at the Royal Society
  • Professor David Lane, Professor & Founding Director of the Edinburgh Centre for Robotics
  • Kriti Sharma, Founder of AI for Good
  • Marc Warner, CEO of Faculty
  • Professor Maire O’Neill, Professor at Queen’s University Belfast
  • Sir Mark Walport, Chief Executive of UKRI
  • Martin Tisne, Managing Director of Luminate
  • Mustafa Suleyman, Co-founder of Deepmind
  • Professor Neil Lawrence, Professor at the University of Sheffield and Director, IPC Machine Learning at Amazon
  • Professor Nick Jennings, Vice-Provost Research and Enterprise of Imperial College
  • Dame Patricia Hodgson, Member of the Independent Commission on Freedom of Information and Centre for Data Ethics and Innovation
  • Paul Clarke, CTO of Ocado
  • Professor Pete Burnap, Professor of Data Science & Cybersecurity at Cardiff University
  • Priya Lakhani, Founder of edtech AI platform Century Tech
  • Rachel Dunscombe, CEO of NHS Digital Academy

San Francisco puts the brakes on facial recognition surveillance

The City of San Francisco has passed new rules which will significantly curb the abilities of public sector organisations to purchase and utilise facial recognition technologies.

Opinions on newly emerging surveillance technologies have varied drastically, with some pointing to the safety and efficiency benefits for intelligence and police forces, while others have bemoaned the crippling effect the technology could have on civil liberties and privacy.

The new rules in San Francisco do not necessarily ban surveillance technologies entirely, but the barriers to demonstrating justification have been significantly raised.

“The success of San Francisco’s #FacialRecognition ban is owed to a vast grassroots coalition that has advocated for similar policies around the Bay Area for years,” said San Francisco Supervisor Aaron Peskin.

The legislation will come into effect in 30 days’ time. From that point, no city department or contracting officer will be able to purchase equipment unless the Board of Supervisors has appropriated funds for such acquisition. New processes will also be introduced, including a surveillance technology policy for the department that meets the demands of the Board, as well as a surveillance impact report.

The department would also have to produce an in-depth annual report which would detail:

  • How the technology was used
  • Details of each instance data was shared outside the department
  • Crime statistics

The impact report will have to include a huge range of information, including forward plans on logistics, experiences from other government departments, justification for the expenditure and the potential impact on privacy. The department may also have to consult public opinion, and it will have to create concrete policies on data retention, storage, reporting and analysis.

City officials are making it as difficult as possible to make use of such technologies and, considering the impact and potential for abuse, quite rightly so. As mentioned before, this is not a ban on next-generation surveillance technologies, but an attempt to ensure deployment is absolutely necessary.

The concerns surround privacy and potential violations of civil liberties, which were largely outlined in the wide-sweeping privacy reforms put forward by California Governor Jerry Brown last year. The rules are intended to spur an ‘informed public debate’ on the potential impacts on the rights guaranteed by the First, Fourth and Fourteenth Amendments of the US Constitution.

Aside from the potential for abuse, city officials and privacy advocates appear concerned the technology could entrench prejudices based on race, ethnicity, religion, national origin, income level, sexual orientation or political perspective. Many analytical technologies work on the most likely scenario, leaning on stereotypes and potentially encouraging profiling, effectively removing the impartiality of judging each case on its individual merits.

While the intelligence and policing community will most likely view such conditions as a bureaucratic mess, they should absolutely be viewed as necessary. We’ve already seen such technologies implemented without public debate and scrutiny, a drastic step considering the potential consequences.

Although the technology is not necessarily new (think of border control at airports), perhaps the rollout in China has swayed opinion. When an authoritarian state like China, whose political and societal values conflict with those of the US, implements such technologies, some will begin to ask what the nefarious consequences of deployment might be.

In February, a database emerged demonstrating that China had used a full suite of AI tools to monitor its Uyghur population in the far west of the country. This could have been a catalyst for the rules.

That said, the technology is also far from perfect. Police forces across the UK have been trialling facial recognition and data analytics technologies with varied results. At least 53 UK local councils and 45 of the country’s police forces are relying heavily on computer algorithms to assess the risk of crimes against children as well as benefit fraud.

In May last year, South Wales Police had to defend its decision to trial NEC facial recognition software during the 2017 Champions League Final after it was revealed that only 8% of the identifications proved to be accurate.

It might be viewed by some as bureaucracy for the sake of bureaucracy, but considering the potential for abuse and damage to privacy rights, such administrative barriers are critical. More cities should take the same approach as San Francisco.

The private power of the edge

One of the conundrums quietly emerging over the last couple of months is how to maintain privacy while attempting to improve customer experience, but the power of the edge might save the day.

If telcos want to be able to improve customer experience, data needs to be collected and analysed. This might sound like a very obvious statement to make, but the growing privacy movement across the world, and the potential for new regulatory restraints, might make this more difficult.

This is where the edge could play a more significant role. One of the more prominent discussions from Mobile World Congress in Barcelona this year was the role of the edge, and it does appear this conversation has continued through to Light Reading’s Big 5G Event in Denver.

Some might say artificial intelligence and data analytics are solutions looking for a problem, but in this instance there is a very real issue to address. Improving customer experience through analytics will only be successful if the insight is acted on quickly, some might suggest in real time, so the models used to improve performance should be hosted on the edge. This is an example of where the latency business model can directly impact operations.

It also addresses a couple of other issues. The first is the cost of sending data back to a central data centre. As was pointed out today, telcos cannot afford to haul all customer data back for analysis; it is simply an unreasonable quantity. The more insight that can be actioned on the edge, with only the genuinely important insight sent back to train models, the more palatable customer experience management becomes.

The second is that the privacy issue is partly addressed. The more that is actioned on the edge, as close to the customer as possible, the smaller the concerns of privacy advocates. Yes, data is still being collected, analysed and (potentially) acted upon, but the sooner the insight is realised, the sooner the underlying data can be deleted.
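As a rough illustration of the pattern described above, here is a minimal Python sketch of a hypothetical edge node that scores sessions locally, forwards only the insight deemed worth backhauling and discards the raw records. Every name in it (SessionSample, score_experience, ANOMALY_THRESHOLD, the summary format) is an assumption for illustration, not any vendor’s actual API.

```python
# Minimal sketch of the edge pattern: analyse locally, forward only the
# insight worth keeping, and let the raw data be discarded at the edge.

from dataclasses import dataclass

ANOMALY_THRESHOLD = 0.8  # only notably poor sessions are worth sending upstream


@dataclass
class SessionSample:
    subscriber_id: str
    latency_ms: float
    packet_loss: float


def score_experience(sample: SessionSample) -> float:
    """Toy quality-of-experience score in [0, 1]; higher means worse."""
    return (min(1.0, sample.latency_ms / 200.0) * 0.7
            + min(1.0, sample.packet_loss / 0.05) * 0.3)


def process_on_edge(samples):
    """Run inference at the edge and return only the insights worth backhauling."""
    insights = []
    for sample in samples:
        score = score_experience(sample)
        if score >= ANOMALY_THRESHOLD:
            # Keep an anonymised summary, not the raw session record.
            insights.append({"cell": "edge-site-1", "severity": round(score, 2)})
        # The raw sample goes out of scope here: nothing is persisted or sent upstream.
    return insights


samples = [SessionSample("anon-1", 310.0, 0.06), SessionSample("anon-2", 40.0, 0.001)]
print(process_on_edge(samples))  # only the poor session produces an upstream insight
```

The design choice mirrors the argument in the article: the heavy, privacy-sensitive data stays at the edge, and only a small, anonymised slice travels back to the core to refine the models.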

There are still sceptics when it comes to the edge, the latency business case, artificial intelligence and data analytics, but slowly more cases are starting to emerge to add credibility.

Turns out real people sometimes hear what you say to smart speakers

The revelation that Amazon employs people to listen to voice recordings captured from its Echo devices has apparently surprised some people.

The scoop comes courtesy of Bloomberg and seems to have caught the public imagination, as it has been featured prominently by mainstream publications such as the Guardian and BBC News. Apparently Amazon employs thousands of people globally to help improve the voice recognition and general helpfulness of its smart speakers. That means they have to listen to real exchanges sometimes.

That’s it. Nothing more to see here, folks. One extra bit of spice was added by the detail that workers sometimes use internal chatrooms to share funny audio files, such as people singing in the shower. On a more serious note, some of them reckon they’ve heard crimes being committed but were told it’s not their job to interfere.

Amazon sent Bloomberg a fairly generic response amounting to a justification of the necessity of human involvement in the AI and voice recognition process but stressing that nothing’s more important to it than privacy.

Bloomberg’s main issue seems to be that Amazon doesn’t make it explicit enough that another person may be able to listen in on your private stuff through an Echo device. Surely anyone who knowingly installs and turns on a device that is explicitly designed to listen to your voice at all times must be at least dimly aware that there may be someone else on the other end of the line, but even if they’re not, it’s not obvious how explicit Amazon needs to be.

An underlying fact of life in the artificial intelligence era is that the development of AI relies on the input of as much ‘real life’ stuff as possible. Only by experiencing loads of real interactions and scenarios can a machine learn to mimic them and participate in them. In case there is any remaining doubt: if you introduce a device into your house that is designed to listen at all times, that’s exactly what it will do.

Qualcomm moves to the edge with Cloud AI 100 chip

Mobile chip-maker Qualcomm reckons all the stuff it has learned about processing AI in smartphones will come in handy in datacentres too.

The Qualcomm Cloud AI 100 Accelerator is a special chip designed to process artificial intelligence in the cloud. Specifically, Qualcomm seems to think it has an advantage when it comes to ‘AI inference’ processing – i.e. running algorithms that have already been trained with loads of data. This stands to reason, as it has its chips in millions of smart devices, all of which will have been asked to do some inference processing of their own from time to time.
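To make the training/inference distinction concrete, here is a tiny, hedged Python sketch: the model and its weights are entirely made up for illustration, and the point is simply that inference is a cheap forward pass with weights that were trained elsewhere, which is the workload accelerators of this kind target.

```python
# Toy illustration of the training vs. inference split (hypothetical weights).
import numpy as np

# "Training" happens once, offline, on lots of data; here we just pretend
# these weights are the finished result of that process.
trained_weights = np.array([[0.4, -0.2], [0.1, 0.9]])
trained_bias = np.array([0.05, -0.1])


def infer(features: np.ndarray) -> int:
    """Inference: a cheap forward pass with already-trained, frozen weights."""
    logits = features @ trained_weights + trained_bias
    return int(np.argmax(logits))


print(infer(np.array([0.7, 0.3])))  # classify one sample using the frozen model
```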

“Today, Qualcomm Snapdragon mobile platforms bring leading AI acceleration to over a billion client devices,” said Qualcomm Product Management SVP Keith Kressin. “Our all new Qualcomm Cloud AI 100 accelerator will significantly raise the bar for the AI inference processing relative to any combination of CPUs, GPUs, and/or FPGAs used in today’s datacentres. Furthermore, Qualcomm Technologies is now well positioned to support complete cloud-to-edge AI solutions all connected with high-speed and low-latency 5G connectivity.”

The datacentre chips in question are currently largely provided by Intel, although Nvidia has done a great job of converting its struggling mobile chip efforts into a successful AI processing operation. Qualcomm claims a 10x performance-per-watt advantage over incumbent AI inference chips and, while it didn’t call out any competitors in its press release, the prominence of their names in the headlines of other stories covering this launch makes it likely that was the angle pushed behind the scenes.

Europe unveils its own attempt to address ethical AI

Addressing the ethical implications of artificial intelligence has become very fashionable in recent months, and right on cue, the European Commission has produced seven guidelines for ethical AI.

The guidelines themselves are not much more than a theoretical playbook for companies to build products and services around for the moment. However, any future legislation developed to guide the development of AI in the European Union will likely use these guidelines as its foundation. It might not seem critical for the moment, but it could offer some insight into future regulation and legislation.

“The ethical dimension of AI is not a luxury feature or an add-on,” said Vice-President for the Digital Single Market Andrus Ansip. “It is only with trust that our society can fully benefit from technologies. Ethical AI is a win-win proposition that can become a competitive advantage for Europe: being a leader of human-centric AI that people can trust.”

“We now have a solid foundation based on EU values and following an extensive and constructive engagement from many stakeholders including businesses, academia and civil society,” said Commissioner for Digital Economy and Society Mariya Gabriel. “We will now put these requirements to practice and at the same time foster an international discussion on human-centric AI.”

The seven guidelines are as follows:

  1. Human agency and oversight: AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy.
  2. Robustness and safety: Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems.
  3. Privacy and data governance: Citizens should have full control over their own data, while data concerning them will not be used to harm or discriminate against them.
  4. Transparency: The traceability of AI systems should be ensured.
  5. Diversity, non-discrimination and fairness: AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility.
  6. Societal and environmental well-being: AI systems should be used to enhance positive social change and enhance sustainability and ecological responsibility.
  7. Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.

The Commission will now launch a pilot phase with industry and academia to make sure the guidelines are realistic to implement in real-world cases. The results of this pilot will inform any measures taken by the Commission or national governments moving forward.

This is one of the first official documents produced to support the development of AI, though many parties around the world are attempting to weigh in on the debate. It is critically important for governments and regulators to take a stance, such is the profound impact AI will have on society, though private industry is attempting to make itself heard as well.

From private industry’s perspective, the mission statement is relatively simple: ensure any bureaucratic processes don’t interfere too much with the ability to make money. Google was the latest to attempt to create its own advisory board to play the lobbying game, but this was nothing short of a disaster.

Having set up the board with eight ‘independent’ experts, Google scrapped the plan almost immediately after employees criticised one of the board members for not falling on the right side of the political divide. This might have been an embarrassing incident, though the advisory board was hardly going to achieve much anyway.

Google suggested the board would meet four times a year to review the firm’s approach to AI. Considering AI is effectively embedded, or will be, in everything Google does, a quarterly assessment was hardly going to provide any actionable insight; there would simply be too much to cover in a short period of time. This was nothing more than a PR plug by an internet giant obsessed with appearing to be on the side of the consumer.

AI will have a significant impact on the world and almost everyone’s livelihood. For some, jobs will be enhanced, but there will always be pain. Some will find their jobs redundant, some will find their careers extinguished. Creating ethical guidelines for AI development and deployment will be critical and Europe is leading the charge.

Google caves in to employee activism… this time

The Silicon Valley search giant has decided to dissolve its AI ethical council, one week after it was created, in response to opposition from its own employees. But it’s not always so responsive to their concerns.

A week after the Advanced Technology External Advisory Council (ATEAC) was created, Google told Vox that it has decided to cancel the project. Controversy had followed the project from the start, especially surrounding one council member Google enlisted. This prompted an internal petition that attracted the signatures of more than 2,300 employees, as well as the resignation of one council member. The sole purpose of ATEAC, with its members unpaid and the body without any decision-making power, seems to have been to generate good PR. In that respect it represents a spectacular own-goal, so Google has bravely run away.

“It’s become clear that in the current environment, ATEAC can’t function as we wanted. So we’re ending the council and going back to the drawing board. We’ll continue to be responsible in our work on the important issues that AI raises, and will find different ways of getting outside opinions on these topics,” Google said in the statement it sent to Vox.

This is not the first time that Google has “listened to employees”. In June 2018, Google famously ditched its contract with the US military after more than 3,000 employees protested against the company’s AI technology being used for military surveillance in the so-called Project Maven.

But Google has not always respected its employees’ views. Almost exactly a year after he disclosed that Google was secretly working on a censored version of its search engine for China, Ryan Gallagher, the reporter for The Intercept, updated interested readers with the news that Google was closer to readiness with the so-called Project Dragonfly. Some senior executives were said to be conducting a secret “performance review” of the product, contrary to Google’s normal practice of involving large numbers of employees when assessing upcoming products.

Despite more than 1,400 employees condemning Project Dragonfly, a number of resignations and Google’s CEO having to testify in front of Congress, Google looks rather determined to push forward with its China re-entry strategy. The Financial Times reported that the search and online advertising giant recently suspended serving ads on two Chinese websites that evaluate VPNs, which would have helped users inside the Great Firewall to bypass the blocking. A local research firm told the FT that, considering the acrimonious nature of Google’s departure from China nine years ago, the company “may feel compelled to make additional efforts to curry favour and get back in the good graces to get approval to re-enter the market.”

So it is not clear whether Google decided not to back down on Dragonfly because the number of protesting employees was smaller or the resignations lower-profile, or whether it is simply more convenient to disband a rubber-stamp council and discontinue a contract with the American military than to resist the temptation of the Chinese market and stand up to the censorship demands of the Chinese authorities.