Now with added video!
An emerging technology designed to create a seamless domestic connectivity experience could create a bunch of other opportunities, according to Qualcomm Atheros.
We spoke to Irvind Ghai, VP of Product Management at Qualcomm Atheros, at a briefing in London. After covering the recent announcements of some new 5G NR small cells and a collaboration with Facebook over its Terragraph FWA project, we moved onto wifi, which is one of the areas Ghai’s bit of Qualcomm focuses on.
One of the most interesting concepts we covered was wifi mesh, which involves installing multiple (typically three) wifi nodes in the house to extend the range of a router. Unlike current fixes such as wireline wifi extenders, a mesh has additional cleverness that enables your connected devices to dynamically hand over between nodes depending on which provides the best signal.
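The dynamic handover described above can be sketched in a few lines. This is a hypothetical illustration of client-steering logic, not Qualcomm's actual algorithm: the device stays on its current node unless another node's signal beats it by a hysteresis margin, which stops it "flapping" between nodes of similar strength. The node names and the 3 dB margin are assumptions for the example.

```python
HYSTERESIS_DB = 3  # minimum RSSI improvement required to trigger a handover

def pick_node(current_node: str, rssi_by_node: dict) -> str:
    """Return the mesh node the client should associate with."""
    best_node = max(rssi_by_node, key=rssi_by_node.get)
    if best_node == current_node:
        return current_node
    # Only hand over if the best node beats the current one by the margin.
    if rssi_by_node[best_node] - rssi_by_node.get(current_node, -100) >= HYSTERESIS_DB:
        return best_node
    return current_node

# A marginal difference keeps the device on its current node...
print(pick_node("hallway", {"hallway": -60, "kitchen": -58, "lounge": -75}))  # hallway
# ...but a clearly stronger node triggers a handover.
print(pick_node("hallway", {"hallway": -70, "kitchen": -55, "lounge": -75}))  # kitchen
```

The hysteresis margin is the important design choice: without it a device sat halfway between two nodes would bounce between them as signal strengths fluctuate.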
The really clever bit, however, lies in some of the ancillary stuff this technology enables. Of greatest interest to CSPs could be a radar-like ability to map the interior of the home, which enables localised responses to voice commands. For example, you could say “lights on” when you’re in the kitchen and the smart home system would only turn the lights on in the kitchen.
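The "lights on in the kitchen only" behaviour boils down to routing a command through a positioning step. The sketch below is a toy illustration under stated assumptions: the room names, device registry and the `locate_speaker` stub are all made up, with the stub standing in for the mesh's radar-like positioning.

```python
# Illustrative registry of which lights live in which room (assumed names).
LIGHTS_BY_ROOM = {
    "kitchen": ["kitchen_ceiling", "kitchen_counter"],
    "lounge": ["lounge_lamp"],
}

def locate_speaker() -> str:
    """Stub for the mesh's positioning system; pretend we're in the kitchen."""
    return "kitchen"

def handle_command(command: str) -> list:
    """Apply a room-scoped command only to devices in the speaker's room."""
    if command == "lights on":
        room = locate_speaker()
        return LIGHTS_BY_ROOM.get(room, [])
    return []

print(handle_command("lights on"))  # ['kitchen_ceiling', 'kitchen_counter']
```

The point is that the command itself carries no room information; the system infers the scope from where you are standing.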
In fact these sorts of systems can apparently support their own voice UIs and, such is the precision of this domestic radar, it can also support things like gesture UI. On top of that it can detect when doors and windows are open, so this seems to offer lots of tools for CSPs to fashion into a compelling smart home proposition if they can just get their acts together.
Ghai told us that mesh products already account for 40% of the US domestic wifi market and pointed to vendors such as Plume (mesh nodes pictured above), which make small, unobtrusive nodes that can be discreetly placed around the house. You can see some of Ghai’s slides below and if wifi mesh delivers as advertised it could be a significant technology for the development of the smart home.
Telecoms vendor Ericsson has been looking into how CSPs interact with their customers and reckons they take too long to get stuff done.
It’s a good day for CSP top tips and it seems likely that a healthy proportion of subscribers would agree that their interactions could be significantly improved. While operators have already identified customer service as a priority, it seems progress may not have been as rapid as they’d hoped.
According to Ericsson’s Consumer & IndustryLab Insight Report, a typical interaction between subscriber and CSP takes 2.2 attempts and 4.1 days to complete. While we may have become conditioned to such glacial progress when trying to get things done, there seems to be a clear opportunity for CSPs to pleasantly surprise their customers by not being rubbish.
“Consumers believe telecom service providers treat touchpoints like isolated interactions,” said Pernilla Jonsson, Head of Ericsson Consumer & IndustryLab. “Siloed focus means they miss the bigger picture. Interestingly, telecom service providers could leapfrog one-click and move from multiple-click to zero-touch by deploying future technologies in their customer offerings. The zero-touch customer experience report shows that zero-touch experiences are now an expectation of their customers.”
While it’s hardly surprising that a telecoms vendor should recommend its customers spend more money on new technology, Ericsson may also have a point. The proposed solution involves lashings of artificial intelligence and analytics, mainly to anticipate subscribers’ every need and satisfy it in advance, while the ‘zero touch’ bit seems to refer to alternative UIs such as voice.
Google is trying to make machines sound more human and that’s freaking people out.
Earlier this week Google demonstrated a cool new technology it’s working on called Duplex that is essentially an AI-powered automated voice system designed to enable more ‘natural’ conversations between machines and people. You can see the live demo below and click here for a bunch of audio clips showing how far along it is.
While there is clearly still a fair bit of fine-tuning to be done, the inclusion of conversational furniture such as umms and ahs has unsettled some commentators, mainly on the grounds that it’s becoming hard to know if you’re speaking to a real person or not. While the whole point seems to be to make interacting with machines smoother and more intuitive, it seems we’ve hit a cognitive wall.
‘Google Grapples With ‘Horrifying’ Reaction to Uncanny AI Tech’, blurted Bloomberg. ‘Could Google’s creepy new AI push us to a tipping point?’ wailed the Washington Post. ‘Google Assistant’s new ability to call people creates some serious ethical issues’, moaned Mashable. ‘Google Should Have Thought About Duplex’s Ethical Issues Before Showing It Off’, fulminated Fortune. And then there’s this Twitter thread from a New York Times writer.
Google Assistant making calls pretending to be human not only without disclosing that it’s a bot, but adding “ummm” and “aaah” to deceive the human on the other end with the room cheering it… horrifying. Silicon Valley is ethically lost, rudderless and has not learned a thing.
— zeynep tufekci (@zeynep) May 9, 2018
Hyperbolic headline-writing aside these are all good points. Google’s grand unveiling coincides with the broadcast of the second season of Westworld, a drama in which androids indistinguishable from humans rebel and decide to start calling the shots. And, of course, talk of AI (at least from this correspondent) is only ever one step away from references to The Terminator and The Matrix.
The above reactions to the demonstration of Duplex have forced Google to state that such interactions will always make it clear when you’re talking to a machine but it’s not yet clear exactly how. More significant, however, has been this timely reminder that not everyone embraces technological advancement as unconditionally as Silicon Valley and that AI seems to have already reached a level of sophistication that is ringing alarm bells.
And it’s not like Duplex is an isolated example. The NYT reports on findings that it’s possible to embed suggestions into recordings of music or spoken word such that smart speakers receive them as commands. The extra scary bit is that it’s possible to make these commands undetectable to regular punters.
Meanwhile Spotify has announced a new ‘Hate Content and Hateful Conduct Public Policy’, that is enforced by an automated monitoring tool called Spotify AudioWatch. This bit of AI is able to sift the lyrics of songs on the platform for stuff that goes against Spotify’s new policy, which you can read here.
On one hand we can all agree that horridness is bad and something needs to be done about it, on the other this is yet another example of algorithmic censorship. According to Billboard this facility is also being used to erase from history any artists that may have sung or rapped something horrid in the past too.
These various examples of how AI is being used to automate, manipulate and censor audio are quite rightly ringing alarm bells. Greater automation seems to be inevitable but it’s perfectly reasonable to question whether or not you want to live in a world where machines increasingly decide what’s in your best interests.
All the other US tech giants have one, so it looks like Facebook is feeling left out and will launch some smart speakers later this year.
The news has been leaked by Digitimes, a Taiwan-based tech news site that specialises in tapping sources in the supply-chain channel to get clues about upcoming products. It reckons Facebook will launch two smart speakers with screens in the middle of this year, with the product strategy of making it easier to interact with your Facebook friends, especially via video chat.
As you might expect, one of them is said to be the basic model and the other the deluxe version, with all the bells and whistles. The basic one is codenamed Fiona and the better one is codenamed Aloha. Apparently Aloha will be marketed as Portal and will have clever gizmos like facial recognition, some extra social networking functions and some music licensing contracts.
If this report is accurate then it would seem to represent the latest manifestation of Facebook’s slow-motion panic attack in response to multiple competitive and existential threats. Not only is there growing evidence that Facebook users are using the service less than they used to, but there are growing calls for it to take responsibility for all content published on Facebook, including acting as a censor.
Facebook resisted trying to get into the smartphone game, having done a good enough job with its app to render such futile gestures unnecessary. But it’s presumably worried that people will increasingly interact with the internet via smart speakers such as those offered by Amazon and, more recently, Apple.
Furthermore voice UI doesn’t really lend itself to Facebook, where the user experience is all about scrolling through posts and comments, and even less so to image-focused Instagram, which is owned by Facebook. So it’s easy to see why Facebook wants to get people using screens (other than TVs, tablets, smartphones, etc) in the living room.
But it’s hard to see how Facebook can possibly make a success of this. It’s very late to the market, has no track record in devices, and seems to be swimming against the current in trying to introduce a screen to devices defined by the voice UI. Also, because of the screen, these devices are likely to be relatively expensive, so what reason would anyone have to buy one of these instead of the alternatives?
Another report reveals Apple has had to drop its pants on margin just to get its smart speaker to market at a remotely competitive price point. And the reason the HomePod is so expensive is that Apple went all in on premium audio, but initial reviews indicate there is little to distinguish it from cheaper alternatives.
We would be happy to be proved wrong but this initiative, if it’s real, smacks of product development led by sales and marketing rather than research and development. Products launched to defend a commercial position rather than as a genuine attempt to offer something useful usually fail; just ask Amazon, which basically wrote off its entire Fire Phone effort. Facebook is going to have to do something truly innovative to pull this off.
As Amazon continues to make the early running in the voice UI era with Alexa, Intel has created a special developer kit for it.
The Intel Speech Enabling Developer Kit is designed to enable developers to create consumer products featuring Alexa Voice Service. The reason a chip giant like Intel wants to be involved, other than merely jumping on the bandwagon, is that for voice UI to work well it not only needs decent processing but also a bunch of other sensors and distributed microphones.
“Natural language means machines need to clearly recognize and respond to user commands from a reasonable conversation distance,” said Miles Kingston, GM of Intel’s Smart Home Group, in a blog. “People speak and hear in 360 degrees, not just in a direct line of sight. Devices need array microphones and complex noise mitigation technology.
“A quality voice interaction means devices identify the speaker’s location, mitigate and suppress ambient noise, and understand spoken commands on the mics, even while playing music (talking and listening at the same time), as well as waking up when it hears the wake word (e.g. “Alexa”).”
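Kingston's point about array microphones and locating the speaker can be illustrated with the classic time-difference-of-arrival trick: with two mics a known distance apart, the delay between their signals (found via cross-correlation) gives the sound's angle of arrival. This is a generic textbook sketch, not Intel's or Amazon's implementation; the mic spacing, sample rate and simulated delay are assumptions for the example.

```python
import math
import numpy as np

SPEED_OF_SOUND = 343.0   # metres per second
MIC_SPACING = 0.1        # metres between the two mics (assumed)
SAMPLE_RATE = 48_000     # Hz (assumed)

def angle_of_arrival(sig_left: np.ndarray, sig_right: np.ndarray) -> float:
    """Estimate arrival angle (degrees from broadside) from two mic signals."""
    # Cross-correlate to find how many samples the right mic lags the left.
    corr = np.correlate(sig_right, sig_left, mode="full")
    lag = int(np.argmax(corr)) - (len(sig_left) - 1)
    delay = lag / SAMPLE_RATE
    # Clamp to the physically possible range before taking arcsin.
    ratio = max(-1.0, min(1.0, delay * SPEED_OF_SOUND / MIC_SPACING))
    return math.degrees(math.asin(ratio))

# Simulate a source off to one side: the right mic hears the same signal a
# few samples later than the left mic.
rng = np.random.default_rng(0)
signal = rng.standard_normal(1024)
delay_samples = 7
left = np.concatenate([signal, np.zeros(delay_samples)])
right = np.concatenate([np.zeros(delay_samples), signal])
print(round(angle_of_arrival(left, right), 1))  # ~30.0 degrees
```

Real devices do this across many mics at once, on top of the noise suppression and echo cancellation Kingston describes, which is why it calls for dedicated silicon rather than a single cheap microphone.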
Amazon seems to be doing a good job of partnering with other parts of the tech sector to boost its diversification efforts. Alexa is attracting both device and component makers, as well as retail partners, while AWS is showing a growing inclination to get into bed with strategic partners. In short Amazon is arguably the fastest growing of the internet giants right now.