Sonos says Google has been stealing its patented tech for years

Wireless audio specialist Sonos is suing internet giant Google, claiming it has knowingly used its patented technology without paying for it since 2016.

The grievance relates to Google’s portfolio of smart, wireless, networked speakers, now going under the collective brand of Google Home. Sonos says it gave Google access to its patents in 2013 in order to allow Google Play Music to work on the Sonos platform. At the time Google had no competing hardware.

A couple of years later, however, the Chromecast Audio dongle was launched, which promised to connect regular speakers to the internet. Initially, as the Guardian reported, each connected speaker required its own audio source. But within a couple of months Google added a bunch of additional functionality, including multi-room support, which seems to be the first of the alleged patent infringements.

Here are the main patents Sonos claims are being infringed, although there are another 23 not detailed in the suit:

  • U.S. Patent No. 8,588,949 – Method and Apparatus for Adjusting Volume Levels in a Multi-Zone System
  • U.S. Patent No. 9,195,258 – System and Method for Synchronizing Operations Among a Plurality of Independently Clocked Digital Data Processing Devices
  • U.S. Patent No. 9,219,959 – Multi Channel Pairing in a Media System
  • U.S. Patent No. 10,209,953 – Playback Device
  • U.S. Patent No. 10,439,896 – Playback Device Connection
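To give a flavour of what the synchronization patent (9,195,258) is about: multi-room playback only works if speakers with independently drifting clocks can agree on when to start each sample. The sketch below is purely illustrative and is not Sonos' or Google's actual method; it shows the classic NTP-style exchange for estimating the offset between two clocks, with all function names and numbers invented for the example.

```python
# Illustrative sketch of the problem the synchronization patent addresses:
# aligning playback across devices whose local clocks drift independently.
# Not Sonos' actual algorithm; names and figures are hypothetical.

def estimate_offset(t_send, t_recv_remote, t_reply_remote, t_recv_local):
    """NTP-style estimate of how far the remote (coordinator) clock
    is ahead of the local clock.

    t_send         -- local time when the sync request was sent
    t_recv_remote  -- remote time when the request arrived
    t_reply_remote -- remote time when the reply was sent
    t_recv_local   -- local time when the reply arrived
    """
    return ((t_recv_remote - t_send) + (t_reply_remote - t_recv_local)) / 2.0

def local_play_time(target_remote_time, offset):
    """Convert a start time on the coordinator's clock into this
    device's local clock, so every speaker starts the same sample
    at the same wall-clock instant."""
    return target_remote_time - offset

# A device whose local clock runs 0.25 s behind the coordinator's:
offset = estimate_offset(10.0, 10.35, 10.36, 10.21)  # 0.25
start = local_play_time(100.0, offset)               # 99.75 local time
```

In practice devices would repeat this exchange and smooth the estimate, since network jitter makes any single round-trip noisy.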

“Google is an important partner with whom we have collaborated successfully for years, including bringing the Google Assistant to the Sonos platform last year,” said Sonos CEO, Patrick Spence. “However, Google has been blatantly and knowingly copying our patented technology in creating its audio products.

“Despite our repeated & extensive efforts over the last few years, Google has not shown any willingness to work with us on a mutually beneficial solution. We’re left with no choice but to litigate in the interest of protecting our inventions, our customers, and the spirit of innovation that’s defined Sonos from the beginning.”

We’re not aware of any public response from Google, but even if there is one it will just be templated legalese claiming innocence, so let’s take that as read. Sonos has filed suit in the US District Court for the Central District of California and also asked the International Trade Commission to block the importing into the US of any of the products claimed to infringe the patents. If Google is guilty of any of this it would be well advised to settle quickly, as the PR from exploiting such a well-loved tech brand is unlikely to be good.

The next generation of Bluetooth audio looks good

At CES 2020 the organization that oversees the short-range Bluetooth wireless standard unveiled a new generation of its audio technology that promises a lot of new features.

The Bluetooth SIG (special interest group) is calling the next generation LE Audio, as it is an evolution of Bluetooth Low Energy. Indeed LE Audio uses a new codec called LC3 that promises to improve sound quality while significantly reducing the power requirement. This in turn should enable even smaller wireless earbuds and that sort of thing.

“Extensive listening tests have shown that LC3 will provide improvements in audio quality over the SBC codec included with Classic Audio, even at a 50% lower bit rate,” said Manfred Lutzky, Head of Audio for Communications at Fraunhofer IIS. “Developers will be able to leverage this power savings to create products that can provide longer battery life or, in cases where current battery life is enough, reduce the form factor by using a smaller battery.”
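To make Lutzky's point concrete with some back-of-the-envelope arithmetic: if the radio's power draw scales roughly with bit rate on top of a fixed baseline (DSP, amplifier and so on), halving the codec's bit rate stretches playback time considerably. All of the numbers below are illustrative assumptions, not Bluetooth SIG figures.

```python
# Back-of-the-envelope sketch of why a lower-bit-rate codec extends
# battery life. Every figure here is an invented assumption.

def playback_hours(battery_mwh, radio_mw_per_kbps, bitrate_kbps, fixed_mw):
    """Estimated playback time when radio power scales with bit rate
    on top of a fixed baseline draw (DSP, amplifier, etc.)."""
    total_mw = fixed_mw + radio_mw_per_kbps * bitrate_kbps
    return battery_mwh / total_mw

# Assumed 200 mWh earbud battery, 0.02 mW per kbps radio cost, 5 mW baseline:
sbc = playback_hours(200, 0.02, 328, 5)  # SBC at a high-quality bit rate
lc3 = playback_hours(200, 0.02, 160, 5)  # LC3 at roughly half the bit rate
```

Under these made-up numbers the LC3 case plays for noticeably longer on the same battery, which is the trade-off Lutzky describes: spend the saving on runtime, or keep runtime the same and shrink the battery.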

On top of that this new tech comes with multi-stream audio for the first time. “Developers will be able to use the Multi-Stream Audio feature to improve the performance of products like truly wireless earbuds,” said Nick Hunn, CTO of WiFore Consulting and Chair of the Bluetooth SIG Hearing Aid Working Group. “For example, they can provide a better stereo imaging experience, make the use of voice assistant services more seamless, and make switching between multiple audio source devices smoother.”

Similarly, another new feature enables multiple BT peripherals to access a single audio source. This is handy not just as another way of sharing audio content, but also for location-based audio that could intrude upon your listening, presumably with permission. The low power aspect also allows better support for hearing aids, which could also benefit from the broadcast feature for things like safety announcements.

“Location based Audio Sharing holds the potential to change the way we experience the world around us,” said Peter Liu of Bose Corporation and member of the Bluetooth SIG Board of Directors. “For example, people will be able to select the audio being broadcast by silent TVs in public venues, and places like theaters and lecture halls will be able to share audio to assist visitors with hearing loss as well as provide audio in multiple languages.”

It seems safe to assume that the Bluetooth chips in new devices and peripherals will support this next generation from now on. Assuming it delivers as advertised, there’s nothing to dislike about Bluetooth LE Audio. It seems to be a solid evolution of the technology that should improve the digital audio experience for people with nearly all levels of hearing capacity.

AI audio is getting scary

Google is trying to make machines sound more human and that’s freaking people out.

Earlier this week Google demonstrated a cool new technology it’s working on called Duplex, which is essentially an AI-powered automated voice system designed to enable more ‘natural’ conversations between machines and people. Google demonstrated it live on stage and has published a bunch of audio clips showing how far along it is.

While there is clearly still a fair bit of fine-tuning to be done, the inclusion of conversational furniture such as umms and ahs has unsettled some commentators, mainly on the grounds that it’s becoming hard to know if you’re speaking to a real person or not. While the whole point seems to be to make interacting with machines smoother and more intuitive, it seems we’ve hit a cognitive wall.

‘Google Grapples With ‘Horrifying’ Reaction to Uncanny AI Tech’, blurted Bloomberg. ‘Could Google’s creepy new AI push us to a tipping point?’ wailed the Washington Post. ‘Google Assistant’s new ability to call people creates some serious ethical issues’, moaned Mashable. ‘Google Should Have Thought About Duplex’s Ethical Issues Before Showing It Off’, fulminated Fortune. And then there’s this Twitter thread from a New York Times writer.

Hyperbolic headline-writing aside, these are all good points. Google’s grand unveiling coincides with the broadcast of the second season of Westworld, a drama in which androids indistinguishable from humans rebel and decide to start calling the shots. And, of course, talk of AI (at least from this correspondent) is only ever one step away from references to The Terminator and The Matrix.

The above reactions to the demonstration of Duplex have forced Google to state that such interactions will always make it clear when you’re talking to a machine, but it’s not yet clear exactly how. More significant, however, has been this timely reminder that not everyone embraces technological advancement as unconditionally as Silicon Valley, and that AI seems to have already reached a level of sophistication that is ringing alarm bells.

And it’s not like Duplex is an isolated example. The NYT reports on findings that it’s possible to embed suggestions into recordings of music or spoken word such that smart speakers receive them as commands. The extra scary bit is that it’s possible to make these commands undetectable to regular punters.

Meanwhile Spotify has announced a new ‘Hate Content and Hateful Conduct Public Policy’, which is enforced by an automated monitoring tool called Spotify AudioWatch. This bit of AI is able to sift the lyrics of songs on the platform for stuff that goes against Spotify’s new policy, which you can read here.
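Spotify hasn't published how AudioWatch actually works, and a real system would be far more sophisticated than simple word matching. As a toy illustration of what screening lyrics against a policy blocklist involves at its crudest, though, it might look something like this (all names and terms here are hypothetical):

```python
# Toy illustration of lyric screening against a blocklist.
# NOT how Spotify AudioWatch works; names and terms are invented.

BLOCKLIST = {"hateterm1", "hateterm2"}  # placeholder policy terms

def flag_track(lyrics: str) -> bool:
    """Return True if any blocklisted term appears in the lyrics,
    after stripping basic punctuation and lowercasing."""
    words = {w.strip(".,!?\"'").lower() for w in lyrics.split()}
    return not BLOCKLIST.isdisjoint(words)
```

Even this crude sketch hints at the censorship problem raised below: a bare word match has no notion of context, irony or quotation, which is precisely why handing such decisions to an algorithm makes people nervous.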

On one hand, we can all agree that horridness is bad and something needs to be done about it; on the other, this is yet another example of algorithmic censorship. According to Billboard this facility is also being used to erase from history any artists that may have sung or rapped something horrid in the past.

These various examples of how AI is being used to automate, manipulate and censor audio are quite rightly ringing alarm bells. Greater automation seems to be inevitable but it’s perfectly reasonable to question whether or not you want to live in a world where machines increasingly decide what’s in your best interests.