ITV and BBC to launch yet another streaming service

The streaming segment is already looking pretty crowded, but ITV and the BBC have decided to pair up to add another premium option into the mix with BritBox.

Priced at £5.99 a month, the service will launch in the final quarter of 2019, undercutting digital rivals who have been making life so difficult for the traditional players these last few years.

“We have a world beating TV industry with outstanding content,” said BBC Director General, Tony Hall. “The BBC and ITV are at the centre of that. Together, we have been responsible for delivering the majority of ‘must see’ moments on British TV over the last decade. That ‘must see’ content will now be on BritBox.”

“The agreement to launch BritBox is a milestone moment. Subscription video on demand is increasingly popular with consumers who love being able to watch what they want when they want to watch it,” said Carolyn McCall, CEO of ITV. “They are also happy to pay for this ease of access to quality content and so BritBox is tapping into this, and a new revenue stream for UK public service broadcasters.”

Although negotiations occasionally seemed to be on rocky ground, with the BBC appearing to show a preference for its own iPlayer service, the joint offering might attract interest from various segments of the UK population. The BBC has attracted plaudits for its programming over the last few years, especially around multi-series boxsets, which is where this service will focus.

BritBox will now become the home for all BBC and ITV programming once it falls off the broadcasters’ existing streaming services. Although ITV is free to keep its programming on the ITV Hub for as long as it wants, it is believed Ofcom will force the BBC to make its programming available on the iPlayer for a year.

Bearing in mind both of these organizations already operate free streaming services, some might question what the point of this new service actually is, though there is an opportunity to combine forces and drive original programming exclusively for BritBox. Details are relatively thin on the ground, but the team will take the same approach as Netflix and Amazon, creating exclusive original content to attract subscribers.

Netflix share price slumps 11% as Q2 falls short of expectations

Netflix blames price increases and an under-performing content slate for poor performance in the second quarter of 2019.

Revenues and profits might be up, but these are two metrics which have seemingly never bothered Netflix shareholders that much. What seems to be bothering the twitchy investors is tepid subscriber growth and increased competition in the streaming segment; ultimately both will feed into the revenue and profitability metrics, but the point is to cast an eye on the horizon, not just today.

And it appears some investors do not share the same optimism as the Netflix management team, as the share price slumped 11% in overnight trading.

“… as you can see over the past 3 years, sometimes we forecast high,” CEO Reed Hastings said during the earnings call. “Sometimes we forecast low. This is one where we forecasted high. There was no one thing.

“And if I think about three years ago, we were also light, and we never really were confident of the explanation. Then, we were $2 billion in quarterly revenue. Now, we’re going on $5 billion. And so, it’s easy to over-interpret the quarter membership adds, which are a bit noisy. So, for the most part, we’re just executing forward and trying to do the best forecast we can.”

Subscription numbers are of course all-important for what is effectively a single-revenue-stream business model, and the numbers aren’t the most flattering: 2.7 million net additions compared to the 5 million forecast at the beginning of the quarter, including a net loss of 130,000 subscribers in the US. When you consider the US currently accounts for roughly 48% of total revenues, this becomes an issue.

Hastings is playing his hand exactly as you would expect. In the early years, Netflix told investors not to worry about the money as subscriber gains are going through the roof; the money would come eventually. Now, Hastings and co. are telling investors not to worry about subscribers because the money is rolling in.

[Chart: Netflix subscriber numbers]

What is worth noting is that one bad forecast, one dip in subscribers, does not engulf Netflix in flames. It’s not ideal, but as long as it doesn’t become a trend there shouldn’t be much to worry about.

Understanding why is of course critical, however, if Netflix is to rebound to the 7 million subscription gains it is promising over the next three months.

Firstly, price increases have not landed well with subscribers, according to CFO Spencer Neumann. The biggest churn rates across the world were in the markets where Netflix had announced price increases, the US for example, though strong performance in Q1 and a poor content slate over Q2 also contributed to the numbers.

Neumann suggests the first quarter of 2019 was particularly strong, perhaps pulling forward subscriptions and emptying the sales funnel for the second quarter, while the team has suggested the content slate was not as attractive as in previous periods. This should change in the future, however, as the business moves from a licensing model to one which is more governed by original content.

Over the next couple of years, certain companies are going to start pulling their content from the Netflix catalogue; Disney and WarnerMedia/AT&T are prime examples. Not only will this decrease the variety of content on the platform, removing some fan favourites as well, but it will also strengthen the opposition.

Netflix has not blamed competition for the poor performance this quarter, suggesting there has been no material change to the competitive landscape just yet, though some shareholders might be getting a bit worried. Netflix is facing some difficulties at the moment, while bigger disruptions are on the horizon with Disney and AT&T readying their own streaming services.

[Chart: Netflix financials]

What is also worth noting is the content position; Netflix is the market leader in the streaming market and is in the strongest position to deal with increased competition. There will of course be some difficult conversations to be had in the future, but this is still a business heading in the right direction.

Total revenue for the quarter was $4.9 billion, up 26% year-on-year, while the management team is forecasting $5.2 billion for the next three months, a 31% year-on-year jump. Globally, paid memberships increased to 151.5 million, while there were more than 7 million free trials across the second quarter.

On the pricing front, although this might have a slight negative impact on churn and subscription gains in the short-term, collecting additional revenue each month is only going to be a positive in the long-term. Netflix is still very affordable for the majority.

Looking forward, the team has suggested the first couple of weeks of the current quarter have demonstrated an acceleration in subscriptions, while churn is returning to the levels experienced prior to the price increase. And while it might seem internet TV is taking over the world, there is still plenty of room for growth according to the team, both in terms of the linear/streaming dynamic and the opportunity on mobile.

Another factor to consider is the spend being allocated to original and localised content; few in the industry can compete with the Netflix numbers in this column. And as content becomes more fragmented across the various streaming platforms, the more original content Netflix produces, the more attractive it becomes as a service to consumers.

Netflix is still in a very strong position, but it is not going to enjoy the same unchallenged dominance it has experienced over the last few years. A diluted content library, price hikes and increased fragmentation will have a say in the fate of the business, but it is still in the strongest position in this increasingly competitive segment.

AT&T gets streaming with HBO Max

Gone are the days when the consumer could get all the content they wanted in one place as AT&T’s WarnerMedia joins the streaming landgrab.

With Netflix, Amazon Prime, Hulu, Disney, HBO and numerous other streaming services on the market before too long, the fragmentation of content is looking like it could be a serious problem for the consumer. Whether splitting the spoils has an overarching negative impact on the segment’s profits remains to be seen, but customers’ wallets can only be pushed so far; how many streaming services can each customer be expected to have?

That said, AT&T is in a strong position with this proposition. In HBO, it owns a lot of promising content already, playing into consumer nostalgia, and it does seem to be heading in the right direction in terms of original programming.

“HBO Max will bring together the diverse riches of WarnerMedia to create programming and user experiences not seen before in a streaming platform,” said Robert Greenblatt, Chairman of WarnerMedia Entertainment and Direct-To-Consumer.

“HBO’s world-class programming leads the way, the quality of which will be the guiding principle for our new array of Max Originals, our exciting acquisitions, and the very best of the Warner Bros. libraries, starting with the phenomenon that is ‘Friends’.”

With the service set to debut in Spring 2020, AT&T is promising 10,000 hours of programming from the outset. Full series of ‘The Fresh Prince of Bel-Air’, ‘Friends’ and ‘Pretty Little Liars’ will feature in the content library, as well as new dramas such as ‘Batwoman’ and ‘Katy Keene’.

Looking at future Max Original series, the list is quite extensive. ‘Dune: The Sisterhood’ is an adaptation of Brian Herbert and Kevin Anderson’s novel, set in the universe created by Frank Herbert’s Dune. ‘Lovecraft Country’ is a horror series based on a novel by Matt Ruff. ‘The Plot Against America’ will be a reimagined history based on Philip Roth’s novel.

The ingredients are all in place to ensure AT&T makes a sustained stab at cracking the streaming market which has been dominated by the OTTs to date. There are a couple of questions which remain however.

Firstly, pricing. Can executives price the service competitively while also sustaining investments in content? Secondly, experience. Will the platform meet the high expectations of consumers, shaped by the high bar Netflix has set? And finally, culture. Will AT&T allow WarnerMedia to operate as a media business or will it impose the traditional mentality of telcos onto the business?

AT&T has bet big on the content world and it can ill-afford to fluff its lines on its debut. Having signed an $85 billion deal to acquire Time Warner and spent what seems like decades battling various government departments to authorise the transaction, the telco will need to see some ROI sooner rather than later.

The question is whether the momentum in the streaming world can be sustained. Platforms like Netflix, Hulu and Amazon Prime were attractive in the early days because there was consolidation of content onto a single library. With more streaming services becoming available, the fragmentation of content might well become a problem before too long. Consumers will have to make choices on what service to subscribe to, limiting the profits of the individual providers.

The days of subscribing to everything might be a thing of the past before too long; wallets can only be pushed so far.

Diversification into profitable segments is certainly a sensible strategy in the days of meagre connectivity profits, but $85 billion is a lot to spend on a hunch.

FBI and London Met land in hot water over facial recognition tech

The FBI and London Metropolitan Police force will be facing some awkward conversations this week over unauthorised and potentially illegal use of facial recognition technologies.

Starting in the US, the Washington Post has been handed records dating back almost five years which suggest the FBI and ICE (Immigration and Customs Enforcement) have been using DMV databases to build a surveillance network without the consent of citizens. The emails were obtained by Georgetown Law researchers through public records requests.

Although law enforcement agencies have normalised biometrics as part of investigations nowadays, think fingerprint or DNA evidence left at crime scenes, the traces are only useful when catching repeat offenders. Biometric databases are built from data on those who have been previously charged, but in this case, the FBI and ICE have been accessing data on 641 million individuals, the vast majority of whom are innocent and would not have been consulted for the initiative.

In the Land of the Free, such hypocrisy is becoming almost second nature to national security and intelligence forces, who may well find themselves in some bother from a privacy perspective.

As it stands, there are no legislative or regulatory guidelines which authorise the development of such a complex surveillance system, nor has there been any public consultation with the citizens of the US. This act-first, tell-later mentality is something which is increasingly common in countries the US has designated as national enemies, though there is little evidence authorities in the US have any respect for the rights of their own citizens.

Heading across the pond to the UK, a report from the Human Rights, Big Data & Technology Project has identified ‘significant flaws’ with the way live facial recognition has been trialled in London by the Metropolitan Police force. The group, based out of the University of Essex Human Rights Centre, suggests it could be found to be illegal should it be challenged in court.

“The legal basis for the trials was unclear and is unlikely to satisfy the ‘in accordance with the law’ test established by human rights law,” said Dr Daragh Murray, who authored the report alongside Professor Peter Fussey.

“It does not appear that an effective effort was made to identify human rights harms or to establish the necessity of LFR [live facial recognition]. Ultimately, the impression is that human rights compliance was not built into the Metropolitan Police’s systems from the outset and was not an integral part of the process.”

The main gripe from the duo here seems to be how the Met approached the trials. LFR was approached in a manner similar to traditional CCTV, failing to take into account the intrusive nature of facial recognition and the use of biometric processing. The Met did not consider the ‘necessary in a democratic society’ test established by human rights law, and therefore effectively ignored the impact on privacy rights.

There were also numerous other issues, including a lack of public consultation, the accuracy of the technology (only 8 of 42 matches were correct), unclearly defined criteria for using the technology, and the accuracy and relevance of the ‘watchlist’ of suspects. However, the main concern from the University’s research team was that only the technical aspects of the trial were considered, not the impact on privacy.

There is a common theme in both of these instances: the authorities supposedly in place to protect our freedoms pay little attention to the privacy rights which are granted to us. There seems to be an ‘ends justify the means’ attitude with little consideration for the human right to privacy. Such attitudes are exactly what the US and UK aim to eradicate when ‘freeing’ citizens of oppressive regimes abroad.

What is perhaps most concerning about these stories is the speed at which the technologies are being implemented. There has been little public consultation on the appropriateness of these technologies or on whether the general public is prepared to sacrifice privacy rights in the pursuit of national security. Given the intrusive nature of facial recognition, authorities should not be allowed to make this decision on behalf of the general public, especially when there is so much precedent for abuse and privacy is a hot topic following scandals in private industry.

Of course, there are examples of the establishment slowing down progress to give time for these considerations. In San Francisco, the city’s Board of Supervisors has made it illegal for forces to implement facial recognition technologies unless approval has been granted. The police force would have to demonstrate stringent justification, accountability systems and safeguards to privacy rights.

In the UK, Dr Murray and Professor Fussey are calling for a pause on the implementation or trialling of facial recognition technologies until the impact on and trade-off of privacy rights have been fully understood.

Facial recognition technologies are becoming incredibly useful when it comes to access and authentication, though there need to be some serious conversations about the privacy implications of using the tech in the world of surveillance and police enforcement. At the moment, it seems to be nothing but an afterthought for the police forces and intelligence agencies, an incredibly worrying and dangerous attitude to have.

Niantic’s Harry Potter launches but remains in Pokémon Go’s shadow

Harry Potter: Wizards Unite is up and running, but its dash from the starting line is nowhere near as fast as Niantic’s gold standard, Pokémon Go.

Few would have predicted the roaring success of Pokémon Go. Most would have assumed it would do well, but the sustained acceleration of downloads and revenues came as a surprise to almost everyone. Even now, almost three years after the launch of the game, Niantic is still hoovering up the cash; the first quarter of 2019 alone brought in an estimated $205 million. That left a lot for Harry Potter to live up to.

But if you are expecting new records to be broken, you might feel a little bit underwhelmed.

This is not to say Harry Potter: Wizards Unite is not doing well. Most app developers would sell their left leg for the numbers being reported over this weekend. According to estimates from Sensor Tower, Harry Potter: Wizards Unite was downloaded three million times over the opening weekend, bringing in $1.1 million in player spending. Projections for the first month stand at roughly $10 million.

For a single title, most developers would be thrilled by this, but Niantic will always have the Pokémon Go comparisons to deal with.

During the first four days of Pokémon Go, Niantic boasted 24 million downloads and player spending of $28 million. In the first month, player spending reached $206 million and downloads were almost 173 million. Realistically, Harry was always going to struggle to meet these expectations. But that is not to say it won’t be a success.

Your correspondent downloaded the game over the weekend and has been playing around with it over the last few days, and it is pretty good. The experience is better than Pokémon Go: the AR is closer to what many would expect and there is more of a story involved.

There are a few issues, though many of these would have been expected. Heavy data consumption is one: your correspondent used 636 MB in the first two days, and wasn’t using it as much as most would. Battery life also takes a notable hit; around five hours was knocked off what was to be expected on the device in question. Both of these factors might have a notable impact on how much users play the game in the long run.

But why has Harry Potter: Wizards Unite fallen short of the lofty goals? We suspect the nostalgia factor is the biggest contributor.

Firstly, let’s have a look at the audience. Pokémon came into existence in 1996, primarily targeted at children, though even in the early days it was popular with those in their 20s. Those who played the original games are now 23 years older, and the TV series also proved incredibly popular across the world, running from 1997 through to today. There will be millions now in their 20s, 30s and 40s who watched the show and felt the nostalgia bug when the game was launched almost three years ago.

The first Harry Potter title was released in 1997, though it perhaps did not reach the peak of its fandom for a decade. During the 00s, the final books were being released and the films were taking the franchise to new audiences. Harry Potter remains popular today, but the core audiences are younger due to the longer period of time it took the spark to grow into a flame.

In short, the nostalgia bug bit for more people in control of credit cards for Pokémon Go than with Harry Potter: Wizards Unite. Many of those downloading the Harry Potter title today will have to ask permission from parents to make purchases, whereas we suspect a much higher proportion of those with Pokémon Go can make their own financial decisions.

Looking at statistics revealed by SurveyMonkey a few months after Pokémon Go was released, 71% of players were aged between 18 and 50. The comparative numbers have not been revealed for Harry Potter: Wizards Unite just yet, but we suspect the audience will be a lot younger. For the final two films of the Harry Potter series, 56% and 55% of audiences respectively were over the age of 25, but the books are designed for teenagers.

Secondly, we are going to have a look at the global appeal of both titles.

Although both are incredibly popular throughout the world, one originated in the UK and the other in Japan. Due to the fact the Pokémon TV series was animated, dubbing into new languages would have been much simpler, increasing the accessibility of the content. The TV series is available in 169 countries around the world, while the Harry Potter book series has been translated into 80 different languages.

Harry Potter is very popular in the likes of Japan, South Korea and China, though we suspect it does not exceed the popularity of Pokémon at its prime. This translates into the nostalgia effect which drove the initial adoption of Pokémon Go and its continued success today. Let’s not forget, the US and Asia are the two biggest regions for gaming revenues, and these markets perhaps favour the Pokémon brand over Harry Potter.

We are confident the Harry Potter game will be a success, but it isn’t able to tap into the nostalgia of the right audiences. Even with the brand more relevant today than Pokémon, see the theme parks and the sustained popularity of the movies, it will bring in revenues, but perhaps not on the same scale in the short- to mid-term as Pokémon Go.

What we are less confident about is the impact this will have on the normalisation of AR in the entertainment world. Yes, it will have an incremental impact and open the eyes of some, but we doubt this will be a watershed moment for the technology.

That said, we do not believe there will ever be a watershed moment for AR. This is likely to be a technology which gathers momentum slowly, gradually being introduced as additional features in everyday life. Before we know it, AR will be everywhere, and we’ll wonder where it came from.

Niantic’s Harry Potter might take AR into the world of reality

Augmented Reality is a technology which has promised a lot but hasn’t delivered to date. Niantic will be hoping the hype converts into gain with the launch of Harry Potter: Wizards Unite.

Aside from being a title which taps into the nostalgic cravings of millennials, this is one of the first products which promises to genuinely make use of AR. Of course, we will reserve judgments until the product has been launched on Friday (June 21), but there will always be doubts in the build-up.

The doubts tie back to Pokémon Go. This was an incredibly successful app for Niantic and still brings in the profits. But from an AR perspective, it wasn’t that genuine: it simply overlaid static images onto reality through the camera. For some, this might count as AR, but realistically, AR has to interact with the environment. It was a half-way solution, though commercially it was incredibly successful.

There are perhaps two major reasons it was a massive money-maker for the firm. Firstly, it was a game which offered a new twist to users. Little could be compared to Pokémon Go at the time, and it captured the interest of millions. Secondly, nostalgia.

Nostalgia is a powerful draw for many, and with Pokémon Go, Niantic engaged numerous generations. The same could be said of Harry Potter. Spreading through the books and the movie franchise, this is a title which could attract interest from today’s generation through to those in their 40s. If the game is any good, it could make a ridiculous amount of cash.

The promise is this game will actually deliver on the AR expectations. Users will be able to explore the Muggle world through the app, encountering various characters, challenges and missions in different physical locations. Users will be asked to assume the character of a new recruit in the Statute of Secrecy Task Force to investigate The Calamity.

We’re not too sure what to expect, but we are pretty sure the downloads will soar over the first couple of days. The depth of the experience and the effectiveness of the new technology will then determine popularity once the initial excitement has dipped.

One of the areas worth keeping an eye on is whether Niantic can prevent the servers from crashing.

This was one of the issues Pokémon Go faced. It would appear Niantic did not anticipate the popularity of the app, resulting in the service crashing constantly for weeks on end. We dread to think how much revenue was lost because users couldn’t actually log on, and we hope lessons have been learned. Presumably the right amount of resource has been allocated this time, but the same challenge persists: predicting demand is a very difficult task.

The next couple of weeks could prove to be very interesting. Firstly, whether Niantic is finally embracing AR properly, and secondly, whether this opens the door for everyone else. If this app proves to be successful, consumers might have their eyes opened to the promise of AR. This app might be a very important factor in validating the technology for the general public.

The doors could be blown off the hinges, at least if you are watching those doors on the screen of your smartphone.

Why encryption is still impacting mobile video quality of experience

Telecoms.com periodically invites third parties to share their views on the industry’s most pressing issues. In this article Santiago Bouzas, Director, Product Management at Openwave Mobility looks at some of the underlying issues surrounding video encryption.

At a time when data breaches occur on an almost daily basis, undermining consumer confidence in enterprise IT’s ability to secure and protect private data, it might seem like the best solution is to increase efforts to encrypt data.

While encryption is an important part of securing data, it’s easy to underestimate the amount of complexity it adds to any service or device, especially in terms of the processing power required. On a surface level, encryption transforms one block of data reversibly into another. However, below the surface, encryption requires mathematical computation on data that needs to be read, reread, rewritten, confirmed and hashed.

Encrypting a text message is relatively simple. Encrypting video, however, is quite complicated, as computations occur across megabytes of data that are constantly stored and retrieved. Moreover, video traffic is growing, especially as operators begin deploying 5G networks.
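To make that asymmetry concrete, here is a minimal sketch in Python, using the widely available cryptography package, that times AES-GCM encryption of a short text message against a video-sized buffer. The 5 MB segment size is an illustrative assumption, not a figure from the article.

```python
# Minimal sketch: the cost of encrypting a short message vs a video chunk.
# Assumption: a ~5 MB buffer stands in for one chunk of streamed video.
import os
import time
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)

def timed_encrypt(payload: bytes) -> float:
    """Encrypt one payload and return the elapsed time in seconds."""
    nonce = os.urandom(12)  # AES-GCM requires a unique 96-bit nonce per message
    start = time.perf_counter()
    aesgcm.encrypt(nonce, payload, None)
    return time.perf_counter() - start

text_message = b"On my way, see you at 7"     # a typical short message
video_segment = os.urandom(5 * 1024 * 1024)   # assumed ~5 MB video chunk

print(f"text:  {timed_encrypt(text_message) * 1000:.3f} ms")
print(f"video: {timed_encrypt(video_segment) * 1000:.3f} ms")
```

The point is not the absolute numbers, which vary by hardware, but the gap between the two payloads, repeated for every chunk of every stream.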

For instance, by the end of 2019, new streaming services are expected from Apple, WarnerMedia and Disney+. In fact, video is predicted to account for nearly four-fifths of mobile network traffic by 2022, and almost 90% of 5G traffic, according to the Mobile Video Industry Council, underscoring the need for mobile operators to build networks that can effectively handle the massive increase in encrypted traffic they are expected to carry.

The growth of video encryption

The increase of encrypted traffic isn’t a new challenge for operators. 4G networks brought about a seismic shift in connectivity and mobility, spurring the launch of millions of disruptive application-based businesses, including Spotify, Uber and Waze. But the unbridled freedom these new players enjoyed was short lived.

In 2013, whistleblower Edward Snowden revealed how global intelligence agencies were accessing mobile data, often in collaboration with technology companies. Quick to react, Facebook, Google and others began encrypting data with secure protocols, and that encryption has remained in place ever since.

By the end of 2018, about 90 percent of mobile internet traffic was encrypted, and there was no single standard followed for encrypting that data. For instance, Google uses QUIC, an encryption protocol based on the user datagram protocol (UDP). By contrast, Facebook and Instagram use zero round trip time resumption (0-RTT).

The QUIC protocol already accounts for between 30 and 35 percent of the market, and it is considered one of the most popular and efficient delivery mechanisms for video streaming. However, both protocols make it extremely difficult for operators to profile or optimize data with conventional traffic management tools, hindering their ability to deliver consistent quality of experience (QoE).
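To illustrate why encrypted protocols frustrate conventional tools, consider the kind of coarse rule an operator is left with once payloads are opaque: classifying flows by transport and port alone. The rules below are simplified assumptions for illustration; real traffic-management engines use far richer signals.

```python
# Illustrative (deliberately simplified) flow classification once payloads
# are encrypted: only transport metadata is visible, not the content.
from typing import NamedTuple

class Flow(NamedTuple):
    protocol: str   # "udp" or "tcp"
    dst_port: int

def classify(flow: Flow) -> str:
    if flow.protocol == "udp" and flow.dst_port == 443:
        return "QUIC - probably video or web, but payload is opaque"
    if flow.protocol == "tcp" and flow.dst_port == 443:
        return "TLS - contents unknown without further heuristics"
    return "unclassified"

print(classify(Flow("udp", 443)))
print(classify(Flow("tcp", 443)))
```

Whether a given QUIC flow is a 4K video stream or a software update is invisible at this level, which is precisely the profiling problem described above.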

Without question, dedicated streaming services like Netflix and Amazon Prime are contributing to the increase in encrypted video traffic. However, Facebook is quickly becoming the primary channel for sharing video content. Facebook’s strategy is based around sharing video and merging its platforms, including Instagram, WhatsApp and Messenger. And that strategy is clearly paying off.

While Facebook has been sharing video from its vast content delivery network (CDN) for some time, the volume of video data shared across its different properties is 10 percent higher than that shared across all of Google’s entities combined. This is especially true on mobile, where there is a strong demand for social media, for which Facebook and Instagram are the dominant platforms.

Additional advertising investment is further cementing Facebook’s position, so much so that Facebook could soon overtake Google as the key driver of both video consumption and encryption protocols. Interestingly, Facebook is moving away from using the 0-RTT protocol and is also beginning to embrace QUIC.

In time, Facebook is expected to change protocols again, likely to Transport Layer Security (TLS) 1.3, a more robust and secure cryptographic protocol. Those plans have significant implications for mobile operators looking to deliver the best possible QoE.

Additional complications for video

Not only must operators contend with different encryption protocols, they also face challenges from the quality (resolution) of video that traverses the network. For instance, more than half of video traffic is expected to be high definition (HD) by the end of 2019. HD video consumes three times as much data as standard definition (SD) and requires three times the bandwidth.

As we near deployment of 5G networks, operators will likely have to contend with ultra-high definition (UHD) video, which will consume three to four times the data of HD video. Moreover, operators won’t just grapple with the need to monitor and manage video data. They’ll need new and different capabilities to detect and manage demand created by the obfuscation of encrypted video traffic.
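As a back-of-the-envelope sketch of that arithmetic, assume a 3 Mbps SD baseline (a common streaming figure, and our assumption rather than the article’s) and apply the HD and UHD multipliers quoted above:

```python
# Rough bandwidth arithmetic. SD_MBPS is an assumed baseline; the HD and
# UHD multipliers (3x, then 3-4x again) are the figures quoted in the text.
SD_MBPS = 3.0
HD_MBPS = SD_MBPS * 3       # HD ~ 3x SD
UHD_MBPS = HD_MBPS * 3.5    # UHD ~ 3-4x HD; midpoint used here

def gb_per_hour(mbps: float) -> float:
    # megabits/second -> gigabytes/hour: x 3600 seconds, / 8 bits, / 1000
    return mbps * 3600 / 8 / 1000

for label, rate in [("SD", SD_MBPS), ("HD", HD_MBPS), ("UHD", UHD_MBPS)]:
    print(f"{label}: {rate:5.1f} Mbps ~ {gb_per_hour(rate):.1f} GB/hour")
```

Every step up the resolution ladder multiplies not just the bandwidth but the volume of data that must be encrypted, delivered and, from the operator’s side, managed blind.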

The deep packet inspection (DPI) method that operators employ to analyze and optimize network usage will need to be sufficiently agile to handle the change in encryption protocols. Heuristic evaluation models and reporting structures will need to adapt, as well. Without these improved capabilities, operators will find it increasingly challenging to deliver the QoE expected for video content.

Failure to adequately address the increasing complexity of video traffic will result in increased buffering times, which is the death knell for consumers of mobile video. In an increasingly competitive ecosystem, customers that aren’t happy with network quality for video will have a myriad of competitors to churn to.


Santiago Bouzas is the Director of Product Management at Openwave Mobility and an expert on mobile internet connectivity. Santiago has over 12 years of experience in telecoms, holding product management, sales/pre-sales and professional services roles in both global companies and start-ups.

YouTube CEO’s struggle session was futile

In her first public statements since last week’s censorship controversy, YouTube CEO Susan Wojcicki attempted to strike a balance between freedom of speech and censorship.

As a quick reminder: one YouTube user claimed to be the subject of homophobic harassment by another user and wanted them censored accordingly. YouTube initially said none of its policies had been violated but on further reflection decided to demonetize (stop serving ads, which are the primary source of revenue for YouTubers) the channel of the accused.

At a live event hosted by Recode – a tech site owned by Vox, which also employs the complainant, Carlos Maza – Wojcicki insisted on making a public apology to ‘the LGBTQ community’ before answering any questions. This was presumably in response to criticism of the decisions from within that community, of which Maza himself remains one of the most persistent sources.

Wojcicki moved on to recap what had taken place, which consisted of two distinct but parallel events. The first was the announcement of measures YouTube is taking against ‘hate speech’, which had apparently been in the pipeline for a while. The second was Maza’s allegations and demands, which YouTube addressed separately.

For two such separate issues, however, there seemed to be a fair bit of overlap. Firstly, it was revealed that YouTube had pre-briefed the media about the hate speech announcement, raising the possibility that Maza was aware of it when he made his allegations on Twitter. Secondly, the decision to demonetize the offending channel coincided precisely with the outcry at the original decision that none of its policies had been transgressed, despite that decision having apparently taken five days to make.

In the context of hate speech Wojcicki also mentioned that laws addressing it vary widely from country to country. This highlighted one of the central dilemmas faced by internet platforms, that they’re increasingly expected to police speech beyond the boundaries of legality. Their attempts to do so lie at the core of the impossible position that they’re now in.

The interviewer expressed sympathy about the impossibilities of censoring an open platform at such scale and Wojcicki could only say that YouTube is constantly striving to improve and pointed to recent pieces of censorship as proof that it’s doing so. She pushed back at the suggestion that YouTube moderate every upload before publication, saying a lot of voices would be lost. She pointed instead to the tiered model that allows for things like demonetization of contentious content.

This model was also used in defence of another couple of specific cases flagged up by the interviewer. The first concerned a recent cover story on the New York Times, the headline of which spoke of one YouTube user who found himself brainwashed by the ‘far-right’ as a result of recommendations from YouTube, but the substance of which indicated the opposite. Wojcicki said another tool they use is reducing the recommendations towards contentious content in order to make it harder to find.

The other case was of a US 14-year-old YouTuber called Soph, who recently got one of her videos taken down due to some of its content, but whose channel remains. The utter futility of trying to assess and potentially censor every piece of content uploaded to the platform was raised once more and, not for the first time, Wojcicki attempted to steer the conversation to the 99% of content on YouTube that is entirely benign.

Carlos Maza responded to the interview on Twitter, prompted by a question from the audience querying the sincerity of Wojcicki’s apology to the LGBTQ community, to which she responded that she really is sincere. Maza’s tweet indicates he won’t be happy until anything perceived as harassment of ‘queer’ people is censored from YouTube.

As well as the prioritised apology, this did seem like a good-faith attempt by Wojcicki to openly address the many complexities and contradictions faced by any censor. It seems very unlikely that her critics will have been swayed by her talk of nuance and context, however, and there is little evidence that this interview solved anything. Still, at least she gave it a go and if nothing else it will have been good practice for the many other such struggle sessions Wojcicki will doubtless have to endure in future.


EFF to testify in support of California facial recognition technology ban

Last month, the City of San Francisco banned law enforcement agencies from using facial recognition software in cameras, and now the issue has been escalated to the State Senate.

While this is still only a minor thorn in the side of those who have complete disregard for privacy principles, it has the potential to swell into a major debate. There have been numerous trials around the world in an attempt to introduce the invasive technology, but no-one has actually stopped to have a public debate as to whether the disembowelling of privacy rights should be so easily facilitated.

After the City of San Francisco passed its rules, with officials voting 8-1 in support of the ban, the issue was escalated to State level. SB 1215 is now being considered by State legislators, with the Senate Committee on Public Safety conducting a review of the pros and cons.

Numerous organizations have come out to support progress of the bill, and of course the official organizations representing law enforcement agencies at State level are attempting to block it. As part of the review process, EFF Grassroots Advocacy Organizer Nathan Sheard will testify in front of the California Senate Public Safety Committee later today [June 11].

The issue being debated here is quite simple: should the police be allowed to use such invasive surveillance technologies, potentially violating citizens’ right to privacy without their knowledge or consent? Many laws are being passed to give citizens more control of their personal data in the digital economy, but with such surveillance technologies, those citizens may have no idea their images are being collected, analysed and stored by the State.

In what should be viewed as an absolutely incredible instance of negligence and irresponsible behaviour, numerous police forces around the world have moved forward with implementing these technologies without in-depth public consultation. Conspiracy theorists will have penned various nefarious outcomes for such data, and underhanded government and police actions like this do support the first step of their theories.

The City of San Francisco, the State of California and the EFF, as well as the dozens of other agencies challenging deployment of the technology, are quite right to slow progress. The introduction of facial recognition software should be challenged, debated and scrutinised. Free rein should not be given to police forces and intelligence agencies; they have already shown themselves to be untrustworthy. They have lost the right to play around with invasive technologies without public debate.

“This bill declares that facial recognition and other biometric surveillance technology pose unique and significant threats to the civil rights and civil liberties of residents and visitors,” the proposed bill states.

“[the bill] Declares that the use of facial recognition and other biometric surveillance is the functional equivalent of requiring every person to show a personal photo identification card at all times in violation of recognized constitutional rights. [the bill] States that this technology also allows people to be tracked without consent and would also generate massive databases about law-abiding Californians and may chill the exercise of free speech in public places.”

Under existing laws, there seems to be little resistance to implementing these technologies, aside from the loose definition of ‘best practice’. This would not be considered a particularly difficult hurdle to overcome, such is the nuanced nature of ‘best practice’. Considering the negative implications of the technology, more red-tape should be introduced, forcing the police and intelligence agencies to provide suitable levels of justification and accountability.

Most importantly, there are no requirements for police forces or intelligence agencies to seek approval from the relevant legislative body to deploy the technology. Permission is needed to acquire cellular communications interception technology, in order to protect the civil rights and civil liberties of residents and visitors. The same rights are being challenged with facial recognition software in cameras, but no permissions are required.

This is of course not the first sign of resistance to facial recognition technologies. In January, 85 pro-privacy organizations, charities and influencers wrote to Amazon, Google and Microsoft requesting the firms pledge not to sell the technology to police forces or intelligence agencies. It does appear the use of the data by enforcement agencies in countries like China has put the fear into these organizations.

The accuracy of the technology has also been called into question. Although the tech giants are claiming AI is improving the accuracy every day, last year the American Civil Liberties Union produced research which suggested a 5% error rate. The research claimed 28 members of Congress had been falsely identified as people who had been arrested.

Interestingly enough, critics also claim the technology violates the Fourth Amendment of the US Constitution. It has already been established that police demanding identification without suspicion violates this amendment, and the American Civil Liberties Union argues such technologies are effectively doing the same thing.

What is worth noting is that a total ban is highly unlikely to be passed. Even the City of San Francisco has not imposed a blanket prohibition; the city has introduced measures to ensure appropriate justification and that data is stored properly. The key to the San Francisco rules is that they make it as difficult as possible for the technologies to be used haphazardly.

What we are most likely to see is bureaucracy. Red tape will be scattered all over the technology to ensure it is used in an appropriate and justified manner.

Accessibility is one of the issues which privacy campaigners are facing right now. Companies like New York-based Vuzix and NNTC in the UAE are making products which are not obviously used for surveillance and are becoming increasingly affordable. Software from companies like NEC is also becoming more available, giving the police more options. A landscape with affordable technology and no regulatory resistance paints a gloomy picture.

The introduction of more red tape might leave under-resourced and under-pressure police forces frustrated, but such is the potential invasion of privacy rights, and the consequence of abuse, that it is absolutely necessary. The quicker this technology is brought into the public domain and understood by the man on the street, the better.

Google fleshes out its Stadia cloud gaming platform

Having teased a new cloud gaming platform earlier this year, Google has finally got around to launching it properly.

Stadia offers games that are 100% hosted in the cloud, which means you don’t need a console, don’t need to install any software and can game on any screen with an adequate internet connection. Right now Google is only launching the premium tier, which offers 4K gaming but requires a £9 per month subscription and a 35 Mbps connection.

A freemium tier will follow in due course that won’t charge a subscription fee but will offer reduced performance. It looks like both tiers will charge full whack for individual games, although the premium one will chuck in a few freebies to sweeten the pot. Among the games announced by Google is a third version of the popular RPG Baldur’s Gate.

To seed the market Google is urging early adopters to buy a Founder’s Edition bundle that includes a controller, a Chromecast Ultra dongle and three months’ subscription to the ‘Pro’ premium tier for £119. Here’s what you get for Pro versus the basic package.

[Image: Stadia pricing comparison, Pro vs base]

The main telecoms angle here is bandwidth. Google reckons you still need a 20 Mbps connection even for 1080p gaming, which a lot of people, even in the UK, still struggle to reach. But the real strain on networks will come if people start using Stadia via mobile devices. This is unlikely to really take off until games are developed specifically for mobile, probably with a location and/or AR element to them, but when they are we might finally see a killer consumer app for 5G.
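For a sense of what those bitrates mean in practice, here is a quick sketch using the figures Google quotes; the assumption that a stream runs at the full rate for a whole session is ours, for illustration.

```python
# Data consumed per hour of play at Stadia's quoted bitrates (sketch).
# Assumption: the stream runs at the full quoted rate throughout a session.
TIERS_MBPS = {"4K (Pro)": 35, "1080p (base)": 20}

def gb_per_hour(mbps: int) -> float:
    # megabits/second -> gigabytes/hour: x 3600 seconds, / 8 bits, / 1000
    return mbps * 3600 / 8 / 1000

for tier, rate in TIERS_MBPS.items():
    print(f"{tier}: ~{gb_per_hour(rate):.1f} GB per hour")
# 4K works out at roughly 15.8 GB per hour and 1080p at about 9 GB,
# figures that mobile data caps, in particular, are not built for.
```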