AT&T gets streaming with HBO Max

Gone are the days when consumers could get all the content they want in one place, as AT&T's WarnerMedia joins the streaming land grab.

With Netflix, Amazon Prime, Hulu, Disney, HBO and numerous other streaming services soon to be on the market, the fragmentation of content is looking like it could be a serious problem for the consumer. Whether splitting the spoils has an overarching negative impact on the segment's profits remains to be seen, but customers' wallets can only be pushed so far; how many streaming services can each customer be expected to have?

That said, AT&T is in a strong position with this proposition. In HBO, it owns a lot of promising content already, playing into consumer nostalgia, and it does seem to be heading in the right direction in terms of original programming.

“HBO Max will bring together the diverse riches of WarnerMedia to create programming and user experiences not seen before in a streaming platform,” said Robert Greenblatt, Chairman of WarnerMedia Entertainment and Direct-To-Consumer.

“HBO’s world-class programming leads the way, the quality of which will be the guiding principle for our new array of Max Originals, our exciting acquisitions, and the very best of the Warner Bros. libraries, starting with the phenomenon that is ‘Friends’.”

With the service set to debut in Spring 2020, AT&T is promising 10,000 hours of programming from the outset. Complete runs of 'The Fresh Prince of Bel-Air', 'Friends' and 'Pretty Little Liars' will feature in the content library, as well as new dramas such as 'Batwoman' and 'Katy Keene'.

Looking at future Max Original series, the list is quite extensive. 'Dune: The Sisterhood' is an adaptation of Brian Herbert and Kevin Anderson's novel, set in the universe created by Frank Herbert's Dune. 'Lovecraft Country' is a horror series based on the novel by Matt Ruff. 'The Plot Against America' will be a reimagined history based on Philip Roth's novel.

The ingredients are all in place for AT&T to make a sustained stab at cracking a streaming market which has been dominated by the OTTs to date. A few questions remain, however.

Firstly, pricing: can executives price the service competitively while also sustaining investments in content? Secondly, experience: will the platform meet the high expectations of consumers, given the high bar set by Netflix? And finally, culture: will AT&T allow WarnerMedia to operate as a media business, or will it impose the traditional telco mentality onto the business?

AT&T has bet big on the content world and it can ill-afford to fluff its lines on its debut. Having signed an $85 billion deal to acquire Time Warner and spent what seems like decades battling various government departments to authorise the transaction, the telco will need to see some ROI sooner rather than later.

The question is whether the momentum in the streaming world can be sustained. Platforms like Netflix, Hulu and Amazon Prime were attractive in the early days because they consolidated content into a single library. With more streaming services becoming available, the fragmentation of content might well become a problem before too long. Consumers will have to make choices about which services to subscribe to, limiting the profits of individual providers.

The days of subscribing to everything might be a thing of the past before too long; wallets can only be pushed so far.

Diversification into profitable segments is certainly a sensible strategy in the days of meagre connectivity profits, but $85 billion is a lot to spend on a hunch.

FBI and London Met land in hot water over facial recognition tech

The FBI and London Metropolitan Police force will be facing some awkward conversations this week over unauthorised and potentially illegal use of facial recognition technologies.

Starting in the US, the Washington Post has been handed records dating back almost five years which suggest the FBI and ICE (Immigration and Customs Enforcement) have been using state DMV databases to build a surveillance network without the consent of citizens. The emails were obtained by Georgetown Law researchers through public records requests.

Although law enforcement agencies have normalised biometrics as part of investigations nowadays (think fingerprint or DNA evidence left at crime scenes), such traces are only useful for catching repeat offenders. Biometric databases are built from data on those who have previously been charged, but in this case the FBI and ICE have been accessing databases containing more than 641 million photos, the vast majority belonging to innocent people who were never consulted about the initiative.

In the Land of the Free, such hypocrisy is becoming almost second nature to national security and intelligence forces, who may well find themselves in some bother from a privacy perspective.

As it stands, there are no legislative or regulatory guidelines which authorise the development of such a complex surveillance system, nor has there been any public consultation with the citizens of the US. This act-first, tell-later mentality is increasingly common in countries the US has designated as national enemies, though there is little evidence authorities in the US have any respect for the privacy rights of their own citizens.

Heading across the pond to the UK, a report from the Human Rights, Big Data & Technology Project has identified ‘significant flaws’ with the way live facial recognition has been trialled in London by the Metropolitan Police force. The group, based out of the University of Essex Human Rights Centre, suggests it could be found to be illegal should it be challenged in court.

“The legal basis for the trials was unclear and is unlikely to satisfy the ‘in accordance with the law’ test established by human rights law,” said Dr Daragh Murray, who authored the report alongside Professor Peter Fussey.

“It does not appear that an effective effort was made to identify human rights harms or to establish the necessity of LFR [live facial recognition]. Ultimately, the impression is that human rights compliance was not built into the Metropolitan Police’s systems from the outset and was not an integral part of the process.”

The main gripe from the duo seems to be how the Met approached the trials. LFR was treated in a manner similar to traditional CCTV, failing to take into account the intrusive nature of facial recognition and the use of biometric processing. The Met did not consider the 'necessary in a democratic society' test established by human rights law, and therefore effectively ignored the impact on privacy rights.

There were also numerous other issues, including a lack of public consultation, the poor accuracy of the technology (only eight of 42 matches made during the trials were verifiably correct), unclear criteria for deploying the technology, and doubts over the accuracy and relevance of the 'watchlist' of suspects. However, the main concern from the University's research team was that only the technical aspects of the trial were considered, not the impact on privacy.

There is a common theme in both of these instances: the authorities supposedly in place to protect our freedoms pay little attention to the privacy rights granted to us. There seems to be an 'ends justify the means' attitude, with little consideration given to the human right to privacy. Such attitudes are exactly what the US and UK claim to be eradicating when 'freeing' citizens of oppressive regimes abroad.

What is perhaps most concerning about these stories is the speed at which the technologies are being implemented. There has been little public consultation on the appropriateness of these technologies, or on whether the general public is prepared to sacrifice privacy rights in the pursuit of national security. Given the intrusive nature of facial recognition, authorities should not be allowed to make this decision on behalf of the general public, especially when there is so much precedent for abuse and privacy is such a hot topic following scandals in private industry.

Of course, there are examples of the establishment slowing down progress to give time for these considerations. In San Francisco, the city’s Board of Supervisors has made it illegal for forces to implement facial recognition technologies unless approval has been granted. The police force would have to demonstrate stringent justification, accountability systems and safeguards to privacy rights.

In the UK, Dr Murray and Professor Fussey are calling for a pause on the implementation or trialling of facial recognition technologies until the impact on and trade-off of privacy rights have been fully understood.

Facial recognition technologies are becoming incredibly useful when it comes to access and authentication, but there need to be some serious conversations about the privacy implications of using the tech in the world of surveillance and policing. At the moment it seems to be nothing but an afterthought for the police forces and intelligence agencies, an incredibly worrying and dangerous attitude to have.

Why encryption is still impacting mobile video quality of experience

Telecoms.com periodically invites third parties to share their views on the industry’s most pressing issues. In this article Santiago Bouzas, Director, Product Management at Openwave Mobility looks at some of the underlying issues surrounding video encryption.

At a time when data breaches occur on an almost daily basis, undermining consumer confidence in enterprise IT’s ability to secure and protect private data, it might seem like the best solution is to increase efforts to encrypt data.

While encryption is an important part of securing data, it’s easy to underestimate the amount of complexity it adds to any service or device, especially in terms of the processing power required. On a surface level, encryption transforms one block of data reversibly into another. However, below the surface, encryption requires mathematical computation on data that needs to be read, reread, rewritten, confirmed and hashed.
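To make that processing cost concrete, here is a minimal sketch, assuming Python and the third-party "cryptography" package, that encrypts a short text message and a video-sized buffer with AES-GCM and times both. The payload sizes are illustrative assumptions rather than measurements from any real network.

```python
# Minimal sketch: the relative cost of encrypting a text message versus
# a video-sized chunk. Requires: pip install cryptography
import os
import time

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)

def encrypt_and_time(label: str, payload: bytes) -> None:
    nonce = os.urandom(12)  # 96-bit nonce, unique per message
    start = time.perf_counter()
    ciphertext = aesgcm.encrypt(nonce, payload, None)
    elapsed_ms = (time.perf_counter() - start) * 1000
    overhead = len(ciphertext) - len(payload)  # 16-byte GCM auth tag
    print(f"{label}: {len(payload):>10,} bytes in {elapsed_ms:8.2f} ms (+{overhead} bytes)")

encrypt_and_time("text message", b"Running five minutes late, sorry!")
encrypt_and_time("video chunk ", os.urandom(8 * 1024 * 1024))  # ~8 MB, a few seconds of HD video
```

The asymmetry only grows when the same video data has to be read, rewritten and hashed at multiple points in the delivery chain.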

Encrypting a text message is relatively simple. Encrypting video, however, is far more demanding, as the computations run across megabytes of data that are constantly stored and retrieved. Moreover, video traffic is growing, especially as operators begin deploying 5G networks.

For instance, by the end of 2019, new streaming services are expected from Apple, WarnerMedia and Disney. In fact, video is predicted to account for nearly four-fifths of mobile network traffic by 2022, and almost 90% of 5G traffic according to the Mobile Video Industry Council, underscoring the need for mobile operators to build networks that can effectively handle the massive increase in encrypted traffic they are expected to carry.

The growth of video encryption

The increase of encrypted traffic isn’t a new challenge for operators. 4G networks brought about a seismic shift in connectivity and mobility, spurring the launch of millions of disruptive application-based businesses, including Spotify, Uber and Waze. But the unbridled freedom these new players enjoyed was short lived.

In 2013, whistleblower Edward Snowden revealed how global intelligence agencies were accessing mobile data, often in collaboration with technology companies. Quick to react, Facebook, Google and others began encrypting data with secure protocols, and that encryption has remained in place ever since.

By the end of 2018, about 90 percent of mobile internet traffic was encrypted, yet no single standard was followed for encrypting that data. For instance, Google uses QUIC, an encryption protocol built on the user datagram protocol (UDP). By contrast, Facebook and Instagram use zero round trip time resumption (0-RTT).

The QUIC protocol already accounts for between 30 and 35 percent of the market, and it is considered one of the most popular and efficient delivery mechanisms for video streaming. However, both protocols make it extremely difficult for operators to profile or optimize data with conventional traffic management tools, hindering their ability to deliver consistent quality of experience (QoE).
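The practical consequence for traffic management tools can be shown with a deliberately simplified sketch. Real DPI engines inspect handshake bytes and flow behaviour; the port-based heuristic below, with its hypothetical Flow type and classify function, is only meant to illustrate how little of an encrypted QUIC or TLS flow is visible from the outside.

```python
# Deliberately simplified flow classifier: from the outside, an operator
# mostly sees the transport protocol and port, not the content. Real DPI
# engines go further (handshake fingerprints, flow heuristics), but the
# payload itself stays opaque.
from dataclasses import dataclass

@dataclass
class Flow:
    transport: str  # "tcp" or "udp"
    dst_port: int

def classify(flow: Flow) -> str:
    if flow.transport == "udp" and flow.dst_port == 443:
        return "likely QUIC: encrypted over UDP, headers mostly opaque"
    if flow.transport == "tcp" and flow.dst_port == 443:
        return "TLS over TCP: payload encrypted, metadata only"
    return "unclassified"

print(classify(Flow("udp", 443)))  # likely QUIC
print(classify(Flow("tcp", 443)))  # TLS over TCP
```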

Without question, dedicated streaming services like Netflix and Amazon Prime are contributing to the increase in encrypted video traffic. However, Facebook is quickly becoming the primary channel for sharing video content. Facebook’s strategy is based around sharing video and merging its platforms, including Instagram, WhatsApp and Messenger. And that strategy is clearly paying off.

While Facebook has been sharing video from its vast content delivery network (CDN) for some time, the volume of video data shared across its different properties is now 10 percent higher than that shared across all of Google's entities combined. This is especially true on mobile, where demand for social media is strong and Facebook and Instagram are the dominant platforms.

Additional advertising investment is further cementing Facebook’s position, so much so that Facebook could soon overtake Google as the key driver of both video consumption and encryption protocols. Interestingly, Facebook is moving away from using the 0-RTT protocol and is also beginning to embrace QUIC.

In time, Facebook is expected to change protocols again, likely to Transport Layer Security (TLS) 1.3, a more robust and secure cryptographic protocol. Those plans have significant implications for mobile operators looking to deliver the best possible QoE.

Additional complications for video

Not only must operators contend with different encryption protocols, they also face challenges from the quality (resolution) of the video that traverses the network. For instance, more than half of video traffic is expected to be high definition (HD) by the end of 2019. HD video consumes three times as much data as standard definition (SD) and requires three times the bandwidth.
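Taking those ratios at face value, a back-of-the-envelope calculation shows what each step up in resolution means for the network. The bitrates below are typical published streaming figures, used purely as assumptions for illustration.

```python
# Back-of-the-envelope: data consumed per hour of streaming at assumed
# typical bitrates (illustrative figures, not measured values).
BITRATES_MBPS = {"SD": 1.7, "HD": 5.0, "UHD": 16.0}

for quality, mbps in BITRATES_MBPS.items():
    gb_per_hour = mbps * 3600 / 8 / 1000  # Mbit/s -> gigabytes per hour
    print(f"{quality}: ~{gb_per_hour:.1f} GB per hour")

# Prints roughly 0.8, 2.2 and 7.2 GB per hour: each step up in
# resolution is about a threefold jump, in line with the ratios above.
```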

As 5G network deployments approach, operators will likely also have to contend with ultra-high definition (UHD) video, which will consume three to four times as much data as HD video. Moreover, operators won't just grapple with the need to monitor and manage video data; they'll need new and different capabilities to detect and manage the demand created by the obfuscation of encrypted video traffic.

The deep packet inspection (DPI) method that operators employ to analyze and optimize network usage will need to be sufficiently agile to handle the change in encryption protocols. Heuristic evaluation models and reporting structures will need to adapt, as well. Without these improved capabilities, operators will find it increasingly challenging to deliver the QoE expected for video content.

Failure to adequately address the increasing complexity of video traffic will result in increased buffering times, which is the death knell for consumers of mobile video. In an increasingly competitive ecosystem, customers that aren’t happy with network quality for video will have a myriad of competitors to churn to.


Santiago Bouzas is the Director of Product Management at Openwave Mobility and an expert on mobile internet connectivity. He has more than 12 years of experience in telecoms, holding product management, sales/pre-sales and professional services roles at both global companies and start-ups.

YouTube CEO’s struggle session was futile

In her first public statements since last week's censorship controversy, YouTube CEO Susan Wojcicki attempted to strike a balance between freedom of speech and censorship.

As a quick reminder: one YouTube user claimed to be the subject of homophobic harassment by another user and wanted them censored accordingly. YouTube initially said none of its policies had been violated but on further reflection decided to demonetize (stop serving ads, which are the primary source of revenue for YouTubers) the channel of the accused.

At a live event hosted by Recode – a tech site owned by Vox, which also employs the complainant in question, Carlos Maza – Wojcicki insisted on making a public apology to 'the LGBTQ community' before answering any questions. This was presumably in response to criticism of the decisions from within that community, of which Maza himself remains one of the most persistent sources.

Wojcicki moved on to recap what had taken place, which consisted of two distinct but parallel events. The first was the announcement of measures YouTube is taking against ‘hate speech’, which had apparently been in the pipeline for a while. The second was Maza’s allegations and demands, which YouTube addressed separately.

For two such separate issues, however, there seemed to be a fair bit of overlap. Firstly, it was revealed that YouTube had pre-briefed the media about the hate speech announcement, raising the possibility that Maza was aware of it when he made his allegations on Twitter. Secondly, the decision to demonetize the offending channel coincided precisely with the outcry at the original ruling that none of YouTube's policies had been transgressed, despite that ruling having apparently taken five days to make.

In the context of hate speech Wojcicki also mentioned that laws addressing it vary widely from country to country. This highlighted one of the central dilemmas faced by internet platforms, that they’re increasingly expected to police speech beyond the boundaries of legality. Their attempts to do so lie at the core of the impossible position that they’re now in.

The interviewer expressed sympathy over the impossibility of censoring an open platform at such scale, and Wojcicki could only say that YouTube is constantly striving to improve, pointing to recent pieces of censorship as proof that it's doing so. She pushed back at the suggestion that YouTube moderate every upload before publication, saying a lot of voices would be lost, and pointed instead to the tiered model that allows for things like demonetization of contentious content.

This model was also used in defence of another couple of specific cases flagged up by the interviewer. The first concerned a recent cover story on the New York Times, the headline of which spoke of one YouTube user who found himself brainwashed by the ‘far-right’ as a result of recommendations from YouTube, but the substance of which indicated the opposite. Wojcicki said another tool they use is reducing the recommendations towards contentious content in order to make it harder to find.

The other case was that of a 14-year-old US YouTuber called Soph, who recently had one of her videos taken down due to some of its content, but whose channel remains. The utter futility of trying to assess and potentially censor every piece of content uploaded to the platform was raised once more and, not for the first time, Wojcicki attempted to steer the conversation to the 99% of content on YouTube that is entirely benign.

Carlos Maza responded to the interview with the following tweet, inspired by a question from the audience querying the sincerity of Wojcicki’s apology to the LGBTQ community, to which she responded that she is really sincere. Maza’s tweet indicates he won’t be happy until anything perceived as harassment of ‘queer’ people is censored from YouTube.

You can see the full interview below. As well as the prioritised apology, this did seem like a good-faith attempt by Wojcicki to openly address the many complexities and contradictions faced by any censor. It seems very unlikely that her critics will have been swayed by her talk of nuance and context, however, and there is little evidence that this interview solved anything. Still, at least she gave it a go and if nothing else it will have been good practice for the many other such struggle sessions Wojcicki will doubtless have to endure in future.


EFF to testify in support of California facial recognition technology ban

Last month, the City of San Francisco banned law enforcement agencies from using facial recognition software in cameras, and now the issue has been escalated to the State Senate.

While this is still only a minor thorn in the side of those who have complete disregard for privacy principles, it has the potential to swell into a major debate. There have been numerous trials around the world in an attempt to introduce the invasive technology, but no-one has actually stopped to have a public debate as to whether the disembowelling of privacy rights should be so easily facilitated.

After the City of San Francisco passed its rules, with officials voting 8-1 in support of the ban, the issue was escalated to State level. AB 1215 is now being considered by State legislators, with the Senate Committee on Public Safety conducting a review of the pros and cons.

Numerous organizations have come out in support of the bill's progress, while the official organizations representing law enforcement agencies at State level are of course attempting to block it. As part of the review process, EFF Grassroots Advocacy Organizer Nathan Sheard will testify in front of the California Senate Public Safety Committee later today [June 11].

The issue being debated here is quite simple: should the police be allowed to use such invasive surveillance technologies, potentially violating citizens' right to privacy, without their knowledge or consent? Many laws are being passed to give citizens more control of their personal data in the digital economy, but with such surveillance technologies those same citizens may have no idea their images are being collected, analysed and stored by the State.

In what should be viewed as an absolutely incredible instance of negligence and irresponsible behaviour, numerous police forces around the world have moved forward with implementing these technologies without in-depth public consultation. Conspiracy theorists will have penned various nefarious outcomes for such data, and underhanded government and police actions like this do support the first step of their theories.

The City of San Francisco, the State of California and the EFF, as well as the dozens of other agencies challenging deployment of the technology, are quite right to slow progress. The introduction of facial recognition software should be challenged, debated and scrutinised. Free rein should not be given to police forces and intelligence agencies; they have already shown themselves to be untrustworthy, and they have lost the right to play around with invasive technologies without public debate.

“This bill declares that facial recognition and other biometric surveillance technology pose unique and significant threats to the civil rights and civil liberties of residents and visitors,” the proposed bill states.

“[the bill] Declares that the use of facial recognition and other biometric surveillance is the functional equivalent of requiring every person to show a personal photo identification card at all times in violation of recognized constitutional rights. [the bill] States that this technology also allows people to be tracked without consent and would also generate massive databases about law-abiding Californians and may chill the exercise of free speech in public places.”

Under existing laws, there seems to be little resistance to implementing these technologies, aside from the loose definition of ‘best practice’. This would not be considered a particularly difficult hurdle to overcome, such is the nuanced nature of ‘best practice’. Considering the negative implications of the technology, more red-tape should be introduced, forcing the police and intelligence agencies to provide suitable levels of justification and accountability.

Most importantly, there are no requirements for police forces or intelligence agencies to seek approval from the relevant legislative body to deploy the technology. Permission is needed to acquire cellular communications interception technology, in order to protect the civil rights and civil liberties of residents and visitors. The same rights are being challenged with facial recognition software in cameras, but no permissions are required.

This is of course not the first sign of resistance to facial recognition technologies. In January, 85 pro-privacy organizations, charities and influencers wrote to Amazon, Google and Microsoft requesting that the firms pledge not to sell the technology to police forces or intelligence agencies. The use of such data by enforcement agencies in countries like China appears to have put the fear into these organizations.

The accuracy of the technology has also been called into question. Although the tech giants are claiming AI is improving the accuracy every day, last year the American Civil Liberties Union produced research which suggested a 5% error rate. The research claimed 28 members of Congress had been falsely identified as people who had been arrested.

Interestingly enough, critics also claim the technology violates the Fourth Amendment of the US Constitution. It has already been established that police demanding identification without suspicion violates this amendment, and the American Civil Liberties Union argues such technologies are effectively doing the same thing.

What is worth noting is that a total ban is highly unlikely to be passed. Even in the City of San Francisco, the approach has been to introduce measures to ensure appropriate justification and proper storage of data; the key to the San Francisco rules is that they make it as difficult as possible for the technologies to be used haphazardly.

What we are most likely to see is bureaucracy. Red tape will be scattered all over the technology to ensure it is used in an appropriate and justified manner.

Accessibility is one of the issues privacy campaigners are facing right now. Companies like New York-based Vuzix and the UAE's NNTC are making products which are not obviously recognisable as surveillance equipment and are becoming increasingly affordable, while software from companies like NEC is also becoming more available, giving the police more options. A landscape with affordable technology and no regulatory resistance paints a gloomy picture.

The introduction of more red tape might leave under-resourced and under-pressure police forces frustrated, but such is the potential invasion of privacy rights, and the consequence of abuse, that it is absolutely necessary. The quicker this technology is brought into the public domain and understood by the man on the street, the better.

HTC debuts eye-tracking with enterprise VR launch

HTC has announced it is bringing its enterprise VR product to North America, after teasing executives at CES in January.

The product itself, the Vive Pro Eye, is not cheap at $1,599, but it features the latest in eye-tracking technology, with HTC claiming it is 'setting a new standard' for VR in the enterprise market. The consumer VR segment has been relatively sluggish, despite the incredible promises made by technologists, though there does seem to have been a bigger focus on enterprise in recent months.

The Vive Pro Eye follows HTC's Vive Pro, which is already in the hands of various enterprise customers throughout the world, and introduces new features such as precision eye-tracking software, deeper data analysis, new training environments and more intuitive user experiences.

And while some of the features might be considered excessive at the moment, there is always the potential to influence mainstream adoption.

“We’ve invested in VR technology to connect our fans to our game and deliver a new level of engagement through VR game competitions and in-ballpark attractions,” said Jamie Leece, SVP of Games and VR for Major League Baseball.

“By integrating eye tracking technology into Home Run Derby VR, we are able to transport this immersive baseball experience to any location without additional controllers needed. Our fans can simply operate menus by using their eyes.”

This is perhaps where the VR industry has fallen short of expectations over the first few years: cash-conscious consumers do not have the funds to fulfil the promise. These are, after all, individuals who have been stung by various financial potholes over the last decade, and they might be hesitant to invest so handsomely in such an unproven technology.

The focus on enterprise is a much more sensible bet for many VR enthusiasts to follow. Firstly, by working with organizations like Major League Baseball, new applications can be created and experiential attractions can be offered to consumers at the games. This might have a normalising effect on the technology for the mass market.

Secondly, there is a lot more money in the enterprise world than in the individual’s wallet, with decision makers much more enthusiastic about investments when it isn’t linked directly to their bank accounts.

Finally, there are more use cases in the enterprise world. Some of them might be boring, but they are realistic and important for the companies involved. Training exercises are an excellent example.

What this product also brings into the equation is eye-tracking software, offering an entirely new element for developers to consider.

“Our virtual venues come to life as individual audience members can react with various animations when a user makes direct eye contact with them,” said Jeff Marshall, CEO of Ovation, a company which uses VR to help media train customers in public speaking environments.

“As a developer, there’s just no going back once you’ve seen all that eye tracking makes possible.”

From an experience perspective, eye-tracking software can also benefit the gaming world. Foveated rendering is a graphics-rendering technique which uses an integrated eye tracker to reduce the rendering workload, lowering image quality in the peripheral vision. By focusing processing power where it is needed most, the strain placed on the device, and on the experience, is lessened.
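As a rough illustration of the idea, the sketch below picks a per-tile resolution scale from the tile's angular distance to the tracked gaze point. The thresholds and scale factors are invented for illustration and do not come from any real headset SDK.

```python
# Core decision in foveated rendering: shade tiles near the gaze point
# at full quality and drop quality with eccentricity. The thresholds
# and scale factors below are illustrative assumptions only.
import math

def shading_scale(tile_center, gaze, degrees_per_pixel=0.05):
    """Return a resolution scale factor (1.0 = full quality)."""
    dx = tile_center[0] - gaze[0]
    dy = tile_center[1] - gaze[1]
    eccentricity_deg = math.hypot(dx, dy) * degrees_per_pixel
    if eccentricity_deg < 5.0:   # foveal region: full detail
        return 1.0
    if eccentricity_deg < 15.0:  # near periphery: half resolution
        return 0.5
    return 0.25                  # far periphery: quarter resolution

gaze = (960, 540)  # pixel coordinates reported by the eye tracker
print(shading_scale((970, 540), gaze))   # 1.0  -> fovea
print(shading_scale((1400, 800), gaze))  # 0.25 -> periphery
```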

Many have suggested this technology could be at the forefront of the next generation of VR devices, both in the consumer and enterprise world. Whether this is enough to force the potential of VR from promise to reality remains to be seen, but something needs to be done.

Vivendi media mission continues rolling through Europe

Vivendi-subsidiary Canal Plus has announced the €1 billion acquisition of pay-TV operator M7, expanding the business into seven new European markets.

The deal, which is still subject to approval from the European Commission, will take Vivendi into the Netherlands, Belgium, Austria, the Czech Republic, Slovakia, Hungary and Romania. M7 currently has more than three million subscribers across its European footprint and revenues of just over €400 million.

“We are particularly pleased with this acquisition project made possible by Vivendi. The operation would allow Canal Plus Group to approach 20 million subscribers worldwide,” said Maxime Saada, Chairman of Canal Plus’ Board of Directors.

“Our global subscriber base will have almost doubled in five years, with a clear acceleration starting in 2015. This major operation will allow us to strengthen our distribution capacity in order to leverage content originating from our library and our numerous production operations in Europe.”

The Vivendi media mission is no secret in the industry. Acquisitions have become somewhat of a guilty pleasure for the business, and this move is intended to further increase the influence of Canal Plus across the European continent and worldwide. With M7 in the armoury, Canal Plus will have 20 million subscribers worldwide, including 12 million in Europe.

M7 is currently an aggregator of various local and international content, though the acquisition would create additional avenues for Canal Plus to distribute its own content. Canal Plus claims it currently spends €3 billion a year creating content, putting it in the same league as Netflix when you factor in the scale of the subscription bases (Netflix spent $8 billion in 2018 with a subscriber base of roughly 140 million).

Google has another run at the AR world

Google is taking another crack at the growing augmented reality segment with the launch of Glass Enterprise Edition 2.

While the first enterprise product seemingly trundled along without fanfare, Google will be hoping the segment is now ripe enough to make the desired millions. Although this is a technology area which promises huge prospects, sceptics will suggest that society, networks and the supporting ecosystem aren't quite ready to make the dream a reality.

“Over the past two years at X, Alphabet’s moonshot factory, we’ve collaborated with our partners to provide solutions that improve workplace productivity for a growing number of customers – including AGCO, Deutsche Post DHL Group, Sutter Health, and H.B. Fuller,” said Jay Kothari, Project Lead for Glass. “We’ve been inspired by the ways businesses like these have been using Glass Enterprise Edition.

“X, which is designed to be a protected space for long-term thinking and experimentation, has been a great environment in which to learn and refine the Glass product. Now, in order to meet the demands of the growing market for wearables in the workplace and to better scale our enterprise efforts, the Glass team has moved from X to Google.”

This is a massive step for any Google idea. Graduating from the moonshot labs to be listed as a genuine brand in the Google family is a sign executives think there are profits to be made now, not in the future. Over the last couple of months, we’ve seen the likes of Loon and Fi make their way into the real world, and now it is time for Glass to hit the big time.

Google Glass was first brought to the market in 2013, though this wasn’t exactly a riveting success. Perhaps it was just a sign of the ecosystem and society at the time; people just weren’t ready for this type of innovation. However, Google is a company which often demonstrates innovation leadership and it was never going to completely give up on this idea. The products were taken back to the labs and refined.

What you have now is an enterprise-orientated product which has the potential to run into the mass market. This makes sense for two reasons: firstly, there are more immediate use cases in the enterprise world; and secondly, businesses have more money to spend on these types of products than the consumer.

What remains to be seen is whether Google has any long-term interest in the hardware space or whether this is a game-plan to generate momentum in an embryonic segment.

When you look at the smart speaker segment, Google was always set to make more money in software and services than in hardware. As soon as the traditional audio brands got the idea, Google's own products were going to come up short. However, selling the hardware cheap to gain consumer buy-in, while simultaneously demonstrating market appetite to the traditional brands, was an excellent move.

Now that more mainstream brands are starting to develop their own smart speakers, Google can create partnerships to ensure its virtual assistant is exposed to the consumer and make money through the means embedded in its corporate DNA: third-party relationships and online advertising.

Google might well have ambitions to take a leadership position in the AR glasses space, but you can also guarantee it has bigger plans to make profits through the supporting software and services ecosystem.

Facebook restricts Live streaming access

Facebook has introduced new restrictions on its video streaming platform, Live, under which those who break certain other Facebook policies will be banned from the feature for a period of time.

The move comes in response to the live broadcast of the terrorist attack in Christchurch, New Zealand. The social media platform broadcast the incident for 29 minutes, with around 200 people viewing the content, before it was cut. After heavy criticism, Facebook needed to act in an attempt to prevent a repeat of such a broadcast.

“Following the horrific terrorist attacks in New Zealand, we’ve been reviewing what more we can do to limit our services from being used to cause harm or spread hate,” said Guy Rosen, VP Integrity at Facebook.

“As a direct result, starting today, people who have broken certain rules on Facebook – including our Dangerous Organizations and Individuals policy – will be restricted from using Facebook Live.”

Although some might suggest this is a potential limitation of free speech principles, Facebook had to do something about the grey areas. It is unreasonable to expect moderators to view and approve every piece of content, while artificial intelligence technologies are still not advanced enough to tackle the problem. Taking a merit-based approach, removing privileges from those who have already broken the rules, is a less-than-adequate solution, but one of the few options short of shutting down the feature completely.

The ‘one strike rule’ is a tightening of rules which already existed. Facebook has long limited the access of those who break the platform’s rules, though this is a much more stringent approach specific to the Live feature.

“From now on, anyone who violates our most serious policies will be restricted from using Live for set periods of time – for example 30 days – starting on their first offense,” said Rosen. “For instance, someone who shares a link to a statement from a terrorist group with no context will now be immediately blocked from using Live for a set period of time.”

This is an incredibly difficult equation to balance, and this is not a perfect approach; it is still reactive rather than preventative, but it should limit the risk. Unfortunately for Facebook, and everyone in general, whatever is done to limit these abuses, and technological abuses in general, will only create hurdles; there will always be a way around the safeguards.

The only way Facebook can prevent a repeat of this incident is to shut down Live completely, however, the vast majority of those using the feature are doing so as intended. More work needs to be done, but Facebook is attempting to make progress.

Streaming platforms are starting to become less attractive

Netflix started as a platform where old series could be relived, but now, with rivals aiming to replicate the success of the streaming giant, the content world is becoming increasingly fragmented.

The big question which remains is how big the consumer's appetite for content actually is. How many streaming subscriptions are users willing to tolerate?

The news which hit the headlines this morning concerned Hulu. Disney has come to an agreement to purchase Comcast's stake in the streaming service for at least $5.8 billion, in a divorce proceeding which will take five years. The transaction follows confirmation that AT&T sold its 10% stake in Hulu to Disney last month.

Disney consolidating control of Hulu is not much of a surprise to those in the industry, but fan favourites disappearing from the various different streaming services might shock a few consumers.

AT&T has also confirmed it will be pulling WarnerMedia content, such as Friends and ER, from rivals' platforms. The Office, one of the most popular titles on Netflix, will be pulled by owner NBCUniversal; the series, and other NBCUniversal content, will also be pulled from Hulu in favour of parent company Comcast's streaming service, which will launch next year. Disney will also be pulling its headline content, the Marvel movie franchise for example, back behind its own paywall. Amazon Prime has its own exclusive originals, and YouTube has ambitions with this model as well.

Over the next 12-18 months, content will be pulled back from licensing deals to reside only on the owner's streaming platform. Users will find the content world they have come to love is quickly going to change. Some might have presumed the cord-cutting era was one of openness, a stark contrast to the exclusivity of traditional premium media, but it does seem to be heading back in that direction.

It is perfectly reasonable to understand why this is being done. These are assets which need to be monetized, and the subscription model is clearly being favoured over the licensing one. WarnerMedia, 21st Century Fox, AT&T, Comcast and Disney might have had an interest in the licensing model in bygone years, but following the consolidation buzz it has become increasingly popular to create yet another streaming service to add into the mix.

The issue which may appear on the horizon is the fragmented nature of the streaming world; consumers' wallets are only so thick, so how many streaming services can the market handle?

The test over the next couple of months, or years, will be the quality of original programming. Netflix grew its original audience through a library of shows other content companies were ignoring, but today’s mission is completely different; original and local content is driving the agenda.

The question is whether other providers will be able to deliver the same quality. With subscription revenue being spread ever thinner across multiple providers, will there be enough money flowing into the coffers to fund the creation of this content? Will the pressures of increased competition decrease overall quality?

Today it is very easy to find the best and deepest range of content available. You might have to subscribe to more than one service, but at the moment consumers are able to afford it. Tomorrow might be a different case. The more streaming services in the market and the more fragmented the content, the more decisions consumers will have to make; having four or five services is probably unreasonable. And that is only the quality-of-experience question; the mess of different discovery engines is another topic entirely.

The question which remains is whether the economics of a fragmented content segment can support the original content dream which has been promised to consumers, or whether the old-world of low-quality, low-budget, limited and repetitive content returns. Soon enough Disney+ will launch, as will Comcast’s streaming service, to add to Hulu, Netflix, DirecTV, Amazon Prime, YouTube’s premium service, and any others which might be in the mix.

Content will become fragmented and thinner on the platforms before consumers' wallets become strained. How long the budget for content will last in this scenario remains to be seen as executives look to cut corners and increase profitability. It's hard to see how current trends are going to benefit consumers.