UK government grapples with bias in artificial intelligence

Artificial intelligence (AI) has enormous potential for good, but with applications processing data faster than we can comprehend, how do you protect against bias?

To address this issue, the Department for Digital, Culture, Media and Sport (DCMS) has unveiled the Centre for Data Ethics and Innovation, with one of its first projects focusing on programmed or learned bias in the algorithms which power AI.

“Technology is a force for good and continues to improve people’s lives but we must make sure it is developed in a safe and secure way,” said Digital Secretary Jeremy Wright. “Our Centre for Data Ethics and Innovation has been set up to help us achieve this aim and keep Britain at the forefront of technological development.

“I’m pleased its team of experts is undertaking an investigation into the potential for bias in algorithmic decision-making in areas including crime, justice and financial services. I look forward to seeing the centre’s future recommendations to help make sure we maximise the benefits of these powerful technologies for society.”

First up, the new centre will partner with the Cabinet Office’s Race Disparity Unit to explore the potential for bias in crime and justice. As more applications emerge in the world of policing, assessing the likelihood of re-offending for instance, the lack of research into potential bias makes for a very dangerous scenario.

The algorithms in place might never demonstrate any bias, but deploying them without understanding the risk is incredibly irresponsible. When these applications are used to inform decisions about policing, probation and parole, there are very real-world consequences. Proceeding without safeguards against bias leaves outcomes down to chance.

This is of course just one application of AI, though its use is becoming much more common. In recruitment, computer algorithms can be used to screen CVs and shortlist candidates, while in financial services data analysis has long been used to inform decisions about whether people can be granted loans. Unconscious bias can creep into both instances with very detrimental outcomes. In the recruitment case, reports of gender bias have already been circulating.

Technology giant Amazon is one of those firms which got caught unawares. In 2014, Amazon began building an application which would review the CVs of the thousands of applicants it gets every week, giving each CV a rating between one and five stars. In 2015, it realised the application was not assessing the CVs in a gender-neutral manner, favouring male applicants for more technical roles.

The complication arises when machine learning applications search for attributes which are traditionally associated with a role. For a computer, data is everything: if the historical data encodes a stereotype, favouring candidates who fit it appears to be a perfectly logical decision.
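
To see how that ‘logic’ plays out mechanically, here is a minimal sketch in Python using toy, entirely hypothetical CVs and outcomes (not Amazon’s system): gender is never a feature, yet a classifier trained on skewed historical decisions learns to penalise a word that merely correlates with it.

```python
# A minimal, hypothetical illustration of proxy bias in CV screening.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

cvs = [
    "captain of men's rugby team, python developer",
    "python developer, open source contributor",
    "women's chess club president, python developer",
    "women's coding society member, java developer",
]
hired = [1, 1, 0, 0]  # historical outcomes, already skewed against women

vectoriser = CountVectorizer()
X = vectoriser.fit_transform(cvs)
model = LogisticRegression().fit(X, hired)

# The weight on "women" comes out negative: the model has encoded the
# historical skew as if it were a legitimate hiring signal.
for word, weight in zip(vectoriser.get_feature_names_out(), model.coef_[0]):
    print(f"{word:>12}: {weight:+.2f}")
```

From the machine’s point of view the weighting is perfectly rational; the bias lives in the training labels, which is exactly the trap Amazon fell into.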

This type of conundrum is one of the main challenges with AI. As these machines are driven by data and code, it is very difficult to translate ethics, morals, acceptable tolerances, nuance and societal influences into a language they understand. These are also limited applications, built for a single purpose. In the recruitment case, the application looks at past attributes to decide, but does not have the ability to understand context. In this instance, the context would be that sexism is not acceptable, but as the machine does not have the general knowledge or understanding of a human, how would it know?

This is the finely balanced equation which both industry and government have to assess. Without slowing the wheels of progress, how do you protect society and the economy from the known unknowns and unknown unknowns?

What is developing is the perfect catch-22. The known challenges can at least be studied, but without solutions, progress is a risk. Then you have the unknown challenges, those which might be compounded through progress without anyone being aware until it is a complete disaster.

The Centre for Data Ethics and Innovation is an excellent idea which could benefit society in numerous ways. But it faces an almost impossible task.

FCC casts an eye north of 95 GHz

The FCC has unveiled plans to create a new regulatory framework for spectrum above 95 GHz.

While these bands have largely been considered outside the realms of usable spectrum, progress in radio tech has made the prospects much more realistic. And, dare we say it, such a regulatory framework could begin to set the foundations for 6G…

“Today, we take big steps towards making productive use of this spectrum,” said FCC Chairman Ajit Pai. “We allocate a massive 21 gigahertz for unlicensed use and we create a new category of experimental licenses. This will give innovators strong incentives to develop new technologies using these airwaves while also protecting existing uses.”

The Spectrum Horizons First Report and Order creates a new category of experimental licenses for use of frequencies between 95 GHz and 3 THz, valid for 10 years. 21.2 GHz of spectrum will also be made available for use by unlicensed devices. The FCC envisions use cases such as data-intensive, high-bandwidth applications as well as imaging and sensing operations.

With this spectrum now on the table, the line between science fiction and reality could begin to blur. Data throughput rates will become almost unimaginably fast, opening up classes of application the wireless world has so far only theorised about.
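
To put a rough number on ‘unimaginably fast’: the Shannon limit C = B·log2(1 + SNR) ties achievable capacity directly to bandwidth, so a 21.2 GHz unlicensed block dwarfs today’s carriers even at modest signal quality. A back-of-the-envelope sketch in Python (the 10 dB SNR figure is an assumption for illustration):

```python
import math

def shannon_capacity_gbps(bandwidth_hz: float, snr_db: float) -> float:
    """Shannon limit C = B * log2(1 + SNR), returned in Gbps."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear) / 1e9

# Assumed 10 dB SNR, purely for comparison purposes.
print(shannon_capacity_gbps(100e6, 10))   # ~0.35 Gbps on a 100 MHz 4G-class carrier
print(shannon_capacity_gbps(21.2e9, 10))  # ~73 Gbps on the new 21.2 GHz block
```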

“One reason the US leads the world in wireless is that we’ve moved quickly to open up new spectrum bands for innovative uses,” said Commissioner Brendan Carr. “We don’t wait around for technologies to develop fully before unlocking spectrum so that entrepreneurs have the incentives to invest and experiment.”

While such a statement suggests the FCC is doing a wonderful job, blessed with foresight, the industry tends to disagree.

In 2017, the mmWave Coalition was born. Although this is a relatively small lobby group for the moment, it already has some notable members, including Nokia and Keysight Technologies. The group has been calling for a regulatory framework above 95 GHz for 18 months, pointing to developments around the world and stating the US risks falling behind without amendments.

A good example of other initiatives is over in Europe, where the European Telecommunications Standards Institute (ETSI) has created the ISG mWT working group, which is looking at how to make the 50 GHz – 300 GHz range work. The group has already been running trials with a broad range of members including BT, Deutsche Telekom, Intel, InterDigital and Qualcomm.

While the US is certainly taking a step in the right direction, it is worth noting it is by no means the first to get moving beyond the 95 GHz milestone. Europe is leading the charge at the moment.

However, Commissioner Jessica Rosenworcel believes the FCC is being too conservative in its approach.

“I believe that with these way-up-there frequencies, where the potential for interference is so low, we should flip the script,” said Rosenworcel. “The burden should be on those seeking exclusive licenses to demonstrate the interference case and justify why we should carve up an otherwise open space for innovation and experimentation.”

Rosenworcel points to the incredibly short distances signals travel at these frequencies, as well as the creation of new antenna designs, like quasi-optical antennas, to ensure efficiency. With shorter distances and better control of the direction of signals, interference poses little threat, and therefore an unlicensed approach to the spectrum should be prioritised.
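
Her point about range is easy to quantify: free-space path loss grows with the square of frequency, FSPL = 20·log10(4πdf/c), so the same link sheds tens of decibels more at these frequencies than at wifi or cellular bands. A quick sketch (frequencies and distance chosen for illustration):

```python
import math

def fspl_db(freq_hz: float, dist_m: float) -> float:
    """Free-space path loss in dB: 20 * log10(4 * pi * d * f / c)."""
    c = 3e8  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * dist_m * freq_hz / c)

for f_ghz in (2.4, 28, 150):
    print(f"{f_ghz:>6.1f} GHz over 100 m: {fspl_db(f_ghz * 1e9, 100):.0f} dB")
# 2.4 GHz ~80 dB, 28 GHz ~101 dB, 150 GHz ~116 dB: signals above 95 GHz
# die off quickly, which is the core of the low-interference argument.
```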

Commissioner Michael O’Rielly also supports this position.

“While I strenuously advocate for both licensed and unlicensed spectrum opportunities, I understand that it may be a bit premature to establish exclusive-use licenses above 95 GHz when there is great uncertainty about what technologies will be introduced, what spectrum would be ideal, or what size channel blocks are needed,” said O’Rielly.

Both of these messages effectively make the same point: don’t make assumptions. Taking the same old approach to spectrum allocation will not work here; the traditional model of licensed allocation is perhaps unnecessarily rigid. It might be necessary in the future, but granting innovators freedom in the first instance would provide more insight. Perhaps it would be better to react to future developments than to try to guess them.

“Better that than being forced to undo a mess later,” said O’Rielly.

While it is of course encouraging that the FCC is taking such a long-term view on industry developments, it needs to ensure it does not over-complicate the landscape with unnecessary red tape. Future regulation needs to protect innovation and grant the freedom to experiment; a light-touch regulatory environment needs to blossom.

Competent robots demoralise human workers – study

Artificial intelligence and automation might well be the future (and the now, in some cases) but a study from Cornell University suggests robots could negatively impact human performance.

While it might be widely accepted that robotic workers are more efficient than their human counterparts, the dream presented by technologists is an augmented workplace: humans would benefit from the power of a robot, whether physical or virtual, with a symbiotic relationship producing greater output. That is the theory, but the research disputes it.

“Think about a cashier working side-by-side with an automatic check-out machine, or someone operating a forklift in a warehouse which also employs delivery robots driving right next to them,” said Guy Hoffman, Assistant Professor in the Sibley School of Mechanical and Aerospace Engineering.

“While it may be tempting to design such robots for optimal productivity, engineers and managers need to take into consideration how the robots’ performance may affect the human workers’ effort and attitudes toward the robot and even toward themselves. Our research is the first that specifically sheds light on these effects.”

The test pitted university students against a robot in a relatively simple task. Both parties were given random text, asked to count the number of ‘G’s and then deposit the text in the relevant bin, marked with a number to denote the count. Depending on the performance of the human compared to the robot, the participants were entered into a lottery at the end of each round with a monetary reward. The interesting part of the test was the competence of the robot, which was varied throughout the rounds.

What the research found could have quite an impact on how robotics, artificial intelligence and automation are applied in the workplace: as the robot performed better, people rated its competence higher, its likability lower and their own competence lower.

In short, human workers were demoralised when effective and efficient automation was introduced.

While this might all seem very obvious, the long-term promise of robotics, artificial intelligence and automation could be completely undermined. Optimists in the industry have promised that the application of these technologies would enhance human performance, not replace it; however, if their introduction has the power to degrade human performance, you have to wonder what the point would actually be.

What is worth noting is that this research is not directly applicable to most of the use cases which have been discussed, but the learnings should be factored into any initiatives. In most cases, the technology would perform a different role to the human, whereas this experiment from Cornell University pitted human against machine. However, such is the general desire for success and credibility that people will naturally compare themselves to robot performance in the first instance, even if there is no direct crossover.

In the telco world, the main applications to date have been in network automation and customer services, the latter of which is where the risk could become more apparent. There is a feeling customer service agents will eventually be replaced entirely by robotics, artificial intelligence and automation, so in the intervening period the risk of poor performance is very relevant.

That said, regardless of where robotics, artificial intelligence and automation are being applied in the business, this is certainly material worth thinking about. If the overall objective is to improve the end product and performance, the morale of human workers will almost certainly have to be taken into account. That is, of course, unless we are all doomed to be replaced by robots entirely…

Almost half of UK households value streaming video over pay TV

A report by EY has shown that 44% of UK households think they get better value from streaming services than from pay TV operators.

This is one of the key findings from “Zooming in on household viewing habits”, a follow-up deep-dive into the annual survey EY conducted last September, which covered 2,500 UK families. The message from UK consumers was corroborated by separate, US-focused research by Deloitte, in which nearly half of all pay TV subscribers said they were dissatisfied with their service and 70% felt they were getting too little value for their money.

One of the key themes coming out of the deep-dive into UK families’ media consumption habits is the ascendancy of content consumed over the Internet, at the expense of pay TV. Although cord-cutting has not yet hit the UK hard, 54% of all families already spend more time on the Internet than in front of the traditional TV, and two-thirds of young users primarily watch content on streaming platforms.

“It’s no surprise the UK is becoming a nation of streamers, but our research shows just how enthusiastically households have embraced it. Over the next 12-18 months we will see the launch of new streaming services to further sate the UK’s appetite for content,” said Martyn Whistler, Global Lead Media and Entertainment Analyst at EY. “However, reports of the demise of traditional TV seem a little premature. Our research shows their popularity is undiminished, with viewers watching them more now than in previous years.”

In what could spell even more bad news for the pay TV operators, when consumers do watch broadcast TV, 51% of households mainly just watch the five traditional “free” channels (if you do not count the £150 TV licence as “pay”), up from 46% in 2017.

In general, consumers are much more tolerant of pay TV carrying ads than they are of streaming services doing so. Still, more consumers are also willing to pay for the content they like. For example, Netflix ranked number one in the table of apps by consumer spending, according to App Annie, and the Deloitte report showed that in the US a consumer would subscribe to up to three on-demand streaming services at the same time. The willingness to pay has even extended to catch-up watching, especially to get rid of the ads: 18% of those surveyed would be happy to pay more to stream ad-free catch-up TV, up from 16% in 2017.

Another trend that stood out in the report is the diversification of content consumption platforms, and the problems it brings. A third of families stream video on multiple screens, rising to 62% among 18-24 year-olds. Meanwhile, a quarter of all households have found it hard to track the availability of their favourite content across different services, apps and platforms. This number climbs to 39% among the 18-24 year-olds, who ought to be the more tech-savvy group.

These trends combined have implications for how content is produced, distributed and monetised. For example, if consumers mostly binge-watch content on streaming services (the average Netflix user streams two hours a day), the idea of the “episode”, which has worked on broadcast TV, becomes less relevant. Should a long series be released all at once on a streaming platform, or made available episode by episode as conventional TV broadcasting does? And how should pay TV services improve not only their users’ account management but also content ID management, to provide a more pleasant experience for cross-platform and cross-device users?

As Praveen Shankar, EY’s Head of Technology, Media and Telecommunications for the UK & Ireland, put it: “Our survey demonstrates that audiences are struggling to keep track of their favourite content across various platforms and they are confused by the choices available to them. Technology, Media and Telecoms (TMT) companies need to move away from programme guides and big budget marketing and build artificial intelligence (AI) enabled recommendation engines to push content. This will improve user experience, reduce costs and maximise assets.”

On-demand video streaming has gained further impetus in the last few days. CanalPlus has just launched its own streaming service, Canal+ Séries, and Apple is widely expected to unveil its own video on-demand service on 25 March at an event on its campus.

Silicon Valley’s grip on innovation is loosening – KPMG

Silicon Valley is up there with Wall Street as a driver of US economic dominance, but this leadership position is increasingly coming under threat, including from those pesky Europeans.

As it stands, California still maintains its position as the utopia for technology enthusiasts and innovators. There are numerous reasons for this, ranging from culture to cash and climate, but this lofty position no longer looks so secure as alternative cities woo the next generation of economic disruptors.

KPMG is one of those predicting the downfall of Silicon Valley. After conducting a survey, the consultancy claims 58% of respondents believe the global centre of innovation will have moved out of Silicon Valley within the next four years. Other US cities are of course lodging a challenge, New York, Austin and Boston for example, though Europe and Asia are also having a poke.

Looking at the top ten alternatives which could lead a challenge, New York ranks first, while Beijing, Tokyo, London and Shanghai feature in the top five. Taipei, Singapore, Seoul, Boston and Austin complete the top ten, but there are several other European competitors floating around.

There are numerous factors which KPMG has taken into account, and some of these will start to weigh heavily on the Silicon Valley case. With 5G being hyped so considerably over the last few years, most of these cities will be on par when it comes to infrastructure, but you also have to consider the local talent pool, immigration laws, cost of living, availability of private and public investment, mass transit systems and the attractiveness of a city to millennials.

Investment manager Byrne Hobart, in a separate Medium post, also predicts the downfall of Silicon Valley as the global centre of innovation. Hobart questions whether the culture of innovation is dying out in the region, with the money men seeking more stable and predictable investments, but another interesting point is the ‘cost of existing’, as he puts it.

“As long as higher rents raise the cost of starting a pre-revenue company, fewer people will join them, so more people will join established companies, where they’ll earn market salaries and continue to push up rents,” said Hobart.

Not only does the high cost of living prevent talent from joining start-ups, the preference for established companies and their lucrative salaries pushes up rents, compounding the problem. This also prevents lower-income earners in other segments (arts, fashion or media, for example) from living in the region, restricting diversity and making it a less attractive place for the liberally minded individuals on whom the success of Silicon Valley was built.

Researching the availability of technology jobs across the US, there are of course numerous regions growing faster year-on-year than Silicon Valley, though this would be expected considering the Valley’s overwhelming concentration of tech. However, cities like Seattle, Austin, Denver and Huntsville are increasingly home to more technology companies, and when you factor in the more proportionate cost of living, they make for appealing alternatives.

Another very interesting development over the last couple of weeks comes from France. The French government has announced an overhaul of visas for employees of tech companies, making it easier for talent to be recruited internationally. Considering the anti-globalisation and isolationist trends we are seeing in the US, this is a development worth taking note of.

There are now 10,000 start-ups that meet the requirements to access the French Tech Visa and hire foreign employees more easily. The visa costs €368 in administrative fees, is valid for four years (and is renewable), allows employees to switch jobs during this period, and extends to family members. Just as the US is making it more difficult to hire talent, the French government is attempting to empower start-ups to go and seek the best innovators around and attract them to the country.

As far as a challenge to Silicon Valley dominance goes, Europe is putting itself in a very strong position. Not only are many of its cities affordable, they are attractive to millennials (culture, arts, history), a key demographic for technology success moving forward. The European Union also creates a wider society and economy, helping organisations grow in multiple markets and source talent from a wider pool.

Another factor to consider is the focus of these regions. A further KPMG research note suggests US companies are looking towards AI as a market disruptor, while IoT is attracting the interest of European companies. Perhaps this suggests a split in the innovation pool, with AI hubs concentrated in North America, while IoT dominance could be wrested across the pond to Europe. R&D is driven by customer needs and demands, so this is not an impossible conclusion. Interestingly enough, Japanese companies are leading the demand for robotics, another potential fragmentation of the innovation pool.

Silicon Valley is not going to disappear, but its dominant position is not only being eroded domestically, but internationally. The technology ecosystem is of course going to evolve over the next few years, but who knows where the global hub of innovation will be; there are a lot of candidates putting their hands up.

Zuckerberg’s vision for Facebook: as privacy-focused as WhatsApp

The Facebook founder has laid out his plan for how Facebook will evolve, with a focus on privacy and data security, and promised more openness and transparency during the transition.

In a long post published on Facebook, Mark Zuckerberg first recognised that, going forward, users may prefer private communication to socialising publicly. He used the analogy of town squares versus living rooms. To facilitate this, he aims to use the technologies of WhatsApp as the foundation on which to build the Facebook ecosystem.

Zuckerberg laid out principles for the next steps, including:

  • Private interactions: this is largely about users’ control over who they communicate with, safeguarded by measures like group size control and limits on public stories being shared;
  • End-to-end encryption: this is about encrypting messages going through Facebook’s platforms. An interesting point here is that Zuckerberg admitted Facebook’s security systems can read the content of users’ messages sent over Messenger. WhatsApp already implements end-to-end encryption and does not store the encryption keys, which makes it impossible for the company to share the content of communication between individuals with any third party, including the authorities (see the sketch after this list). Zuckerberg recalled the case of Facebook’s VP for Latin America being jailed in Brazil to illustrate his point;
  • Reducing permanence: this is mainly about giving users the choice to decide how long their content (messages, photos, videos, etc.) is stored, to ensure what they said many years ago does not come back to haunt them;
  • Safety: Facebook will guard users’ data against malicious attacks;
  • Interoperability: Facebook aims to make its platforms interoperable with each other, and may extend this to SMS too;
  • Secure data storage: one of the most important points here is that Zuckerberg vowed not to store user data in countries which “have a track record of violating human rights like privacy or freedom of expression”.
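
For readers wondering why end-to-end encryption puts message content beyond even Facebook’s reach, here is a minimal sketch of the principle in Python using the PyNaCl library (illustrative only; WhatsApp actually uses the Signal protocol rather than this exact scheme):

```python
# pip install pynacl
from nacl.public import PrivateKey, Box

# Each user generates a key pair on their own device; private keys
# never leave the device, so the server never holds them.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts for Bob using her private key and Bob's public key.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"meet at the town square at noon")

# The server relays only this ciphertext. Without a private key from
# one of the endpoints, it cannot recover the plaintext.
receiving_box = Box(bob_key, alice_key.public_key)
print(receiving_box.decrypt(ciphertext))  # b'meet at the town square at noon'
```

A provider that never holds the private keys has nothing meaningful to hand over, which is the crux of the Brazil episode Zuckerberg cites.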

To do all these right, Zuckerberg promised, Facebook is committed to “consulting with experts, advocates, industry partners, and governments — including law enforcement and regulators”.

None of these principles are new or surprising, and they are an understandable reaction to recent history, in which Facebook has been battered by scandals over both data leaks and the misuse of private data for monetisation purposes. However, a couple of questions remain unanswered:

  1. What changes does Facebook need to make to its business model? In other words, when Facebook limits its own ability to mine user data, it weakens its value to targeted advertisers. How will it convince investors this is the right step to take, and how will it compensate for the loss?
  2. Is Facebook finally giving up its plan to re-enter markets like China? Zuckerberg has huffed and puffed over recent years without bringing down the Great Wall. While his peers at Apple have happily handed over the keys to iCloud and Google has been working hard, secretly or not so secretly, to re-enter China, how will the capital market react to Facebook’s public statement that “there’s an important difference between providing a service in a country and storing people’s data there”?

Telcos need to seriously think about how to sell to consumers

Following the news that Sky has been slapped on the wrist for misleading claims during a 2018 advertising campaign, marketers need to have a long and hard think about whether they are doing a good job.

The most recent assault on the marketing strategies of the telcos comes from the Advertising Standards Authority (ASA), with the watchdog ruling that Sky’s campaign suggesting customers would receive a stronger wifi signal throughout the house thanks to its routers was misleading. The campaign featured characters from ‘The Incredibles’ franchise, running across TV and the mainstream press.

The campaign was originally challenged by BT and Virgin Media, with both suggesting the claims were misleading as there was no way to substantiate the assertion. And the ASA agreed. In some cases, Sky’s router might be able to improve wifi signal throughout the home, but due to the breadth of different homes, each with their own structural design, it is an impossible claim to justify. The ad was far too generalist and deemed misleading.

“Unfortunately for Sky, its promise of a strong wifi signal all over your house has been shown to be misleading, and while it is by no means unique in falling foul of the ASA, it will be stung by this ruling the regulator has upheld against it,” said Dani Warner of uSwitch.com.

“Broadband providers are no longer allowed to make such exaggerated claims about potential speeds following the ASA’s major clampdown at the end of 2017, so they have had to become more imaginative in how they stand out from the pack with their advertising.”

This is an area the ASA has been quite hot on in recent years; telcos should not be allowed to make such generalist claims, intentionally misleading customers over performance. Especially in an age where advertising can be personalised on such a dramatic scale, at best it is lazy and incompetent, at worst it is directly and intentionally lying.

What is worth noting is that Sky can potentially boost signal throughout the home, though additional equipment would be required to make this possible; this is not mentioned in the advertising campaign, however. The ASA ruled that some of the claims made in the ad could be substantiated, but it is no longer allowed to run in its current form.

This is of course not the only area where telcos are being challenged in the world of advertising. ‘Fibre’ claims are another, the ‘up to’ metric has been removed, and the telcos are being forced to detail speeds during peak times. Another factor to consider is the upcoming 5G service. Do any of the telcos have a clue how they are going to sell the service to consumers? We do not believe the idea of ‘bigger, faster, meaner’ will work, at least not for the first few years.

Starting with the ‘up to’ claim, this is one which plagued the consumer for years. Masses of customers were duped into buying promised speeds which could only be delivered to a fraction of them. Thankfully, the ASA changed the rules, forcing the telcos to be more accurate in how they communicate with potential customers.

Not only did this ruling mean the ‘up to’ claim had to be avoided, it also forced the telcos to quote speeds during peak times. This more readily informs the consumer of the service they are actually likely to experience, as opposed to the dreamland most telcos seem to think we live in.

The terms ‘fibre’ and ‘full-fibre’ have also been challenged, though telcos can still get away with some nefarious messaging. Regardless of whether there is fibre in the connection, and there generally will be at some point, the ‘last mile’ is where the difference is made to broadband speeds. If it is copper, you will never get the same experience as fibre; however, telcos are still able to mention fibre in advertising.

The ASA has done some work to clear this up, in all fairness, though we still feel there is opportunity to abuse the trust of the consumer. And the telcos have shown that when there is an opportunity to be (1) at best lazy or (2) at worst directly misleading, they will take it.

The final area which we want to discuss takes us into the world of mobile and 5G. The telcos have always leant on the idea of ‘faster, bigger, meaner’ to sell new services to customers, or lure subscriptions away from competitors, but 5G presents a conundrum for the marketers: do consumers need faster speeds right now?

4G delivers a good experience to most, and if it doesn’t, there is generally a good reason for this (congestion, interference, remote location, being indoors, etc.). 4G will continue to improve both in terms of speed and coverage over the next few years, and as it stands, there are few (if any) services which demand more than 4G is or will be capable of.

Another factor to consider is price. Many consumers will want the fastest available, even if they don’t need it, but the premium placed on 5G contracts might be a stumbling block. EE has already hinted 5G will be more expensive than 4G, though details have not been released yet. In the handset segment, consumers have shown they are more cash-conscious, especially when there is little to gain through upgrades, and this attitude is heading across to the tariff space as purchasing savviness increases.

“I don’t think there are many great telco brands out there; most consumers see them more as a utility,” said Ed Barton, Chief Analyst at Ovum. “T-Mobile USA is an exception with their customer champion, ‘un-carrier’ positioning, but there is no branding even approaching the effectiveness of, say, Apple’s.

“If 5G is sold only as a faster G, sales will be slow, and it’s up to the entire ecosystem to create the apps, services and use cases which can only exist because of 5G network capabilities. These will probably rely on some combination of edge computing, high-volume data transfer, low latency and maybe network slicing. An early use case is domestic broadband; however, as 5G networks evolve, the use cases should proliferate relatively quickly.”

If consumers are becoming more cash-conscious and have perfectly agreeable speeds on their 4G subscriptions, the old telco marketing playbook might have to be torn up. The big question is whether the ideas are there to make the 5G dream work. Differentiation is key, but few telcos have shown any genuinely interesting ideas to differentiate.

One excellent example is over at O2 with its Priority initiative. Through partnerships with different brands, restaurants, gig venues and companies, customers are given freebies every week (a Nero coffee on a Tuesday) or special discounts periodically (£199 trip to Budapest). It leverages O2’s assets, the subscription base, allowing O2 to add value to both sides of the equation without monstrous expense. This has been a less prominent aspect of O2 advertising in recent years; perhaps the team is missing a trick.

Another, less successful, example of differentiation is getting involved with the content game. BT has been pursuing this avenue for years, though this expensive bet has seemingly been nothing more than a failure, with former CEO Gavin Patterson heading towards the door as a result.

This is not to say content cannot be a differentiator, however. The content aggregator business model leverages the exclusive relationship telcos have with their subscribers, streamlining the fragmented content landscape into a single window. Again, it uses assets the telco already has, adding value to both sides of the equation. It also allows the telcos to get involved in the burgeoning content world without having to adopt a risky business model (content ownership) to challenge the existing and dominant members of the ecosystem.

In France, Orange is making a play for ownership of the customer’s ‘smart ecosystem’, offering new services such as storage and security, while the same play is being made by Telefonica in South America through Aura. These offerings provide differentiation, as well as an opportunity to make more revenue through third-party services. It’s a tough segment, as it will put them head-to-head with the likes of Google’s and Amazon’s digital assistants, but it is a differentiator.

By having these initiatives in place, marketers have something unique to go to market with, enticing consumers with promises which are genuinely different.

Three is a company taking a slightly different approach, targeting the consumer’s appetite for more data as opposed to faster speeds. Here, the team is leaning on ‘binge-watching’ trends, offering huge data bundles, but you have to wonder whether this is sustainable in the long run when it comes to profitability and customer upgrades. There is only so long a company can persist in the ‘race to the bottom’.

“There are too many claims in an attempt to stand out in a crowded market,” said Paolo Pescatore, Tech, Media & Telco Analyst at PP Foresight.

“This is not the first time and won’t be the last. It will only proliferate with the rollout of fibre broadband and 5G services. Consumers are happy to pay for the service they’ve signed up for, not to be misled. In essence, telcos are struggling to differentiate beyond connectivity. There’s a role for a provider to be novel and provide users with value through additional services and features.”

With the ASA chipping away at what marketers can and cannot say, and the traditional playbook becoming dated and irrelevant, telcos need to take a new approach to selling services to the consumer. The winners of tomorrow will not necessarily be the ones with the best network (O2 currently sits at the bottom of the network rankings yet has the largest market share in the UK), but the telcos who can most effectively communicate with consumers.

5G offers an opportunity for telcos to think differently, as does the emergence of the smart ecosystem. Other product innovations, such as AI-driven routers, which can intelligently manage bandwidth allocation in the home, could be used as a differentiator, but it won’t be long before these become commonplace.

At the moment, all the bold claims being made by telcos, each competing in a game of one-upmanship, are merging into white noise. The telcos have lost the trust of the consumer, many of whom have cottoned on to the claims being nothing more than chest-beating. The telcos need to get smarter, and it will be interesting to see whether any unique approaches emerge to capture the imagination of today’s cash-conscious, technologically aware and savvy consumer.

Fail to do so, and the telcos might as well start calling themselves utilities.

Reports of Google China’s death are greatly exaggerated

Google engineers have found that the search giant has continued with its work on the controversial search engine customised for China.

It looks like our conclusion that Google has “terminated” its China project may have been premature. After the management bowed to pressure from both inside and outside the company to stop work on the customised search engine for China, codenamed “Dragonfly”, some engineers have told The Intercept that they have seen new code being added to products meant for the project.

Although the engineers on Dragonfly were promised reassignment to other tasks, and many of them have been, Google engineers said they noticed around 100 people still sitting under the cost centre created for the Dragonfly project. Moreover, about 500 changes were made to the relevant code repositories in December, and over 400 changes between January and February of this year. The code has been developed for the mobile search apps that would be launched for Android and iOS users in China.

There is the possibility that these are residuals from the suspended project. One source told The Intercept that the code changes could be attributed to employees who have continued this year to wrap up aspects of the work they were doing to develop the Chinese search platform. But it is also worth noting that the Google leadership never formally sounded the death knell of Dragonfly.

The project, which first surfaced last November, has angered quite a few Google employees, who voiced their concerns to the management. It was also a focal point of Sundar Pichai’s Congressional testimony in December. At that time, multiple members of Congress questioned Pichai on this point, including Sheila Jackson Lee (D-TX), Tom Marino (R-PA), David Cicilline (D-RI), Andy Biggs (R-AZ) and Keith Rothfus (R-PA), according to the transcript. Pichai’s answers were carefully worded; he repeatedly stated that “right now there are no plans for us to launch a search product in China”. When challenged by Tom Marino, the Congressman from Pennsylvania, on the company’s future plans for China, Pichai dodged the question by saying “I’m happy to consult back and be transparent should we plan something there.”

On learning that Google has not entirely killed off Dragonfly, Anna Bacciarelli of Amnesty International told The Intercept, “it’s not only failing on its human rights responsibilities but ignoring the hundreds of Google employees, more than 70 human rights organizations, and hundreds of thousands of campaign supporters around the world who have all called on the company to respect human rights and drop Dragonfly.”

While Sergey Brin, who was behind Google’s decision to pull out of China in 2010, was ready to stand up to censorship and dictatorship, which he knew too well from his childhood in the former Soviet Union, Pichai has adopted a more mercantile approach towards questionable markets since he took over the helm at Google in 2015. In a more recent case, Google (and Apple) refused to take down the app Absher from their app stores in Saudi Arabia, with Google claiming that the app does not violate its policies. The app allows men to control where women travel, and offers alerts if and when they leave the country.

This has clearly irritated lawmakers. Fourteen House members wrote to Tim Cook and Sundar Pichai: “Twenty first century innovations should not perpetuate sixteenth century tyranny. Keeping this application in your stores allows your companies and your American employees to be accomplices in the oppression of Saudi Arabian women and migrant workers.”