Facebook is reading minds while Amazon perfects text-to-speech

A Facebook-funded study has achieved a breakthrough in decoding speech directly from brain signals, just as AWS has made automated speech more realistic.

The study, funded by the creepily-named Facebook Reality Labs, was conducted by the University of California, San Francisco. Its findings were published yesterday under the heading ‘Real-time decoding of question-and-answer speech dialogue using human cortical activity’. It claims to have achieved breakthroughs in the accuracy of identifying speech from the electrical impulses in people’s brains.

The clever bit doesn’t seem to lie in the actual reading of these impulses, but in using algorithms and context to narrow down the range of possible sounds attributable to a given piece of brain activity. This helps distinguish between words composed of similar sets of sounds and thus improves accuracy, with a key piece of context being the question asked. Thus this breakthrough is as much about AI and machine learning as anything else.
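The general idea of weighting acoustic likelihoods by contextual priors can be sketched with a toy Bayesian example. The words, probabilities and function below are purely illustrative; the study’s actual decoder is far more sophisticated:

```python
def decode(likelihoods, context_prior):
    """Combine P(signal | word) with P(word | question) and normalise,
    i.e. posterior proportional to likelihood times prior."""
    posterior = {w: likelihoods[w] * context_prior.get(w, 0.0) for w in likelihoods}
    total = sum(posterior.values())
    return {w: p / total for w, p in posterior.items()}

# Two acoustically confusable candidates...
likelihoods = {"fine": 0.4, "vine": 0.6}
# ...but the question "How are you feeling?" makes one far more plausible.
context_prior = {"fine": 0.9, "vine": 0.1}
posterior = decode(likelihoods, context_prior)  # "fine" now wins comfortably
```

The point is simply that even when the raw signal favours the wrong word, a strong enough prior from the conversational context can flip the decision.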

At the same time Amazon Web Services (AWS) has announced a new feature of its Polly text-to-speech managed service. The specific announcement is relatively minor – the ability to give the resulting speech a newsreader style of delivery – but it marks a milestone in the journey to make machine-generated speech as realistic as possible.
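Under the hood the newsreader delivery is selected via SSML on Polly’s neural engine. A minimal sketch using the boto3 SDK might look like the following; the voice, filename and helper names are illustrative, and the actual Polly call needs AWS credentials, so only the SSML helper is exercised here:

```python
def build_newscaster_ssml(text):
    """Wrap plain text in the SSML tag that selects Polly's newscaster style."""
    return '<speak><amazon:domain name="news">{}</amazon:domain></speak>'.format(text)

def synthesize_newscast(text, voice="Matthew", out_path="newscast.mp3"):
    """Ask Polly to read `text` in newscaster style and save the MP3.
    Requires AWS credentials; the style needs the neural engine."""
    import boto3  # imported lazily so the SSML helper works without the SDK
    polly = boto3.client("polly")
    response = polly.synthesize_speech(
        Engine="neural",
        VoiceId=voice,
        TextType="ssml",
        Text=build_newscaster_ssml(text),
        OutputFormat="mp3",
    )
    with open(out_path, "wb") as f:
        f.write(response["AudioStream"].read())
```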

When you combine the potential of these two developments, two eventualities spring to mind. The first is an effective cure for muteness without the need for interfaces such as keyboards, which would be amazing. The second is somewhat more ominous: a world in which we can no longer be sure we’re communicating with an actual human being unless we’re face-to-face with them.

The AWS post makes joking reference to HAL 9000 from the film 2001: A Space Odyssey, but thanks in part to its own efforts and those funded by Facebook, that sort of thing is looking less like science fiction and more like science fact with every passing day.



Using machine learning as a stethoscope for 5G

Telecoms.com periodically invites third parties to share their views on the industry’s most pressing issues. In this article Yuval Stein, AVP Technologies at TEOCO, looks at the use of machine learning to optimise the rollout of 5G.

The invention of the stethoscope was thanks to shyness rather than a spark of genius. In 1816, French doctor René Laennec felt that listening to a young woman’s heart by pressing his ear to her chest wasn’t appropriate. Instead, he rolled up some paper and found that he was able to hear much better.

The earliest designs of stethoscopes were simple wooden tubes, and it was many years before the instrument we see as emblematic of healthcare was created. But the stethoscope was more than just a useful tool; it led to a new way of doing medicine. Before, it was normal to treat symptoms rather than underlying causes. Now, through this new device, doctors had insight into what was going on inside the body, and were better able to understand the diseases behind the symptoms.

This shift from treating symptoms to treating causes seems natural to us now, but at that time doctors would treat a fever with no real idea of what was causing it. A similar change is now necessary with mobile networks. Increased complexity means that we need a new way of looking at these networks, to find the root causes of faults rather than only treating the symptoms.

Machine learning as a stethoscope

The sheer amount of data that a 5G network will produce is going to be overwhelming. More data is good, of course, because the more data we have on a network, the better we can understand the issues that may be causing problems for users. But all this data needs to be analysed. In the past, this was simple—more data meant hiring more people to analyse and formulate actionable conclusions from the data.

This is no longer tenable. But it’s also no longer possible to rely on simple forms of automation in order to react to regular and more obvious issues. 5G is different from previous network generations, in that many new technologies and architectural innovations are being introduced at the same time. These technologies include NFV/SDN, edge computing, new radio access technologies and more.

This new complexity means we need new tools to examine the network. And this is where machine learning becomes just like the stethoscope—not just a tool, but a shift in how things are done. The use of machine learning can identify patterns and reduce the need for human oversight—a vital means of increasing operational efficiency by reducing headcount. But the real change is shifting from fixing issues to detecting underlying issues—even those that don’t linger in the network for long.

Treating the causes, not the symptoms

The rise of virtualised, software-driven networks has meant that service assurance is more decentralised. This means more network alarms; even with automation it’s still often impossible to determine where the real problems reside. This is particularly an issue where faults are intermittent, since the symptoms may last for far longer than the fault itself. Manually examining service alarms will give an engineer no real clue as to where to start fixing the underlying problems.

Also, there is a big difference between being reactive and being proactive in maintaining a level of network assurance. A simple example would be network bandwidth that is too low to provide a certain service, with an alarm set for when this happens. Automation would mean that the fix happens without any intervention. But a step further would be to use statistical techniques, such as trend analysis and forecasting, to detect abnormalities in the network. These tools would pre-empt a situation that would result in poor service, acting in anticipation of a glitch rather than reacting when the issue actually arises. This isn’t about fixing problems, but preventing them before they ever happen, addressing the underlying causes before they have a chance to take root and cause havoc.
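As a concrete, if drastically simplified, illustration of the forecasting idea: a straight line fitted to recent utilisation samples can estimate how long before a link saturates. The function and numbers below are hypothetical, not drawn from any vendor’s product:

```python
import numpy as np

def hours_until_saturation(utilisation, capacity=100.0):
    """Fit a linear trend to hourly utilisation samples and estimate how many
    hours remain before the trend crosses capacity.
    Returns None if utilisation is flat or falling."""
    t = np.arange(len(utilisation), dtype=float)
    slope, intercept = np.polyfit(t, utilisation, 1)
    if slope <= 0:
        return None
    t_cross = (capacity - intercept) / slope          # when the trend hits capacity
    return max(t_cross - (len(utilisation) - 1), 0.0)

# Utilisation climbing 5 points an hour from 50%: roughly 6 hours to saturation,
# giving operations time to act before any alarm would fire.
eta = hours_until_saturation([50, 55, 60, 65, 70])
```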

But machine learning can go further. Self-learning algorithms mean that operators can create a baseline profile that identifies when exceptions occur. Rather than determining a threshold for an alarm, this allows for the creation of adaptive thresholds. An example of this would be an area where many homes are being built—at some point there will be a lot more traffic in that area, but engineers don’t have the time to check how closely construction timelines are being followed. Instead, the network behavior should change to meet the demand automatically. While a hard-coded threshold would need to be reconfigured, machine learning means that thresholds are adjusted automatically.
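A rolling-baseline detector gives the flavour of such adaptive thresholds. The window size and sensitivity below are arbitrary illustrative choices, not anything from TEOCO’s products:

```python
from collections import deque

class AdaptiveThreshold:
    """Flag samples more than k standard deviations above a rolling baseline.
    As traffic grows, the baseline follows it, so the threshold adapts
    without anyone reconfiguring a hard-coded limit."""

    def __init__(self, window=24, k=3.0):
        self.window = deque(maxlen=window)
        self.k = k

    def update(self, value):
        alarm = False
        if len(self.window) >= 2:
            mean = sum(self.window) / len(self.window)
            var = sum((x - mean) ** 2 for x in self.window) / (len(self.window) - 1)
            alarm = value > mean + self.k * var ** 0.5
        self.window.append(value)  # the baseline absorbs the new traffic level
        return alarm
```

A sudden spike against a flat baseline raises an alarm, but if the higher level persists (new homes, new demand) the baseline widens and the alarms stop on their own.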

These examples seem fairly straightforward—but millions of similar decisions need to be made every day based on an overwhelming amount of data. Operators have known that automation is necessary for some time, but machine learning is key to decision-making in a 5G network. Without it, operators will be reduced to guesswork, lacking the tools to make the most out of their new—and expensive—networks.


Yuval Stein is the AVP of Product Management and Service Assurance Products at TEOCO. With more than 15 years of experience in the service assurance domain, Yuval has held key product management positions throughout his career. He brings his knowledge to the fault, performance and service domains, and uses his hands-on experience to adapt service assurance solutions to the industry’s challenges: digital services and network technologies.

What to bin and what to keep: the big data conundrum

Figuring out what is valuable data and binning the rest has been a challenge for the telco industry, but here’s an interesting dilemma: how do you know the unknown value of data for the use cases of tomorrow?

This was broadly one of the topics of conversation at Light Reading’s Software Defined Operations & the Autonomous Network event in London. Everyone in the industry knows that data is going to be a big thing, but the influx of questions is almost as overwhelming as the number of data sets available.

“90% of the data we collect is useless 90% of the time,” said Tom Griffin of SevOne.

This opens the floodgates of questions. Why do you want to collect certain data sets? How frequently are you going to collect the data? Where is it going to be stored? What are the regulatory requirements? How in-depth does the data need to be for the desired use? What do you do with the redundant data? Will it still be redundant in the future? What is the consequence of binning data which might become valuable in the future? How long do you keep information for with the hope it will one day become useful?

For all the promise of data analytics and artificial intelligence in the industry, the telcos have barely stepped off the starting block. For Griffin and John Clowers of Cisco, identifying the specific use case is key. While this might sound very obvious, it’s amazing how many people are still floundering; but once this has been identified, machine learning and artificial intelligence become critically important.

As Clowers pointed out, with ML and AI data can be analysed in near real-time as it is collected, assigned to the right storage environment (public, private or traditional, depending on regulatory requirements) and then sent on to the right data lakes or ponds (depending on the purpose for collecting the data in the first place). With the right algorithms in place, the process of classifying and storing information can be automated, freeing up engineers’ time to add value while also keeping an eye on costs. With the sheer volume of information being collected increasing very quickly, storage costs could otherwise rise rapidly.
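The classify-and-route step can be caricatured as a simple policy function. The tier names and record fields here are invented for illustration, not drawn from Cisco’s or anyone else’s tooling:

```python
def route(record):
    """Decide where a data record should live, based on simple policy rules.
    All tier names and fields are hypothetical."""
    if record.get("contains_pii"):
        return "private-cloud"                    # regulatory requirements first
    if record.get("use_case"):
        return "data-lake/" + record["use_case"]  # known purpose: keep it warm
    return "cold-archive"                         # unknown future value: keep cheaply

routed = route({"contains_pii": False, "use_case": "churn-prediction"})
```

Note the third branch: the “unknown value” dilemma above is handled not by binning the data but by shunting it to the cheapest possible storage, deferring the keep-or-bin decision.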

And this is before the 5G and IoT trends have really kicked in. If telcos are struggling with the data demands of today, how are they going to cope with the tsunami of information which is almost guaranteed in tomorrow’s digital economy?

Which brings us back to the original point. If you have to be selective about the information you keep, how do you know what information will be valuable for the use cases of tomorrow? And what will be the cost of not having this data?

Nokia’s tour of China continues with China Mobile 5G AI partnership

Nokia has expanded its relationship with the world’s largest mobile operator China Mobile to jointly develop artificial intelligence (AI) and machine learning capability on 5G.

These have been busy days at Nokia. One day after its collaboration with Tencent was announced, Nokia’s new R&D-related MOU in China is focused on AI and machine learning in a 5G environment, and this time the joint lab will be located in Hangzhou, in eastern China. China Mobile will define use cases and standardise the open APIs for third-party partners, while Nokia will develop and verify demo solutions using its 5G technologies, including its Cloud RAN and open edge server.

China is aiming to lead the world’s AI industry. A national strategy for AI was developed in July 2017, followed by a working-level conference at the end of the year. A policy paper was published earlier this year, providing guidelines to industries and businesses on how to advance China’s AI capabilities in the years to come. As one of the largest state-owned enterprises (SOEs), China Mobile shoulders the expectation to lead the charge. This R&D partnership with Nokia will help.

This is also a good move by Nokia, not least because of the importance of the market and the level of investment expected out of China in AI and machine learning using 5G technologies. What Nokia takes away from the partnership will potentially help enhance its capabilities when serving other customers.

But here is the catch. Machines learn by being fed large amounts of data, yet no operator in the world, China Mobile included, can guarantee that its data accurately reflects what is happening on its networks. The margin of error varies; some operators’ data is up to 40 per cent off the mark, by their own admission, which will seriously compromise the intelligence generated artificially.

Nokia invests in its IoT portfolio with SpaceTime Insight acquisition

Networking vendor Nokia has snapped-up machine learning-powered analytics firm SpaceTime Insight, which it says will augment its IoT offering.

SpaceTime Insight specialises in the use of predictive analytics to manage and optimize the use of enterprise assets. It has packaged all this cleverness into an IoT platform and it’s this application that seems to have caught Nokia’s eye. Specifically Nokia is going to integrate SpaceTime into its IoT software portfolio and expects to produce better IoT apps as a consequence.

“Adding SpaceTime to Nokia Software is a strong step forward in our strategy, and will help us deliver a new class of intelligent solutions to meet the demands of an increasingly interconnected world,” said Bhaskar Gorti, president of Nokia Software. “Together, we can empower customers to realize the full value of their people, processes and assets, and enable them to deliver rich, world-class digital experiences.”

“Today marks a transformational moment for SpaceTime, and I’m delighted to join forces with one of the world’s top organizations, a global brand that is reshaping the future of networking and intelligent software,” said Rob Schilling, CEO of SpaceTime Insight, who’s hanging around. “I am excited for this incredible opportunity to help accelerate and scale Nokia’s IoT business and provide a new class of next-generation IoT solutions customers cannot find anywhere else.”

It has been a busy start to the week for Nokia. On the software side its Nuage SDN division announced a deal win with Telefónica Spain to software-define its datacentres. This is an extension of an SD-WAN rollout last year and the usual claims of agility, scalability and efficiency apply.

“To meet the rapidly emerging business requirements for agility and on-demand deployments, we moved aggressively to build our business connectivity services around a new cloud-based architecture,” said Joaquín Mata, director of operations, network and IT at Telefónica Spain. “Nuage Networks provided us with a highly scalable SDN architecture that could support all our services across all our regions without disruption. We are confident our customers will significantly improve their businesses with these new cloud-based services.”

Lastly Nokia has got together with French operator SFR to claim the first French 5G NR call over the 3.5 GHz spectrum. It was a test conducted at Nokia’s Paris campus and seems to be a pretty standard affair, designed as much to give the protagonists some 5G kudos as anything else.

“SFR is developing a roadmap for the evolution of its networks that takes into account the benefits and complexity of implementing 5G,” said François Vincent, head of Mobile Network at SFR. “The joint projects and trials will enable us to meet future data demand in the most effective way, while exploring new ways to deliver our media content that will increase the subscriber experience.”

Google exec ditches Moonshot labs for hippy life

Mo Gawdat, former Chief Business Officer of Google [X], has left the life of cutting-edge technology to spread the message of peace, love and happiness.

It is quite a turnaround: leaving one of the world’s most ruthless profit-generating machines to start #onebillionhappy, an organization with the simple-sounding objective of making one billion people happier in their day-to-day lives. While this might sound very optimistic and generic, there is an underlying philosophy which ties quite neatly back into his background at Google, IBM and Microsoft.

“Artificial intelligence is real, it’s here,” Gawdat said in a LinkedIn video promoting his new mission (which you can see at the bottom of this article). “Those machines are developing partial intelligence that way surpasses our human intelligence. They see better, they hear better and sometimes they even reason better. Over the next fifteen to twenty years this is going to develop a computer that is much smarter than all of us. We call that moment singularity.

“Singularity is a moment beyond which we can no longer see, we can no longer forecast. The development of the world beyond the moment where machines are smarter than we are is highly unpredictable. Everything is possible. Those machines can solve every problem we’ve ever faced. Or they can actually decide we are the problem and get rid of us.”

This is the sci-fi concept which Gawdat is using as the basis of his new organization. If machines are to rise up and take over the world, the blame can only lie with ourselves. Human nature could very well be the foundation of its own downfall.

It sounds like a conspiracy theory, usually relegated to the comment boards of sci-fi websites, but there is some logic to it, so stick with us for a second. By 2029 artificially intelligent machines are predicted to surpass human intelligence, and by 2049 AI is predicted to be a billion times more intelligent than us. These machines are being designed with machine learning technologies at their core. The vision is to create intelligence which can demonstrate self-learning, self-governance, self-repair and self-reliance. But this is where the problem lies.

The basic concept of machine learning is that data is absorbed, and characteristics, capabilities and protocols are adjusted in light of this new information.
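That absorb-and-adjust loop is easiest to see in the simplest of learners. The toy perceptron update below is an illustrative textbook example, nothing to do with Gawdat’s work: the parameters only move when a new observation contradicts the current prediction.

```python
def perceptron_update(weights, features, label, lr=0.1):
    """One online learning step: absorb a labelled observation and adjust
    the weights only if the current prediction is wrong."""
    score = sum(w * x for w, x in zip(weights, features))
    prediction = 1 if score > 0 else -1
    if prediction != label:
        weights = [w + lr * label * x for w, x in zip(weights, features)]
    return weights

# Feed two labelled observations repeatedly; the weights settle on values
# that classify both correctly.
w = [0.0, 0.0]
data = [((1.0, 2.0), 1), ((1.0, -2.0), -1)]
for _ in range(10):
    for features, label in data:
        w = perceptron_update(w, features, label)
```

The learner has no notion of whether its training data is kind or cruel; it simply moves towards whatever it is shown, which is exactly Gawdat’s worry writ small.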

“How are those machines learning? They’re looking at the knowledge which is out there in the world and they’re building patterns from that,” said Gawdat. “Just like an 18 month old infant.

“We basically write algorithms which allow computers to understand those patterns. Through pattern recognition and billions of observations, they learn. They’re learning by observing. And what are they observing? They’re observing a world that is full of greed, disregard for other species, violence, ego, showing off.”

A computer learning to hate because all it sees is hate might sound far-fetched and the basis for a George Orwell novel, but we have already experienced how our terrible behaviour can negatively influence artificial intelligence. Who remembers Tay?

Back in 2016 Microsoft unveiled Tay, a Twitter bot which was supposed to learn from interactions, comments and trends from the social media platform. Thanks to online trolls and general social media bad behaviour Tay picked up some awful habits, and this is a bit of an understatement. Within 24 hours Tay went from being friendly and inoffensive, to comparing feminism to a cult and worshipping Adolf Hitler.

At the time it was an experiment which went horribly wrong and was entertaining for a moment, but it does show the danger of the way in which we act online. It is entirely possible for this influence to have a very negative impact on the digital economy of tomorrow.

Right now the danger is not as apparent. Artificial intelligence is still in its very early days, and what is being fed into the machines is structured data. The vast majority of the information available on the web is unstructured, so we are still able to control the flow of this information. But AI is advancing incredibly quickly and it won’t be long before these machines are intelligent enough to interpret any form of data, videos or pictures for example.

This is where it becomes a bit more complicated: how do you programme an understanding of sentiment? Or how about context? Those are some of the simpler ones, but how about sarcasm? How do you implant our emotional intelligence, which we don’t really understand ourselves, into a machine so it understands the various nuances of human interaction? This is complicated.

Of course if you let the machines run wild, this will create all sorts of problems. The trick will be to programme policies into the machines from the beginning, but as Ericsson’s Ulrika Jägare pointed out to us at Mobile World Congress, finding the right balance between maintaining integrity and creating the flexibility to allow the machine to be creative is a tricky one. To prevent world domination, strict parameters will have to be set, but will these parameters prevent the machines from using this extraordinary intelligence to its full potential and helping to advance the human race? Catch-22.

Maybe Gawdat is right; we just need to be nicer:

“How do we contain them, we don’t contain them at all,” said Gawdat. “The best way to raise wonderful children is to be wonderful parents. It’s not the inventor of the technology which is going to set the tone moving forward, it’s the technology itself that’s going to use the knowledge, the values that we communicate to it, to develop its own intelligence.”

Arm launches dedicated chip designs for machine learning

UK mobile chip design giant Arm has created specialised chip designs specifically for machine learning and object detection.

Arm, which at some stage in the past few months seems to have decided its name is no longer an abbreviation of Advanced RISC Machines and is instead a type of limb, is best known for providing the designs for mobile chips such as applications and baseband processors. As the tech world gets increasingly keen on artificial intelligence and mobile edge computing, it makes sense for Arm to get involved at a silicon level.

This has taken the form of Project Trillium, which is described as ‘a suite of Arm IP including new highly scalable processors that will deliver enhanced machine learning (ML) and neural network (NN) functionality.’ The point of it seems to be to equip mobile devices with a degree of autonomous (as opposed to cloud-based) machine learning capability that they currently lack.

“The rapid acceleration of artificial intelligence into edge devices is placing increased requirements for innovation to address compute while maintaining a power efficient footprint,” said Rene Haas, President of the IP Products Group at Arm. “To meet this demand, Arm is announcing its new ML platform, Project Trillium. New devices will require the high-performance ML and AI capabilities these new processors deliver.”

The main chip design is the ML one, which puts a premium on scalability, presumably meaning more chips equals more ML power. On top of that Arm has launched a distinct design for object detection, which covers things like facial recognition and the detection of other objects via the device’s camera. The two apparently perform even better in combination, and better still when you throw in Arm’s special neural network software.

Jem Davies, Arm’s GM of ML, has blogged on the launch and unsurprisingly thinks ML is the biggest thing since sliced bread. “In my opinion, the growth of machine learning represents the biggest inflection point in computing for more than a generation,” he blogged. “It will have a massive effect on just about every segment I can think of. People ask me which segments will be affected by ML, and I respond that I can’t think of one that won’t be.”

As a scuba diver, Davies chose a diving illustration to show how cool life could be when everything has ML chips embedded in it. You could have a heads-up display in your mask that provides real-time augmented reality information and even automated action, such as defensive counter-measures should a shark suddenly turn up unannounced.


AI, and the various other bits of computer cleverness generally associated with it, is very much in vogue in the mobile space these days. We’ve been broken in gently by cloud-driven smart assistants like Siri, but enabling much of that processing to be done locally offers clear advantages. On the back of Project Trillium expect chip vendors, and consequently device vendors, to be offering novel AI features before long.

Nvidia unveils Titan V with 110 Teraflops of deep learning power

Nvidia has unleashed a new desktop GPU, claiming the beast tames 110 teraflops of horsepower under the hood, a moody nine times that of its puny predecessor.

Designed for computational processing for machine learning researchers, developers and data scientists, its 21.1 billion transistors deliver 110 teraflops of processing power, nine times that of the Titan X, along with what the company describes as ‘extreme energy efficiency’. The technology version of roid heads must be frothing at the mouth.

“Our vision for Volta was to push the outer limits of high performance computing and AI. We broke new ground with its new processor architecture, instructions, numerical formats, memory architecture and processor links,” said CEO Jensen Huang.

“With TITAN V, we are putting Volta into the hands of researchers and scientists all over the world. I can’t wait to see their breakthrough discoveries.”

So where does the extra power come from? Nvidia points to a redesign of the streaming multiprocessor at the centre of the GPU, which it claims doubles energy efficiency compared to the previous generation, resulting in the boost in performance within the same power envelope. The team also highlights independent parallel integer and floating-point data paths, as well as a new combined L1 data cache and shared memory unit which apparently improves performance and simplifies programming.

Some might suggest it is a step backwards, as this is a product designed for local use rather than the cloud, but there will be those who prefer the convenience of running workloads on a local machine. Customers will be able to connect to the Nvidia GPU Cloud to make use of software updates, including Nvidia-optimized deep learning frameworks and third-party managed HPC applications. And all this for a cool $2,999.

Amazon launches a bunch of machine learning goodies for developers

At the AWS Re:Invent event Amazon Web Services served up a large number of initiatives designed to make machine learning more accessible to developers and data scientists.

Even these are just a small part of an orgy of announcements from the enterprise cloud market leader, with nine press releases sent out yesterday alone. The plucky hacks at Enterprise Cloud News are living the AWS dream over in Vegas and even managing to extract themselves from the casinos and night clubs to cover the event, so we’ll leave the in-depth stuff to them.

Here are the six bits of machine learning cleverness revealed in the culminating release:

  • Amazon SageMaker is a managed service to help developers and data scientists to build, train, deploy, and manage their own machine learning models
  • AWS DeepLens is a deep learning-enabled wireless video camera that can run real-time computer vision models
  • Amazon Transcribe is an AI-infused speech-to-text tool that can deal with low quality audio
  • Amazon Translate does what it says on the tin using state of the art neural machine translation techniques, apparently
  • Amazon Comprehend is an intriguing tool that is designed to understand natural language, including nuance, context, etc, and analyse it
  • Amazon Rekognition Video provides real-time recognition of faces and other objects from video, which can then be indexed and analysed.

“Our original vision for AWS was to enable any individual in his or her dorm room or garage to have access to the same technology, tools, scale, and cost structure as the largest companies in the world. Our vision for machine learning is no different,” said Swami Sivasubramanian, VP of Machine Learning, AWS.

“We want all developers to be able to use machine learning much more expansively and successfully, irrespective of their machine learning skill level. Amazon SageMaker removes a lot of the muck and complexity involved in machine learning to allow developers to easily get started and become competent in building, training, and deploying models.”

As ever, advances in AI and machine learning bring with them a conflicting mixture of awe and unease. This latest batch of announcements from Amazon seems to be heavy on the surveillance side of things, in so much as they will make it easier for other people to track our activities. Anyone already concerned about data privacy is unlikely to be reassured by this sort of thing.

Self-taught Google AI programme trounces previous human-taught one

Is it OK to start worrying yet?

Google has proudly announced that a new version of its computer programme designed to play the Chinese game Go, called AlphaGo Zero, has beaten the previous version 100-0. That previous version, in turn, had beaten 18-time world Go champion Lee Sedol by four games to one last year. It then went on to beat an even more powerful version.

The big difference between new, improved AlphaGo and the previous ones is that humans have been taken out of the loop. Rather than feed the programme data from loads of previously-played games between actual people, which was how the previous version got so good at it, this time they just gave the programme the basic rules and instructed it to play itself. Within a relatively short period of time this resulted in the new AlphaGo comfortably surpassing the previous one.
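The rules-plus-self-play recipe can be caricatured in miniature by swapping Go for a trivial game (Nim: take one to three stones, taking the last stone wins) and the neural network for a lookup table. This is a toy sketch of the idea only, bearing no resemblance to the real system’s architecture or scale:

```python
import random

def train_self_play(episodes=5000, pile=7, seed=0):
    """Tabular self-play learning on one-pile Nim. The agent is given only
    the rules and plays against itself; Q[(stones, move)] estimates the value
    of a move for the player whose turn it is."""
    rng = random.Random(seed)
    Q = {}
    for _ in range(episodes):
        stones, history = pile, []
        while stones > 0:
            moves = [m for m in (1, 2, 3) if m <= stones]
            if rng.random() < 0.2:  # occasionally explore a random move
                move = rng.choice(moves)
            else:                   # otherwise exploit current estimates
                move = max(moves, key=lambda m: Q.get((stones, m), 0.0))
            history.append((stones, move))
            stones -= move
        # The player who made the last move won; credit alternates backwards
        # through the game, since the two "players" are the same table.
        reward = 1.0
        for state_move in reversed(history):
            old = Q.get(state_move, 0.0)
            Q[state_move] = old + 0.1 * (reward - old)
            reward = -reward
    return Q
```

Even this crude loop rediscovers basic tactics (take the last stone when you can; never leave your opponent an immediate win) purely from playing itself, with no human games in sight.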


“The system starts off with a neural network that knows nothing about the game of Go,” says the blog written by DeepMind, Google’s AI division. “It then plays games against itself, by combining this neural network with a powerful search algorithm. As it plays, the neural network is tuned and updated to predict moves, as well as the eventual winner of the games.

“This updated neural network is then recombined with the search algorithm to create a new, stronger version of AlphaGo Zero, and the process begins again. In each iteration, the performance of the system improves by a small amount, and the quality of the self-play games increases, leading to more and more accurate neural networks and ever stronger versions of AlphaGo Zero.

“Over the course of millions of AlphaGo vs AlphaGo games, the system progressively learned the game of Go from scratch, accumulating thousands of years of human knowledge during a period of just a few days. AlphaGo Zero also discovered new knowledge, developing unconventional strategies and creative new moves that echoed and surpassed the novel techniques it played in the games against Lee Sedol and Ke Jie.”


“These moments of creativity give us confidence that AI will be a multiplier for human ingenuity, helping us with our mission to solve some of the most important challenges humanity is facing,” concluded the blog.

“While it is still early days, AlphaGo Zero constitutes a critical step towards this goal. If similar techniques can be applied to other structured problems, such as protein folding, reducing energy consumption or searching for revolutionary new materials, the resulting breakthroughs have the potential to positively impact society.”

As you can see from the above quotes and the videos below, DeepMind is pumped about this autonomous AI breakthrough and quite rightly points to all the computational challenges this might help overcome. But the alarmist luddite in us can’t help feeling a tad uneasy about the precedent set by machines teaching themselves independently of humans. Where will it end? We’ll let Hollywood give you the worst-case scenario in the last two videos.