What to bin and what to keep: the big data conundrum

Figuring out which data is valuable and binning the rest has been a challenge for the telco industry, but here’s an interesting dilemma: how do you know the unknown value of data for the use cases of tomorrow?

This was broadly one of the topics of conversation at Light Reading’s Software Defined Operations & the Autonomous Network event in London. Everyone in the industry knows that data is going to be a big thing, but the influx of questions is almost as overwhelming as the number of data sets available.

“90% of the data we collect is useless 90% of the time,” said Tom Griffin of SevOne.

This opens the floodgates of questions. Why do you want to collect certain data sets? How frequently are you going to collect the data? Where is it going to be stored? What are the regulatory requirements? How in-depth does the data need to be for the desired use? What do you do with the redundant data? Will it still be redundant in the future? What is the consequence of binning data which might become valuable in the future? How long do you keep information in the hope it will one day become useful?

For all the promise of data analytics and artificial intelligence in the industry, the telcos have barely stepped off the starting block. For Griffin and John Clowers of Cisco, identifying the specific use case is key. While this might sound obvious, it’s amazing how many people are still floundering; once the use case has been identified, machine learning and artificial intelligence become critically important.

As Clowers pointed out, with ML and AI data can be analysed in near real-time as it is collected, assigned to the right storage environment (public, private or traditional, depending on regulatory requirements) and then routed to the right data lakes or ponds (depending on why the data was collected in the first place). With the right algorithms in place, the process of classifying and storing information can be automated, freeing up engineers’ time to add value while also keeping an eye on costs. With the sheer volume of information being collected increasing very quickly, storage costs could rise rapidly.
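To make that concrete, the classify-and-route step Clowers describes could look something like the minimal sketch below. The record fields, storage tiers and thresholds are illustrative assumptions, not anything Cisco or SevOne has published.

```python
# Hypothetical sketch of automated classify-and-route logic for collected
# network data; field names, tiers and thresholds are illustrative only.
from dataclasses import dataclass

@dataclass
class Record:
    source: str          # e.g. "radio", "core", "billing"
    contains_pii: bool   # personally identifiable information?
    relevance: float     # 0..1 score from an upstream ML model

def route(record: Record) -> str:
    """Pick a storage destination as the record is collected."""
    if record.contains_pii:
        return "private-cloud/regulated-lake"   # regulatory requirements
    if record.relevance >= 0.7:
        return "public-cloud/analytics-lake"    # feeds the active use case
    if record.relevance >= 0.2:
        return "cold-storage/pond"              # keep cheaply, just in case
    return "discard"                            # the 90% that is useless

print(route(Record(source="radio", contains_pii=False, relevance=0.85)))
# -> public-cloud/analytics-lake
```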

And this is before the 5G and IoT trends have really kicked in. If telcos are struggling with the data demands of today, how are they going to cope with the tsunami of information which is almost guaranteed in tomorrow’s digital economy?

Which brings us back to the original point. If you have to be selective with the information you keep, how do you know what information will be valuable for the use cases of tomorrow? And what will be the cost of not having this data?

Nokia’s tour of China continues with China Mobile 5G AI partnership

Nokia has expanded its relationship with the world’s largest mobile operator China Mobile to jointly develop artificial intelligence (AI) and machine learning capability on 5G.

It has been a busy few days at Nokia. One day after its collaboration with Tencent was announced, Nokia’s new R&D-related MOU in China is focused on AI and machine learning in a 5G environment, and this time the joint lab will be located in Hangzhou, east China. China Mobile will define use cases as well as standardise the open APIs for third-party partners, while Nokia will develop and verify demo solutions using its 5G technologies, including its Cloud RAN and open edge server.

China is aiming to lead the world’s AI industry. A national strategy for AI was developed in July 2017, which was followed by a working-level conference at the end of the year. A policy paper was published earlier this year, providing guidelines to industries and businesses on how to advance China’s AI capabilities in the years to come. As one of the largest state-owned enterprises (SOEs), China Mobile shoulders the expectation to lead the charge. This R&D partnership with Nokia will help.

This is also a good move by Nokia, not least because of the importance of the market and the level of investment expected out of China into AI and machine learning using 5G technologies. What Nokia takes away from the partnership will potentially help enhance its capabilities when serving other customers.

But here is the catch. Machines learn by being fed large amounts of data. However, no operator in the world, China Mobile included, can guarantee its data accurately reflects what is happening on its networks. The margin of error varies; some operators’ data is up to 40 per cent off the mark, by their own admission, which will seriously compromise the intelligence generated artificially.

Nokia invests in its IoT portfolio with SpaceTime Insight acquisition

Networking vendor Nokia has snapped-up machine learning-powered analytics firm SpaceTime Insight, which it says will augment its IoT offering.

SpaceTime Insight specialises in the use of predictive analytics to manage and optimize enterprise assets. It has packaged all this cleverness into an IoT platform and it’s this application that seems to have caught Nokia’s eye. Specifically, Nokia is going to integrate SpaceTime into its IoT software portfolio and expects to produce better IoT apps as a consequence.

“Adding SpaceTime to Nokia Software is a strong step forward in our strategy, and will help us deliver a new class of intelligent solutions to meet the demands of an increasingly interconnected world,” said Bhaskar Gorti, president of Nokia Software. “Together, we can empower customers to realize the full value of their people, processes and assets, and enable them to deliver rich, world-class digital experiences.”

“Today marks a transformational moment for SpaceTime, and I’m delighted to join forces with one of the world’s top organizations, a global brand that is reshaping the future of networking and intelligent software,” said Rob Schilling, CEO of SpaceTime Insight, who’s hanging around. “I am excited for this incredible opportunity to help accelerate and scale Nokia’s IoT business and provide a new class of next-generation IoT solutions customers cannot find anywhere else.”

It has been a busy start to the week for Nokia. On the software side its Nuage SDN division announced a deal win with Telefónica Spain to software-define its datacentres. This is an extension of an SD-WAN rollout last year and the usual claims of agility, scalability and efficiency apply.

“To meet the rapidly emerging business requirements for agility and on-demand deployments, we moved aggressively to build our business connectivity services around a new cloud-based architecture,” said Joaquín Mata, director of operations, network and IT at Telefónica Spain. “Nuage Networks provided us with a highly scalable SDN architecture that could support all our services across all our regions without disruption. We are confident our customers will significantly improve their businesses with these new cloud-based services.”

Lastly Nokia has got together with French operator SFR to claim the first French 5G NR call over the 3.5 GHz spectrum. It was a test conducted at Nokia’s Paris campus and seems to be a pretty standard affair, designed as much to give the protagonists some 5G kudos as anything else.

“SFR is developing a roadmap for the evolution of its networks that takes into account the benefits and complexity of implementing 5G,” said François Vincent, head of Mobile Network at SFR. “The joint projects and trials will enable us to meet future data demand in the most effective way, while exploring new ways to deliver our media content that will increase the subscriber experience.”

Google exec ditches Moonshot labs for hippy life

Mo Gawdat, former Chief Business Officer for Google [X], has left the life of cutting edge technology to spread the message of peace, love and happiness.

It is quite a turnaround. Leaving one of the world’s most ruthless profit generating machines to start #onebillionhappy, an organization with the simple sounding objective of making one billion people happier in their day-to-day lives. While this might sound very optimistic and generic, there is an underlying philosophy which ties quite neatly back into his background at Google, IBM and Microsoft.

“Artificial intelligence is real, it’s here,” Gawdat said in a LinkedIn video promoting his new mission (which you can see at the bottom of this article). “Those machines are developing partial intelligence that way surpasses our human intelligence. They see better, they hear better and sometimes they even reason better. Over the next fifteen to twenty years this is going to develop a computer that is much smarter than all of us. We call that moment singularity.

“Singularity is a moment beyond which we can no longer see, we can no longer forecast. The development of the world beyond the moment where machines are smarter than we are is highly unpredictable. Everything is possible. Those machines can solve every problem we’ve ever faced. Or they can actually decide we are the problem and get rid of us.”

This is the sci-fi concept which Gawdat is using as the basis of his new organization. If machines are to rise up and take over the world, the blame can only lie with ourselves. Human nature could very well be the foundation of its own downfall.

It sounds like a conspiracy theory, usually relegated to the comment boards of sci-fi websites, but there is some logic, so stick with us for a second. By 2029 artificially intelligent machines are predicted to surpass human intelligence, and by 2049 AI is predicted to be a billion times more intelligent than us. These machines are being designed with machine learning technologies at the core. The vision is to create intelligence which can demonstrate self-learning, self-governance, self-repair and self-reliance. But this is where the problem lies.

The basic concept of machine learning is that data is absorbed and characteristics, capabilities and protocols are adjusted in light of this new information.
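As a toy illustration of that absorb-and-adjust loop, the sketch below shows an online perceptron nudging its weights after each observation it gets wrong; the data stream and learning rate are purely illustrative assumptions.

```python
# Toy example of 'absorb data, adjust behaviour': an online perceptron
# updates its weights whenever a new observation proves it wrong.
import random

weights, bias, lr = [0.0, 0.0], 0.0, 0.1   # lr = learning rate (illustrative)

def predict(x):
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

# Stream of observations: the underlying pattern is "is x0 greater than x1?"
for _ in range(1000):
    x = [random.random(), random.random()]
    label = 1 if x[0] > x[1] else 0
    error = label - predict(x)
    if error:  # adjust characteristics in light of the new information
        weights = [w + lr * error * xi for w, xi in zip(weights, x)]
        bias += lr * error

print(weights, bias)   # the learned behaviour reflects what was observed
```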

“How are those machines learning? They’re looking at the knowledge which is out there in the world and they’re building patterns from that,” said Gawdat. “Just like an 18 month old infant.

“We basically write algorithms which allow computers to understand those patterns. Through pattern recognition and billions of observations, they learn. They’re learning by observing. And what are they observing? They’re observing a world that is full of greed, disregard for other species, violence, ego, showing off.”

A computer learning to hate because all it sees is hate might sound far-fetched and the basis for a George Orwell novel, but we have already experienced how our terrible behaviour can negatively influence artificial intelligence. Who remembers Tay?

Back in 2016 Microsoft unveiled Tay, a Twitter bot which was supposed to learn from interactions, comments and trends from the social media platform. Thanks to online trolls and general social media bad behaviour Tay picked up some awful habits, and this is a bit of an understatement. Within 24 hours Tay went from being friendly and inoffensive, to comparing feminism to a cult and worshipping Adolf Hitler.

At the time it was an experiment which went horribly wrong and was entertaining for a moment, but it does show the danger of the way in which we act online. It is entirely possible for this influence to have a very negative impact on the digital economy of tomorrow.

Right now the danger is not as apparent. Artificial intelligence is still in its very early days, and what is being fed into the machines is structured data. The vast majority of the information available on the web is unstructured, so we are able to control the flow of this information. But AI is advancing incredibly quickly, and it won’t be long before these machines are intelligent enough to interpret any form of data, videos or pictures for example.

This is where it becomes a bit more complicated: how do you programme an understanding of sentiment? Or how about context? Those are the simple ones, but how about sarcasm? How do you implant our emotional intelligence, which we don’t really understand ourselves, into a machine so it understands the various nuances of human interaction? This is complicated.

Of course if you let the machines run wild, this will create all sorts of problems. The trick will be to programme policies into the machines from the beginning, but as Ericsson’s Ulrika Jägare pointed out to us at Mobile World Congress, finding the right balance between maintaining integrity and allowing the machine the flexibility to be creative is a tricky one. To prevent world domination, strict parameters will have to be set, but will these parameters prevent the machines from using this extraordinary intelligence to its full potential and helping to advance the human race? Catch-22.

Maybe Gawdat is right; we just need to be nicer:

“How do we contain them, we don’t contain them at all,” said Gawdat. “The best way to raise wonderful children is to be wonderful parents. It’s not the inventor of the technology which is going to set the tone moving forward, it’s the technology itself that’s going to use the knowledge, the values that we communicate to it, to develop its own intelligence.”

Arm launches dedicated chip designs for machine learning

UK mobile chip design giant Arm has created specialised chip designs specifically for machine learning and object detection.

Arm, which at some stage in the past few months seems to have decided its name is no longer an abbreviation of Advanced RISC Machines and is instead a type of limb, is best known for providing the designs for mobile chips such as applications and baseband processors. As the tech world gets increasingly keen on artificial intelligence and mobile edge computing, it makes sense for Arm to get involved at a silicon level.

This has taken the form of Project Trillium, which is described as ‘a suite of Arm IP including new highly scalable processors that will deliver enhanced machine learning (ML) and neural network (NN) functionality.’ The point of it seems to be to equip mobile devices with a degree of autonomous (as opposed to cloud-based) machine learning capability that they currently lack.

“The rapid acceleration of artificial intelligence into edge devices is placing increased requirements for innovation to address compute while maintaining a power efficient footprint,” said Rene Haas, President of the IP Products Group at Arm. “To meet this demand, Arm is announcing its new ML platform, Project Trillium. New devices will require the high-performance ML and AI capabilities these new processors deliver.”

The main chip design is the ML one, which puts a premium on scalability – presumably meaning more chips equals more ML power. On top of that Arm has launched a distinct design for object detection, which covers things like facial recognition and the detection of other objects via the device’s camera. The two apparently perform even better in combination, and better still when you throw in special Arm neural network software.

Jem Davies, Arm’s GM of ML, has blogged on the launch and unsurprisingly thinks ML is the biggest thing since sliced bread. “In my opinion, the growth of machine learning represents the biggest inflection point in computing for more than a generation,” he blogged. “It will have a massive effect on just about every segment I can think of. People ask me which segments will be affected by ML, and I respond that I can’t think of one that won’t be.”

As a scuba diver Davies chose a diving illustration to show how cool life could be when everything has ML chips embedded in it. You could have a heads-up-display in your mask that provides real-time augmented reality information and even automated action, such as defensive counter-measures if a shark should suddenly turn up unannounced.

[Image: Arm ML object detection and augmented reality concept]

AI, and the various other bits of computer cleverness generally associated with it, is very much in vogue in the mobile space these days. We’ve been broken in gently by cloud-driven smart assistants like Siri, but enabling much of that processing to be done locally offers clear advantages. On the back of Project Trillium expect chip vendors, and consequently device vendors, to be offering novel AI features before long.

Nvidia unveils Titan V with 110 Teraflops of deep learning power

Nvidia has unleashed a new desktop GPU, with claims the beast is taming 110 teraflops of horsepower under the hood, some nine times that of its puny predecessor.

Designed for computational processing by machine learning researchers, developers and data scientists, its 21.1 billion transistors can deliver 110 teraflops of processing power, nine times more than the Titan X, with what the company describes as ‘extreme energy efficiency’. The technology version of roid heads must be frothing at the mouth.

“Our vision for Volta was to push the outer limits of high performance computing and AI. We broke new ground with its new processor architecture, instructions, numerical formats, memory architecture and processor links,” said CEO Jensen Huang.

“With TITAN V, we are putting Volta into the hands of researchers and scientists all over the world. I can’t wait to see their breakthrough discoveries.”

So where does the extra power come from? Nvidia has pointed towards a redesign of the streaming multiprocessor at the centre of the GPU, which it claims doubles energy efficiency compared to the previous generation and so boosts performance within the same power envelope. The team has also highlighted independent parallel integer and floating-point data paths, as well as a new combined L1 data cache and shared memory unit, which apparently improves performance and simplifies programming.

Some might suggest it is a step backwards, as this is a product designed for local use rather than the cloud, but there will be those who prefer the convenience of running workloads on a local machine. Customers will be able to connect to the Nvidia GPU Cloud to make use of software updates, including Nvidia-optimized deep learning frameworks and third-party managed HPC applications. And all this for a cool $2,999.

Amazon launches a bunch of machine learning goodies for developers

At the AWS Re:Invent event Amazon Web Services served up a large number of initiatives designed to make machine learning more accessible to developers and data scientists.

Even these are just a small part of an orgy of announcements from the enterprise cloud market leader, with nine press releases sent out yesterday alone. The plucky hacks at Enterprise Cloud News are living the AWS dream over in Vegas and even managing to extract themselves from the casinos and night clubs to cover the event, so we’ll leave the in-depth stuff to them.

Here are the six bits of machine learning cleverness revealed in the culminating release:

  • Amazon SageMaker is a managed service to help developers and data scientists to build, train, deploy, and manage their own machine learning models
  • AWS DeepLens is a deep learning-enabled wireless video camera that can run real-time computer vision models
  • Amazon Transcribe is an AI-infused speech-to-text tool that can deal with low quality audio
  • Amazon Translate does what it says on the tin using state of the art neural machine translation techniques, apparently
  • Amazon Comprehend is an intriguing tool that is designed to understand natural language, including nuance, context, etc, and analyse it (see the sketch after this list)
  • Amazon Rekognition Video provides real-time recognition of faces and other objects from video, which can then be indexed and analysed.
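For a flavour of how a developer might use one of these from code, here is a minimal sketch calling Comprehend’s sentiment analysis through the boto3 SDK; the region, credentials setup and sample text are assumptions for illustration.

```python
# Minimal sketch: sentiment analysis with Amazon Comprehend via boto3.
# Assumes AWS credentials are already configured in the environment.
import boto3

comprehend = boto3.client("comprehend", region_name="us-east-1")

resp = comprehend.detect_sentiment(
    Text="The network coverage on my commute has been brilliant lately.",
    LanguageCode="en",
)

print(resp["Sentiment"])        # e.g. POSITIVE
print(resp["SentimentScore"])   # confidence scores per sentiment class
```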

“Our original vision for AWS was to enable any individual in his or her dorm room or garage to have access to the same technology, tools, scale, and cost structure as the largest companies in the world. Our vision for machine learning is no different,” said Swami Sivasubramanian, VP of Machine Learning, AWS.

“We want all developers to be able to use machine learning much more expansively and successfully, irrespective of their machine learning skill level. Amazon SageMaker removes a lot of the muck and complexity involved in machine learning to allow developers to easily get started and become competent in building, training, and deploying models.”

As ever, advances in AI and machine learning bring with them a conflicting mixture of awe and unease. This latest batch of announcements from Amazon seems to be heavy on the surveillance side of things, in so much as they will make it easier for other people to track our activities. Anyone already concerned about data privacy is unlikely to be reassured by this sort of thing.

Self-taught Google AI programme trounces previous human-taught one

Is it OK to start worrying yet?

Google has proudly announced that a new version of its computer programme designed to play the Chinese game Go, called AlphaGo Zero, has beaten the previous version 100-0. That previous version, in turn, had beaten 18-time world Go champion Lee Sedol by four games to one last year. AlphaGo Zero then went on to beat an even more powerful version of the programme.

The big difference between new, improved AlphaGo and the previous ones is that humans have been taken out of the loop. Rather than feed the programme data from loads of previously-played games between actual people, which was how the previous version got so good at it, this time they just gave the programme the basic rules and instructed it to play itself. Within a relatively short period of time this resulted in the new AlphaGo comfortably surpassing the previous one.
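In outline, that self-play loop looks something like the sketch below. The game, network and search objects are hypothetical placeholders standing in for DeepMind’s actual Go engine, Monte Carlo tree search and neural network, so treat this as a rough shape rather than the real method.

```python
# Highly simplified, hypothetical sketch of a self-play training loop.
# `game_cls`, `net` and `search` are placeholders the caller must supply:
# a game with state()/play()/is_over()/winner(), a trainable policy-value
# network, and a search routine (e.g. MCTS) guided by that network.
def self_play_training(net, game_cls, search, iterations=100, games_per_iter=500):
    for _ in range(iterations):
        examples = []
        for _ in range(games_per_iter):
            game, history = game_cls(), []
            while not game.is_over():
                move, move_probs = search(game, net)       # search guided by the current net
                history.append((game.state(), move_probs))
                game.play(move)
            outcome = game.winner()
            # label every position with the eventual result of the game
            examples += [(state, probs, outcome) for state, probs in history]
        net = net.train_on(examples)   # tune the network on its own games
    return net                         # each pass through the loop produces a stronger player
```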

[Chart: AlphaGo Zero training time]

“The system starts off with a neural network that knows nothing about the game of Go,” says the blog written by DeepMind, Google’s AI division. “It then plays games against itself, by combining this neural network with a powerful search algorithm. As it plays, the neural network is tuned and updated to predict moves, as well as the eventual winner of the games.

“This updated neural network is then recombined with the search algorithm to create a new, stronger version of AlphaGo Zero, and the process begins again. In each iteration, the performance of the system improves by a small amount, and the quality of the self-play games increases, leading to more and more accurate neural networks and ever stronger versions of AlphaGo Zero.

“Over the course of millions of AlphaGo vs AlphaGo games, the system progressively learned the game of Go from scratch, accumulating thousands of years of human knowledge during a period of just a few days. AlphaGo Zero also discovered new knowledge, developing unconventional strategies and creative new moves that echoed and surpassed the novel techniques it played in the games against Lee Sedol and Ke Jie.”

[Chart: knowledge timeline]

“These moments of creativity give us confidence that AI will be a multiplier for human ingenuity, helping us with our mission to solve some of the most important challenges humanity is facing,” concluded the blog.

“While it is still early days, AlphaGo Zero constitutes a critical step towards this goal. If similar techniques can be applied to other structured problems, such as protein folding, reducing energy consumption or searching for revolutionary new materials, the resulting breakthroughs have the potential to positively impact society.”

As you can see from the above quotes and the videos below, DeepMind is pumped about this autonomous AI breakthrough and quite rightly points to all the computational challenges this might help overcome. But the alarmist luddite in us can’t help feeling a tad uneasy about the precedent set by machines teaching themselves independently of humans. Where will it end? We’ll let Hollywood give you the worst-case scenario in the last two videos.

 

Microsoft and AWS buddy up to democratise machine learning

Microsoft and AWS have teamed up to launch an open-source deep learning library called Gluon, which will make machine learning more accessible to a wider range of developers.

That will be the key to winning the machine learning race: not making the most complex and advanced ML proposition, but the one which is most accessible to developers of all abilities. This democratization has even brought Microsoft and AWS together, two companies which are usually heated rivals in the cloud game.

For the most part, the immensely complex operations which have been built up in our minds for AI will be the exception, not the rule. Self-driving cars, for instance, will be an incredibly complex job for any AI application, as will AI assistance for doctors diagnosing patients, but for every example like this there will be dozens of examples of simple automation: mundane automation where the process has to correct itself every now and then to recognise anomalies, but which is essentially simple data processing, and it is here that white-collar jobs come under threat.

This is where accessibility will win out. It won’t take an award-winning developer to create these programmes; it’ll probably take a normal one who has been given the tools needed to gain a better understanding of machine learning. A good example is in accountancy. The financial director of a company needs to know accountancy and the financial trade inside out, while a book-keeper whose role is to manage the payroll doesn’t need such a complex understanding of the trade. Some AI applications will be incredibly in-depth and complicated, some will not.

“Today, AWS and Microsoft announced Gluon, a new open source deep learning interface which allows developers to more easily and quickly build machine learning models, without compromising performance,” said AWS’ Matt Wood.

“Gluon provides a clear, concise API for defining machine learning models using a collection of pre-built, optimized neural network components. Developers who are new to machine learning will find this interface more familiar to traditional code, since machine learning models can be defined and manipulated just like any other data structure.

“More seasoned data scientists and researchers will value the ability to build prototypes quickly and utilize dynamic neural network graphs for entirely new model architectures, all without sacrificing training speed.”

Deep learning engines like Apache MXNet, Microsoft Cognitive Toolkit and TensorFlow have managed to speed up the training process, but they still require the developer to build the training model itself, which is not a simple or quick task. Other frameworks go the other way, allowing for a simple build, but the training takes longer. Gluon aims to sit in the middle.
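To show what that middle ground looks like in practice, here is a minimal sketch of defining and training a small network with Gluon on the MXNet backend; the layer sizes, optimiser settings and dummy data are illustrative assumptions rather than a recommended configuration.

```python
# Minimal Gluon sketch: a small network from pre-built components plus one
# training step. Layer sizes, learning rate and dummy data are illustrative.
from mxnet import autograd, gluon, nd

net = gluon.nn.Sequential()
net.add(gluon.nn.Dense(64, activation='relu'),  # pre-built, optimised layers
        gluon.nn.Dense(10))
net.initialize()

loss_fn = gluon.loss.SoftmaxCrossEntropyLoss()
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.1})

data = nd.random.uniform(shape=(32, 784))   # dummy batch of 32 examples
label = nd.zeros((32,))                     # dummy labels

with autograd.record():        # record operations so gradients can be computed
    loss = loss_fn(net(data), label)
loss.backward()                # backpropagate
trainer.step(batch_size=32)    # update the parameters
```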

It is a simple idea, but the best ones usually are. Every developer will want to play around with machine learning, and to incorporate it into their work in some manner, but not all will have the skillset. AWS and Microsoft are trying to simplify the process to democratize what is being touted as a fairly significant breakthrough.

 


 

Telia tries shocking new strategy: improving customer experience

Telcos in the UK might not understand what it means to be customer centric, but the Swedes seem to be having a solid crack at it. Well, Telia at least.

The point of attack here is an app called ‘Min Mobile’, developed by a company called eBuilder, in which Telia has a non-controlling stake. Essentially it is a platform which collects information about you and how you use your device, and then offers advice on how you can improve performance.

It’s an interesting strategy to re-engage customers, who are starting to forget about the operator. Telia’s Gustav Berghog highlighted to us that the user is now more concerned with the handset manufacturer and the flashy content providers/OTTs; there is a risk the operator will be thought of as nothing more than a commodity, and therefore traded out without much thought or emotional loss.

Ideas like ‘Min Mobile’ are designed to take Telia back into the customer’s life. The team wants to show the operator is more than just a connectivity provider by delivering an experience which adds value. This idea of ‘positive discounting’, as Berghog describes it, moves the operator relationship away from being purely transactional and aims to create an element of loyalty. That’s ultimately what ‘Min Mobile’ is: a customer retention strategy.

So how does the app work? Once downloaded, the app monitors how you use your device and aims to predict any flaws or errors on the device. It currently monitors four areas: storage, battery, general performance and device condition/age. There are plans to extend into others, but these address the main consumer pain points for the moment.

After monitoring your device for a while, the app might figure out that your battery performance is 15% worse than that of other users on the same device. Using this information, Telia can make communication with you much more personalised. They might send you a message with tips focused on improving battery life, while your partner might get one on storage tricks, if that was the issue highlighted on their device. It is much more appealing, a step away from the generic ‘engagement’ messaging which most operators make use of, and it is pretty useful as well.
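The logic behind such a message could be as simple as the hypothetical sketch below, which compares a user’s metrics against the average for the same handset model; the metric names, averages and threshold are illustrative assumptions, not Telia’s actual implementation.

```python
# Hypothetical sketch: pick a personalised tip by comparing a user's device
# metrics against fleet averages for the same handset model.
PEER_AVERAGES = {  # illustrative averages per model
    "ModelX": {"battery_hours": 20.0, "free_storage_gb": 12.0},
}

def pick_tip(model, battery_hours, free_storage_gb, threshold=0.15):
    peers = PEER_AVERAGES[model]
    if battery_hours < peers["battery_hours"] * (1 - threshold):
        return "battery-saving tips"
    if free_storage_gb < peers["free_storage_gb"] * (1 - threshold):
        return "storage clean-up tips"
    return None  # nothing notable; don't spam the customer

# This user's battery lasts roughly 15% less than average for the same model
print(pick_tip("ModelX", battery_hours=16.5, free_storage_gb=14.0))
# -> battery-saving tips
```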

Data usage is another area which the team might investigate. By monitoring your geographical location and where you turn your wifi on, a pattern will soon emerge. For those who are data conscious, a small reminder to turn on your wifi would be a good little value-add. These are not groundbreaking ideas, but tie enough of them together and they start to make a difference.

And it seems the Swedes like it as well. Since launching in January, the app has been downloaded 100,000 times, with 76% of those downloads retained, and it has a rating of 4.5/5 on Google Play. Among those who have the app, the Net Promoter Score is 29, compared to a score of 1 among those who don’t. Berghog wasn’t able to say whether this has had a direct positive impact on churn rates, but the early signs are certainly good ones.

It is worth noting that Telia also plans to make money off this data. By collecting information about storage, battery, general performance and device age/condition, and combining it with other data sets such as customer demographics, handset type and historical upgrade behaviour, Telia can start to develop a purchase pattern for each customer. This can be used to approach the customer at the right time to renew an agreement or upsell more premium products.

Customer retention is clearly an objective for Telia, and creating these purchase patterns means the team can engage the customer earlier in the process. Potentially the team can start that conversation before the customer gets curious about other deals.

Berghog thinks there is another way the team can provide value: by becoming a bit of a broker for the mobile industry. All the data which has been collected so far not only allows Telia to increase engagement, avoid churn and upsell new products, it also tells the team how you specifically use your device. An ambition for Berghog is to become an independent advisor to the customer.

Imagine you currently have a Samsung handset. With all of this information, Telia might be able to say that, because of the way you use the device, the new Huawei model might be more suitable. It might also be able to suggest not updating to the newest version of an operating system, for example, because that would not suit the way you use your device. Helping the customer make more informed decisions is one way in which Berghog feels Telia can add value and create an emotional connection with the customer.

The best ideas are the ones where both sides of the equation feel they have gained something. This is one of the instances where that could be true. The customer gets a better experience, and potentially a better deal, while Telia increases customer loyalty. It is still early days; Berghog highlighted that the team needs to validate the benefits to Telia while also scaling to the rest of the user base, but the early signs are certainly positive.