Nvidia takes 5G to the edge with help from Ericsson and Red Hat

Graphics chip maker Nvidia has unveiled its EGX Edge Supercomputing Platform that is designed to boost 5G, IoT and AI processing at the edge of the network

Nvidia has long been the market leader in GPUs (graphics processing units), which has enabled it to build a strong position in supercomputing, where the parallel processing qualities of GPUs come in especially handy. This EGX initiative seems to be Nvidia’s attempt to translate that position from the datacentre to edge computing.

“We’ve entered a new era, where billions of always-on IoT sensors will be connected by 5G and processed by AI,” said Jensen Huang, Nvidia CEO. “Its foundation requires a new class of highly secure, networked computers operated with ease from far away. We’ve created the Nvidia EGX Edge Supercomputing Platform for this world, where computing moves beyond personal and beyond the cloud to operate at planetary scale.”

There seems to be a fair bit of support for this new platform, with a bunch of companies and even a couple of US cities saying they’re already involved. “Samsung has been an early adopter of both GPU computing and AI from the beginning,” said Charlie Bae, EVP of foundry sales and marketing at Samsung Electronics. “NVIDIA’s EGX platform helps us to extend these manufacturing and design applications smoothly onto our factory floors.”

“At Walmart, we’re using AI to define the future of retail and re-think how technology can further enhance how we operate our stores,” said Mike Hanrahan, CEO of Walmart Intelligent Retail Lab. “With NVIDIA’s EGX edge computing platform, Walmart’s Intelligent Retail Lab is able to bring real-time AI compute to our store, automate processes and free up our associates to create a better and more convenient shopping experience for our customers.”

On the mobile side, Ericsson is getting involved to build virtualized 5G RANs on EGX. As you would expect, the reason is all about being able to introduce new functions and services more easily and flexibly. More specifically, Ericsson hopes the platform will make virtualizing the complete RAN solution cheaper and easier.

“5G is set to turbocharge the intelligent edge revolution,” said Huang. “Fusing 5G, supercomputing, and AI has enabled us to create a revolutionary communications platform supporting, someday, trillions of always-on, AI-enabled smart devices. Combining our world-leading capabilities, Nvidia and Ericsson are helping to invent this exciting future.”

On the software side a key partner for all this virtualized 5G fun will be Red Hat, which is getting its OpenShift Kubernetes container platform involved. It will combine with Nvidia’s own Aerial software developer kit to help operators to make the kind of software-defined RAN tech that can run on EGX.

“The industry is ramping 5G and the ‘smart everything’ revolution is beginning,” said Huang. “Billions of sensors and devices will be sprinkled all over the world enabling new applications and services. We’re working with Red Hat to build a cloud-native, massively scalable, high-performance GPU computing infrastructure for this new 5G world. Powered by the Nvidia EGX Edge Supercomputing Platform, a new wave of applications will emerge, just as with the smartphone revolution.”

Things seem to have gone a bit quiet on the virtualization front, with NFV, SDN, etc having apparently entered the trough of disillusionment. Nvidia is a substantial cloud player these days, however, and judging by the level of support this new initiative has, EGX could be a key factor in moving the telecoms cloud onto the slope of enlightenment.

Nvidia brings its cloud gaming to Android

2019 was already looking like a promising year for cloud gaming, and now that Nvidia is bringing its own service, GeForce NOW, to Android, the streaming scrap is heating up.

Specifics on timing have not been released just yet, and neither have pricing details, though Nvidia has said its streaming service will be available on Android devices over the coming months. With the service already available on PC and Mac, entering the Android world adds the potential of another two billion devices.

“Already in beta to the delight of 1 billion underpowered PCs that aren’t game ready, GeForce NOW will soon extend to one of the most popular screens in the world, Android phones – including flagship devices from LG and Samsung,” the team said on its blog.

“Just like on PC, Mac and Shield TV, when the Android mobile app releases it’ll be in beta. We’ll continue improving and optimizing the experience.”

The move into Android will take Nvidia into direct competition with both Google’s Stadia and Microsoft’s xCloud. There are of course pros and cons for all the available services, though a couple of bonuses for Nvidia will pique the interest of some gamers. Firstly, gamers will not need to buy a second copy of titles they already own to play them on the cloud service, while GeForce RTX graphics performance will be introduced soon enough.

Google was the first to plug the potential of cloud gaming back in March, promising users they will be able to access their games at all times, and on virtually any screen. The initial launch will be for £8.99 a month, though the team does plan on launching a ‘freemium’ alternative soon after. As you can imagine, Google is always looking for ways the complex data machine can offer content to users for profit.

It didn’t take long for Microsoft to launch its own alternative following the press Google collected. Hyped as the ‘Netflix of video games’, Microsoft will charge $9.99 to access a range of Xbox One and Xbox 360 titles on any screen. Like Stadia and GeForce NOW, a controller would have to be plugged into Android devices.

There are some ridiculous figures being bandied around concerning the percentage of traffic cloud gaming will account for during the 5G era, but it is a segment worth keeping an eye on.

Nvidia wins the Mellanox courtship for $6.9 billion to boost datacenter offering

InfiniBand and ethernet technology company Mellanox has been attracting attention from a range of different suitors over the last few months, but Nvidia has won the prize.

Nvidia and Mellanox have officially announced the pair have reached a definitive agreement which will see the GPU giant sign a cheque for approximately $6.9 billion, or $125 per share. After Microsoft, Intel and Xilinx were all reported to be courting Mellanox, Nvidia comes home with the goods.

“The emergence of AI and data science, as well as billions of simultaneous computer users, is fuelling skyrocketing demand on the world’s datacenters,” said Jensen Huang, CEO of Nvidia. “Addressing this demand will require holistic architectures that connect vast numbers of fast computing nodes over intelligent networking fabrics to form a giant datacenter-scale compute engine.

“We share the same vision for accelerated computing as Nvidia,” said Eyal Waldman, CEO of Mellanox. “Combining our two companies comes as a natural extension of our longstanding partnership and is a great fit given our common performance-driven cultures. This combination will foster the creation of powerful technology and fantastic opportunities for our people.”

The acquisition announcement arrives at a useful time for Nvidia, a company which is seeking to expand outside its traditional markets. The GPU giant has been heavily reliant on the gaming and cryptocurrency segments in years gone by, though dampened demand hit the financials last year. That said, the growing datacenter business has offset some of the negative trends.

Looking at the most recent financials, datacenter sales at Nvidia accounted for 31% of the total during the last period, up from 19% in the previous year. Adding Mellanox into the mix will further diversify the business, cementing the pursuit of alternative revenues. The pair claim Nvidia’s computing platform and Mellanox’s interconnects power over 250 of the world’s TOP500 supercomputers.

Another interesting facet to this story is the influence of activist investor Starboard Value.

Having taken a 10.7% stake in Mellanox in November 2017, the team moved to have the entire board replaced in an attempt to refocus the activities of the business. Starboard Value believed the business was focusing too much time on R&D, missing out on commercial opportunities.

Although this could be seen as a nightmare scenario for technologists, quarterlies did improve and share price has increased by 118% since the Starboard Value entry. Say what you will about the disruptive influence of activist investors, but this is an outcome few investors will complain about.

Alarm bells are being rung about the Chinese economy

Nvidia is the latest company to blame a downward forecast revision on deteriorating economic conditions in China.

In an announcement today Nvidia said it expects Q4 2018 revenue to be $2.2 billion instead of its previous guidance of $2.7 billion, a downgrade of around 20%. “Nvidia’s previous fourth-quarter guidance had embedded a sequential decline due to excess mid-range channel inventory following the cryptocurrency boom,” said the announcement.

“The reduction in that inventory and its impact on the business have proceeded largely inline with management’s expectations. However, deteriorating macroeconomic conditions, particularly in China, impacted consumer demand for Nvidia gaming GPUs.

To make matters worse the initial guidance had been way below analyst expectations of $3.4 billion, with a crash in cryptocurrency-related computing blamed for that too, so it looks like Nvidia needed something else to blame this time. At time of writing Nvidia’s share price was down 14% from yesterday’s close and it has halved since its peak at the start of October 2018.

At the start of this month Apple downgraded its anticipated Q4 revenues from $91 billion to $84 billion (around 10%), with China culpable once more. “While we anticipated some challenges in key emerging markets, we did not foresee the magnitude of the economic deceleration, particularly in Greater China,” said Apple CEO Tim Cook at the time.

A lot of attention is understandably going towards the trade aggros between the US and China, but this macroeconomic decline seems too entrenched to be just down to that. The official line is that China’s lowest GDP growth for decades is all part of the grand plan to stop the economy overheating. But the problem is few people seem to trust the public numbers.

They probably do trust the reporting of US listed companies, however, and will interpret warnings like these and others as a truer measure of the state of the Chinese economy. If more companies blame their rubbish earnings on China there is a real risk of self-reinforcing panic.

Nvidia share price slips as crypto boom fades

Nvidia has reported a 40% year-on-year revenue boost, though weakening demand in the cryptocurrency business unit dampened excitement.

With total revenues standing at $3.12 billion for the three months, cryptocurrency-specific products declined to approximately $100 million, with the downward spiral set to continue through the next couple of quarters. While the business had anticipated the demand to remain throughout the year, projections now account for no contributions going forward.

Looking at the other individual business units, Gaming revenue was $1.8 billion, up 52% from a year ago and up 5% sequentially. Professional Visualization revenue reached $281 million, up 20% year-on-year and up 12% sequentially. Data-centre revenue was $760 million, up 83% from 2017 and 8% sequentially, led by strong sales of Volta architecture products. OEM and IP revenue was $116 million, down 54%, with the crypto business dragging everything down here. Automotive grew 13% to $161 million.
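As a quick sanity check, the stated growth rates can be inverted to estimate what each unit earned a year earlier. This is a rough sketch based only on the figures quoted above; Nvidia’s actual reported prior-year numbers may differ slightly due to rounding.

```python
# Segment revenue figures from the quarter (all values in $M), with
# the year-on-year growth rates as quoted in the article.
segments = {
    # name: (revenue_now, yoy_growth)
    "Gaming": (1800, 0.52),
    "Professional Visualization": (281, 0.20),
    "Datacentre": (760, 0.83),
    "OEM and IP": (116, -0.54),
    "Automotive": (161, 0.13),
}

for name, (now, growth) in segments.items():
    # Invert the growth rate to recover the implied year-ago revenue
    prior = now / (1 + growth)
    print(f"{name}: ${now}M now, implies ~${prior:.0f}M a year ago")

# The segments should roughly sum to the reported $3.12 billion total
total = sum(now for now, _ in segments.values())
print(f"Segment total: ~${total}M vs reported $3,120M")
```

The implied year-ago Gaming figure of roughly $1.18 billion, for instance, follows directly from $1.8 billion being 52% higher, and the five segments sum to within a couple of million of the reported total.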

“Growth across every platform – AI, Gaming, Professional Visualization, self-driving cars – drove another great quarter,” said Jensen Huang, CEO of Nvidia. “Fuelling our growth is the widening gap between demand for computing across every industry and the limits reached by traditional computing. Developers are jumping on the GPU-accelerated computing model that we pioneered for the boost they need.

“We announced Turing this week. Turing is the world’s first ray-tracing GPU and completes the Nvidia RTX platform, realizing a 40-year dream of the computer graphics industry. Turing is a giant leap forward and the greatest advance for computing since we introduced CUDA over a decade ago.”

Speaking at the SIGGRAPH professional graphics conference in Vancouver last week, Huang unveiled Turing, Nvidia’s eighth-generation GPU architecture, bringing ray tracing to real-time graphics. The management team believe this is the company’s most important innovation since the invention of the CUDA GPU more than a decade ago.

Turing is claimed to fundamentally change how computer graphics will be done, and is the result of more than 10,000 engineering-years of effort. It had better work now.

Nvidia builds new AI platform to give robots better brains

Nvidia has announced the general availability of its Isaac platform, designed to bring the futuristic world of robots to manufacturing, logistics, agriculture, construction and other industries.

The platform, launched at Computex 2018, includes hardware, software and a virtual-world robot simulator, as well as Jetson Xavier, which Nvidia claims is the world’s first computer designed specifically for robotics.

“AI is the most powerful technology force of our time,” said CEO Jensen Huang. “Its first phase will enable new levels of software automation that boost productivity in many industries. Next, AI, in combination with sensors and actuators, will be the brain of a new generation of autonomous machines. Someday, there will be billions of intelligent machines in manufacturing, home delivery, warehouse logistics and much more.”

Looking specifically at Jetson Xavier, the box contains 9 billion transistors, delivering more than a trillion operations per second while using a third of the energy of a lightbulb. Jetson Xavier has six kinds of high-performance processors, including a Volta Tensor Core GPU, an eight-core ARM64 CPU, dual NVDLA deep learning accelerators, an image processor, a vision processor and a video processor. This level of performance is critical due to the complexity of robotics, with processes such as sensor processing, odometry, localization and mapping, vision and perception, and path planning.
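Taken at face value, those two quoted figures imply an energy efficiency that can be sketched in a few lines. The 60 W incandescent-bulb baseline is our illustrative assumption, not an Nvidia figure.

```python
# Rough efficiency arithmetic from the figures quoted above.
ops_per_second = 1e12        # article: "more than a trillion operations per second"
bulb_watts = 60.0            # assumed: a standard incandescent bulb
chip_watts = bulb_watts / 3  # article: "a third of the energy of a lightbulb"

# Operations per joule = operations per second / watts
ops_per_joule = ops_per_second / chip_watts
print(f"~{ops_per_joule:.0e} operations per joule")
```

On those assumptions the box delivers on the order of 50 billion operations per joule, and since the article says "more than" a trillion operations per second, that is a floor rather than a precise figure.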

On the Isaac Robotics Software side of things, Nvidia has billed the platform as a ‘toolbox’ for the simulation, training, verification and deployment of Jetson Xavier. The robotics software consists of Isaac SDK, APIs and tools to develop robotics algorithm software; Isaac IMX, the platform’s Intelligent Machine Acceleration applications; and Isaac Sim, a virtual simulation environment for training.

Nvidia will have a lot to live up to with these announcements. Aside from making big promises to a segment of artificial intelligence which has struggled to make progress, the team has stated the $1,299 box will have the same processing power as a $10,000 workstation.

Nvidia claims autonomous driving breakthrough, but let’s see

Nvidia has attempted to jump-start the CES PR euphoria, claiming it can achieve Level 5 autonomous driving right now with its Xavier processors.

The chip itself was initially announced 12 months ago, but this quarter has seen the processor delivered to customers. Testing has begun, and Nvidia has been stoking the fire with a very bold claim.

“Delivering the performance of a trunk full of PCs in an auto-grade form factor the size of a license plate, it’s the world’s first AI car supercomputer designed for fully autonomous Level 5 robotaxis,” Nvidia said on its blog.

Hyping up a product to almost undeliverable heights is of course nothing new in the tech world, and Nvidia has learned from the tried and tested playbook: make an extraordinary claim for a technology which is unlikely to be delivered to the real world for decades.

Xavier will form part of Nvidia’s Drive software stack and contains 9 billion transistors. It is the product of a four-year project, sucking up $2 billion in research and development funds, with contributions from 2,000 engineers. It is built around an 8-core CPU, a 512-core Volta GPU, a deep learning accelerator, computer vision accelerators and 8K HDR video processors. All to deliver Level 5 autonomous driving.

Just as a recap, Level 5 autonomous driving is the holy grail. At this point, humans will not be needed to interact with the car at any point:

  • Level 0: Full-time performance by the human driver
  • Level 1: Driving assistance of either steering or acceleration/deceleration using information about the driving environment. Human drives the rest of the time.
  • Level 2: The system can be responsible for both steering and acceleration/deceleration using information about the driving environment. This could be described as hands off automation.
  • Level 3: This is known officially as conditional automation. The autonomous driving system will be responsible for almost all aspects of the dynamic driving task. Humans will still need to be aware to intervene in certain circumstances. This could be described as eyes off automation.
  • Level 4: The car will be almost fully-autonomous, though there might be rare circumstances where a human would have to intervene. Aside from the most extreme circumstances, this could be described as mind off automation.
  • Level 5: Full autonomy. You don’t even have to be awake.
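The taxonomy above maps naturally onto a small enumeration, which makes the key distinction (at which level the human can stop paying attention) easy to express. This is an illustrative sketch; the names and the helper function are paraphrased from the list above, not any official SAE API.

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE automation levels, paraphrased from the list above."""
    NO_AUTOMATION = 0      # human performs all driving
    DRIVER_ASSISTANCE = 1  # steering OR acceleration assisted
    PARTIAL = 2            # "hands off": steering AND acceleration
    CONDITIONAL = 3        # "eyes off": system drives, human on standby
    HIGH = 4               # "mind off": rare human intervention
    FULL = 5               # no human needed at all

def driver_must_stay_alert(level: SAELevel) -> bool:
    # Up to and including Level 3, the human must still be ready to
    # intervene; only Levels 4 and 5 relax that requirement.
    return level <= SAELevel.CONDITIONAL

print(driver_must_stay_alert(SAELevel.CONDITIONAL))  # True
print(driver_must_stay_alert(SAELevel.FULL))         # False
```

The jump Nvidia is claiming, from today’s production systems (mostly Level 2) straight to `SAELevel.FULL`, is exactly why the claim deserves scepticism.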

During the same pre-CES event, the team also announced AR products, new partnerships and solutions in the gaming space, but Level 5 autonomy is the headline maker. Reaching this level is all well and good, but the technology does not have a foot in reality just yet. Nvidia might be there in terms of technological development, so it claims, but that does not mean autonomous cars will be hitting the roads any time soon. Not by a long way.

Firstly, while the processors might be there, the information is not. Companies like Google have been doing a fantastic job of creating mapping solutions, but the detail is still not there for every single location on the planet. Until you can accurately map every single scenario and location a car may or may not end up in, it is impossible to state with 100% accuracy that Level 5 autonomous vehicles are achievable.

Secondly, to live the autonomous dream, a smart city is necessary. To optimize driving conditions, the car will need to receive data from the traffic lights to understand the flow of vehicles, and also any unusual circumstances. To ensure safety and performance, connectivity will have to be ubiquitous. The smart city dream is miles away, and therefore the autonomous vehicles dream is even further.

Thirdly, even if the technology is there, everything else isn’t. Regulations are not set up to support autonomous vehicles, and neither is the insurance industry or the judicial system. If an autonomous vehicle is involved in a fatal incident, who gets prosecuted? Do individuals need to be insured if they are asleep in the car? There are many unanswered questions.

Finally, when will we accept autonomous vehicles? Some people are incapable of sitting in a passenger seat while a loved one drives, how will these individuals react to a computer taking charge? Culturally, it might be a long time before the drivers of the world are comfortable handing control over to a faceless piece of software.

Nvidia might be shouting the loudest in the race to autonomous vehicles right now, but let’s put things in perspective; it doesn’t actually mean anything.

Nvidia software raises question as to whether creativity actually exists

Software developed by Nvidia is building unique images, raising the question of whether creativity is a real thing.

According to the New York Post, a small team of Nvidia researchers is training software to use certain features from celebrity photos to create new and unique images. And the team isn’t stopping with faces either. The software can also generate unique images of everyday items such as horses, buses, bicycles and plants.

The project is part of Nvidia’s greater ambitions of carving out a greater influence in the technology world. AI is at the heart of these efforts, but it also cracks an area of AI which has baffled many: creativity.

Computational creativity is one of the pillars of artificial intelligence which very few people talk about. In fact, few people actually recognise any of the pillars, instead assuming topics like natural language processing and machine learning are synonymous with AI. In reality, AI is the umbrella term which encompasses technologies such as natural language processing and machine learning, as well as computational perception and contextual awareness. Computational creativity is another.

But this is a potentially controversial area, as it is supposed to be a sanctuary when the computers take the rest of the jobs away from us. Unique thought and creating new concepts are supposed to be something human. Can a computer be creative when it doesn’t have a soul, or do we even understand what creativity actually is?

When you look at the most basic definition of creativity, we think a computer can be.

If you assume the purpose of creativity is to create something novel, then what Nvidia has achieved is genuinely creative. But, we can hear the naysayers already; this isn’t creative as it is simply merging together existing features. This is an understandable argument, but is this not what artists of today would call inspiration?

If a painter applies Monet’s techniques to their work, is that inspiration or copying? If an author enjoys The Great Gatsby and writes in a similar descriptive manner, is that inspiration or plagiarism? If a singer puts his own unique twist on a cover song, is that person nothing more than an impersonator?

Nvidia has created software which assesses the information, identifies a gap and then uses the best elements of what it has at its disposal to create something which wasn’t there to start with. Just because there is a scientific methodology behind the process does not mean it is not creative.

There will of course be people who disagree, but then you have to go back to one purpose of creativity (not the only purpose, of course): the formulation of something which is unique, works and, in a business sense, addresses a gap in the market. On a theoretical basis, Nvidia has achieved this.

So what does this mean? Nothing right now, but in the long-term there could be opportunities for AI to think of new business models, or advertising campaigns, or new product ideas. Maybe we will become redundant after all…

Nvidia unveils Titan V with 110 Teraflops of deep learning power

Nvidia has unleashed a new desktop GPU, with claims the beast is taming 110 teraflops of horsepower under the hood, a moody nine times that of its puny predecessor.

Designed for computational processing for machine learning researchers, developers and data scientists, its 21.1 billion transistors can deliver 110 teraflops of processing power, nine times more than the Titan X, with what the company describes as ‘extreme energy efficiency’. The technology version of roid heads must be frothing at the mouth.

“Our vision for Volta was to push the outer limits of high performance computing and AI. We broke new ground with its new processor architecture, instructions, numerical formats, memory architecture and processor links,” said CEO Jensen Huang.

“With TITAN V, we are putting Volta into the hands of researchers and scientists all over the world. I can’t wait to see their breakthrough discoveries.”

So where does the extra power come from? Nvidia has pointed towards a redesign of the streaming multiprocessor at the centre of the GPU, which it claims doubles energy efficiency compared to the previous generation, resulting in a boost in performance within the same power envelope. The team has also highlighted independent parallel integer and floating-point data paths, as well as a new combined L1 data cache and shared memory unit, which apparently improves performance and simplifies programming.

Some might suggest it is a step backwards, as this is a product designed for local use rather than the cloud, but there will be those who prefer the convenience of running workloads on a local machine. Customers will be able to connect to the Nvidia GPU Cloud to make use of software updates, including Nvidia-optimized deep learning frameworks and third-party managed HPC applications. And all this for a cool $2,999.

Nvidia stakes another bold claim for autonomous driving

Google has recently set the roads alight with claims of cracking the self-driving conundrum by the end of the year, and it didn’t take long for Nvidia to start shouting.

The chip company has launched a new system, codenamed Pegasus, which it says will be able to handle Level 5 driverless vehicles – the highest level of automation. And we’re talking pretty much right now. Most people are talking 2020 or 2021 at the earliest for notable steps forward, but Nvidia has said the Pegasus chip will be available to automotive partners in the second half of 2018.

The system will pair two of Nvidia’s Xavier system-on-a-chip processors with two next-generation GPUs with hardware created for accelerating deep learning and computer vision algorithms. The team claim the system will be able to meet the enormous computational power demanded by autonomous driving in a computer the size of a license plate. It is a very bold claim.

“Creating a fully self-driving car is one of society’s most important endeavours – and one of the most challenging to deliver,” said Jensen Huang, Nvidia CEO.

“Driverless cars will enable new ride- and car-sharing services. New types of cars will be invented, resembling offices, living rooms or hotel rooms on wheels. Travelers will simply order up the type of vehicle they want based on their destination and activities planned along the way. The future of society will be reshaped.”

The computational demands of autonomous vehicles should not be underplayed; this complexity is the reason for relatively slow progress to date. Every car will need various high-resolution, 360-degree surround cameras and lidars to detect the surrounding environment, as well as linking directly to mapping technologies of almost perfect accuracy, and factoring in thousands of scenarios of how the environment could change.

All of this has to be done almost instantaneously to ensure safety, which is causing the hold-up. No-one wants to drag around a data centre in their boot, so claims of a license-plate-sized computer which can run the car will certainly get attention. In fact, the team claim Pegasus will be able to process 320 trillion operations per second; that’s 10 times more than its predecessor.
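Those headline numbers can be put in rough perspective with some simple arithmetic. The 30 Hz decision-loop rate below is our illustrative assumption, not an Nvidia figure, and the predecessor figure is simply implied by the "10 times" claim.

```python
# Rough arithmetic on the Pegasus claim above.
pegasus_ops = 320e12                 # article: 320 trillion operations per second
predecessor_ops = pegasus_ops / 10   # implied by the "10 times more" claim

# If the perception/planning loop runs at an assumed 30 Hz, this is
# the compute budget available for each full sense-decide-act cycle.
decision_hz = 30
ops_per_cycle = pegasus_ops / decision_hz

print(f"Implied predecessor: {predecessor_ops:.0e} ops/s")
print(f"Budget per {decision_hz} Hz cycle: {ops_per_cycle:.2e} ops")
```

Even at that rate, every cycle has to ingest and fuse multiple camera and lidar feeds, which is why the "almost instantaneously" requirement, rather than raw throughput alone, is the real constraint.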

And who knows whether the Nvidia system will actually work properly in the real world. It’s all well and good making these claims, but realistically we won’t see any autonomous vehicles on the road for years, if not decades. If you think the bureaucrats move slowly normally, just wait until they start to rewrite the rules of the road.

Another area to consider is whether we’ll be ready for self-driving cars in this decade. Handing over control of a vehicle is a big psychological step to take. Some people don’t like sitting in the passenger seat while someone else drives; imagine the freak-outs which will take place when the car drives itself.

This claim might put Nvidia at the front of the self-driving race, but bear in mind how far away the rest of society is from allowing Level 5 autonomous vehicles; everyone else will catch up in that timeframe.
