Nvidia share price slips as crypto boom fades

Nvidia has reported a 40% year-on-year revenue boost, though weakening demand in its cryptocurrency business unit has dampened excitement.

Total revenue for the three months stood at $3.12 billion, but sales of cryptocurrency-specific products declined to approximately $100 million, with the downward trend set to continue over the next couple of quarters. While the business had anticipated demand holding up throughout the year, its projections now assume no contribution from crypto going forward.

Looking at the other individual business units, Gaming revenue was $1.8 billion, up 52% from a year ago and up 5% sequentially. Professional Visualization revenue reached $281 million, up 20% year-on-year and up 12% sequentially. Data-centre revenue was $760 million, up 83% from 2017 and up 8% sequentially, led by strong sales of Volta architecture products. OEM and IP revenue was $116 million, down 54% year-on-year, dragged down by the crypto decline. Automotive grew 13% to $161 million.

“Growth across every platform – AI, Gaming, Professional Visualization, self-driving cars – drove another great quarter,” said Jensen Huang, CEO of Nvidia. “Fuelling our growth is the widening gap between demand for computing across every industry and the limits reached by traditional computing. Developers are jumping on the GPU-accelerated computing model that we pioneered for the boost they need.

“We announced Turing this week. Turing is the world’s first ray-tracing GPU and completes the Nvidia RTX platform, realizing a 40-year dream of the computer graphics industry. Turing is a giant leap forward and the greatest advance for computing since we introduced CUDA over a decade ago.”

Speaking at the SIGGRAPH professional graphics conference in Vancouver last week, Huang unveiled Turing, Nvidia’s eighth-generation GPU architecture, bringing ray tracing to real-time graphics. The management team believe this is the company’s most important innovation since the invention of the CUDA GPU more than a decade ago.

Turing is claimed to fundamentally change how computer graphics will be done, and is the result of more than 10,000 engineering-years of effort. It had better work, then.

Nvidia builds new AI platform to give robots better brains

Nvidia has announced the general availability of its Isaac platform, designed to bring the futuristic world of robotics to manufacturing, logistics, agriculture, construction and other industries.

The platform, launched at Computex 2018, includes hardware, software and a virtual-world robot simulator, as well as Jetson Xavier, which Nvidia claims is the world’s first computer designed specifically for robotics.

“AI is the most powerful technology force of our time,” said CEO Jensen Huang. “Its first phase will enable new levels of software automation that boost productivity in many industries. Next, AI, in combination with sensors and actuators, will be the brain of a new generation of autonomous machines. Someday, there will be billions of intelligent machines in manufacturing, home delivery, warehouse logistics and much more.”

Looking specifically at Jetson Xavier, the box contains 9 billion transistors, delivering more than a trillion operations per second while using a third of the energy of a lightbulb. Jetson Xavier has six kinds of high-performance processors, including a Volta Tensor Core GPU, an eight-core ARM64 CPU, dual NVDLA deep learning accelerators, an image processor, a vision processor and a video processor. This level of performance is critical given the complexity of robotics, which spans processes such as sensor processing, odometry, localization and mapping, vision and perception, and path planning.

On the Isaac Robotics Software side of things, Nvidia has billed the platform as a ‘toolbox’ for the simulation, training, verification and deployment of Jetson Xavier. The robotics software consists of the Isaac SDK, a set of APIs and tools to develop robotics algorithm software; Isaac IMX, the platform’s Intelligent Machine Acceleration applications; and Isaac Sim, a virtual simulation environment for training.

Nvidia will have a lot to live up to with these announcements. Aside from making big promises to a segment of artificial intelligence which has struggled to make progress, the team has stated the $1,299 box will have the same processing power as a $10,000 workstation.

Nvidia claims autonomous driving breakthrough, but let’s see

Nvidia has attempted to jump-start the CES PR euphoria, claiming it can achieve Level 5 autonomous driving right now with its Xavier processors.

The chip itself was initially announced 12 months ago, but this quarter has seen the processor delivered to customers. Testing has begun, and Nvidia has been stoking the fire with a very bold claim.

“Delivering the performance of a trunk full of PCs in an auto-grade form factor the size of a license plate, it’s the world’s first AI car supercomputer designed for fully autonomous Level 5 robotaxis,” Nvidia said on its blog.

Hyping up a product to almost undeliverable heights is of course nothing new in the tech world, and Nvidia has learned from the tried-and-tested playbook: make an exceptional claim for a technology which is unlikely to reach the real world for decades.

Xavier, which contains 9 billion transistors, will form part of Nvidia’s Drive software stack. It is the product of a four-year project, sucking up $2 billion in research and development funds, with contributions from 2,000 engineers. It is built around an 8-core CPU, a 512-core Volta GPU, a deep learning accelerator, computer vision accelerators and 8K HDR video processors. All to deliver Level 5 autonomous driving.

Just as a recap, Level 5 autonomous driving is the holy grail: the point at which humans are not needed to interact with the car at all (a rough code sketch of the levels follows the list):

  • Level 0: Full-time performance by the human driver
  • Level 1: Driving assistance with either steering or acceleration/deceleration, using information about the driving environment. The human drives the rest of the time.
  • Level 2: The system can be responsible for both steering and acceleration/deceleration, using information about the driving environment. This could be described as hands-off automation.
  • Level 3: This is known officially as conditional automation. The autonomous driving system is responsible for almost all aspects of the dynamic driving task, but humans still need to be ready to intervene in certain circumstances. This could be described as eyes-off automation.
  • Level 4: The car is almost fully autonomous, though there might be rare circumstances where a human would have to intervene. Aside from the most extreme circumstances, this could be described as mind-off automation.
  • Level 5: Full autonomy. You don’t even have to be awake.
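For readers who prefer their taxonomies in code, here is a minimal sketch of the levels above as a Python enum. The naming and the helper function are ours and purely illustrative, not taken from the SAE standard or from Nvidia:

    # Illustrative mapping of the automation levels described above.
    from enum import IntEnum

    class AutomationLevel(IntEnum):
        NO_AUTOMATION = 0       # human performs everything, full time
        DRIVER_ASSISTANCE = 1   # system steers OR controls speed; human does the rest
        PARTIAL = 2             # "hands-off": system steers AND controls speed
        CONDITIONAL = 3         # "eyes-off": system drives, human ready to intervene
        HIGH = 4                # "mind-off": human needed only in rare, extreme cases
        FULL = 5                # full autonomy: no human interaction required

    def driver_must_be_available(level: AutomationLevel) -> bool:
        """Per the descriptions above, only Level 5 needs no human at all."""
        return level < AutomationLevel.FULL

    print(driver_must_be_available(AutomationLevel.FULL))   # False

In these terms, Nvidia’s claim is that Xavier is ready for AutomationLevel.FULL; the rest of this article is about why that is not the whole story.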

During the same pre-CES event, the team also announced AR products, new partnerships and solutions in the gaming space, but Level 5 autonomy is the headline maker. Reaching this level is all well and good, but the technology does not have a foot in reality just yet. Nvidia might be there in terms of technological development, so it claims, but that does not mean autonomous cars will be hitting the roads any time soon. Not by a long way.

Firstly, while the processors might be there, the information is not. Companies like Google have been doing a fantastic job at creating mapping solutions, but the detail is still not there for every single location on the planet. Until you can accurately map every single scenario and location a car may or may not end up in, it is impossible to state with confidence that Level 5 autonomous vehicles are achievable.

Secondly, to live the autonomous dream, a smart city is necessary. To optimize driving conditions, the car will need to receive data from traffic lights to understand the flow of vehicles, as well as any unusual circumstances. To ensure safety and performance, connectivity will have to be ubiquitous. The smart city dream is miles away, and therefore the autonomous vehicle dream is even further.

Thirdly, even if the technology is there, everything else isn’t. Regulations are not set up to support autonomous vehicles, and neither is the insurance industry nor the judicial system. If an autonomous vehicle is involved in a fatal incident, who gets prosecuted? Do individuals need to be insured if they are asleep in the car? There are many unanswered questions.

Finally, when will we accept autonomous vehicles? Some people are incapable of sitting in a passenger seat while a loved one drives; how will these individuals react to a computer taking charge? Culturally, it might be a long time before the drivers of the world are comfortable handing control over to a faceless piece of software.

Nvidia might be shouting the loudest in the race to autonomous vehicles right now, but let’s put things in perspective; it doesn’t actually mean anything.

Nvidia software raises the question of whether creativity actually exists

Software developed by Nvidia is building unique images, raising the question of whether creativity is a real thing.

According to the New York Post, a small team of Nvidia researchers is training software to use certain features from celebrity photos to create new and unique images. And the team isn’t stopping with faces either. The software can also generate unique images of everyday items such as horses, buses, bicycles and plants.
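The article doesn’t name the technique, but image generation of this kind is typically built on generative adversarial networks, in which a generator learns to produce images that a discriminator cannot tell apart from real photos. A minimal, illustrative training sketch (PyTorch assumed; the model sizes and structure are placeholders, not Nvidia’s code) looks something like this:

    # Minimal GAN training sketch (illustrative only, not Nvidia's code).
    import torch
    import torch.nn as nn

    latent_dim, img_dim = 100, 64 * 64 * 3        # hypothetical sizes

    generator = nn.Sequential(                    # noise -> fake image
        nn.Linear(latent_dim, 1024), nn.ReLU(),
        nn.Linear(1024, img_dim), nn.Tanh())

    discriminator = nn.Sequential(                # image -> "is it real?" score
        nn.Linear(img_dim, 1024), nn.LeakyReLU(0.2),
        nn.Linear(1024, 1), nn.Sigmoid())

    opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
    loss_fn = nn.BCELoss()

    def train_step(real_images):                  # real_images: (batch, img_dim)
        batch = real_images.size(0)
        fake_images = generator(torch.randn(batch, latent_dim))

        # Discriminator: push real photos towards 1, generated ones towards 0.
        opt_d.zero_grad()
        d_loss = (loss_fn(discriminator(real_images), torch.ones(batch, 1)) +
                  loss_fn(discriminator(fake_images.detach()), torch.zeros(batch, 1)))
        d_loss.backward()
        opt_d.step()

        # Generator: fool the discriminator into scoring its fakes as real.
        opt_g.zero_grad()
        g_loss = loss_fn(discriminator(fake_images), torch.ones(batch, 1))
        g_loss.backward()
        opt_g.step()

The ‘creative’ output is whatever the trained generator produces from random noise: faces and objects that exist nowhere in the training set, only in the space the model has learned between its examples.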

The project is part of Nvidia’s broader ambition to carve out a greater influence in the technology world. AI is at the heart of these efforts, but it also cracks open an area of AI which has baffled many: creativity.

Computational creativity is one of the pillars of artificial intelligence which very few people talk about. In fact, few people recognise the pillars at all, instead assuming that topics like natural language processing and machine learning are synonymous with AI. In reality, AI is the umbrella term which encompasses technologies such as natural language processing and machine learning, as well as computational perception and contextual awareness. Computational creativity is another.

But this is a potentially controversial area, as creativity is supposed to be a sanctuary when the computers take the rest of the jobs away from us. Unique thought and the creation of new concepts are supposed to be distinctly human. Can a computer be creative when it doesn’t have a soul, or do we even understand what creativity actually is?

When you look at the most basic definition of creativity, we think a computer can be creative.

If you assume the purpose of creativity is to create something novel, then what Nvidia has achieved is genuinely creative. But we can hear the naysayers already: this isn’t creative, as it is simply merging together existing features. This is an understandable argument, but is this not what artists of today would call inspiration?

If a painter applies Monet’s techniques to their work, is that inspiration or copying? If an author enjoys The Great Gatsby and writes in a similar descriptive manner, is that inspiration or plagiarism? If a singer puts their own unique twist on a cover song, is that person nothing more than an impersonator?

Nvidia has created software which assesses the information, identifies a gap and then uses the best elements of what it has at its disposal to create something which wasn’t there to start with. Just because there is a scientific methodology behind the process does not mean it is not creative.

There will of course be people who disagree, but then you have to go back to the purpose of creativity (not the only purpose, of course): the formulation of something which is unique, works and, in a business sense, addresses a gap in the market. On a theoretical basis, Nvidia has achieved this.

So what does this mean? Nothing right now, but in the long-term there could be opportunities for AI to think of new business models, or advertising campaigns, or new product ideas. Maybe we will become redundant after all…

Nvidia unveils Titan V with 110 Teraflops of deep learning power

Nvidia has unleashed a new desktop GPU, with claims the beast is taming 110 teraflops of horsepower under the hood, some nine times that of its puny predecessor.

Designed for computational processing by machine learning researchers, developers and data scientists, its 21.1 billion transistors can deliver 110 teraflops of processing power, nine times more than the Titan X, along with what the company describes as ‘extreme energy efficiency’. The technology version of roid heads must be frothing at the mouth.
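For context, the 110-teraflop figure is the deep learning (tensor) number from the headline rather than standard FP32 throughput, and it can be sanity-checked with a back-of-envelope calculation. The core count, per-clock throughput and sustained clock below are our assumptions, not figures from the article:

    # Back-of-envelope peak tensor throughput for Titan V (assumptions, not
    # article figures): 640 tensor cores, each completing one 4x4x4 fused
    # multiply-add per clock (64 MACs = 128 FLOPs), at roughly 1.35 GHz sustained.
    tensor_cores = 640
    flops_per_core_per_clock = 128
    clock_hz = 1.35e9

    peak_tflops = tensor_cores * flops_per_core_per_clock * clock_hz / 1e12
    print(f"{peak_tflops:.0f} TFLOPS")   # ~111 TFLOPS, close to the quoted 110

    # The "nine times" comparison holds if the predecessor is taken at roughly
    # 12 FP32 teraflops (again an assumption): 110 / 12 is roughly 9.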

“Our vision for Volta was to push the outer limits of high performance computing and AI. We broke new ground with its new processor architecture, instructions, numerical formats, memory architecture and processor links,” said CEO Jensen Huang.

“With TITAN V, we are putting Volta into the hands of researchers and scientists all over the world. I can’t wait to see their breakthrough discoveries.”

So where does the extra power come from? Nvidia has pointed towards a redesign of the streaming multiprocessor at the centre of the GPU, which it claims doubles energy efficiency compared to the previous generation, delivering a boost in performance within the same power envelope. The team has also highlighted independent parallel integer and floating-point data paths, as well as a new combined L1 data cache and shared memory unit which apparently improves performance and simplifies programming.

Some might suggest it is a step backwards, as this is a product designed for local use rather than the cloud, but there will be those who prefer the convenience of running workloads on a local machine. Customers will be able to connect to the Nvidia GPU Cloud to make use of software updates, including Nvidia-optimized deep learning frameworks and third-party managed HPC applications. And all this for a cool $2,999.

Nvidia stakes another bold claim for autonomous driving

Google has recently set the roads alight with claims of cracking the self-driving conundrum by the end of the year, and it didn’t take long for Nvidia to start shouting.

The chip company has launched a new system, codenamed Pegasus, which it says will be able to handle Level 5 driverless vehicles – the highest level of automation. And we’re talking pretty much right now. Most people are talking 2020 or 2021 at the earliest for notable steps forward, but Nvidia has said the Pegasus chip will be available to automotive partners in the second half of 2018.

The system will pair two of Nvidia’s Xavier system-on-a-chip processors with two next-generation GPUs featuring hardware created to accelerate deep learning and computer vision algorithms. The team claims the system will meet the enormous computational demands of autonomous driving in a computer the size of a license plate. It is a very bold claim.

“Creating a fully self-driving car is one of society’s most important endeavours – and one of the most challenging to deliver,” said Jensen Huang, Nvidia CEO.

“Driverless cars will enable new ride- and car-sharing services. New types of cars will be invented, resembling offices, living rooms or hotel rooms on wheels. Travelers will simply order up the type of vehicle they want based on their destination and activities planned along the way. The future of society will be reshaped.”

The computational demands of autonomous vehicles should not be underplayed; this complexity is the reason for relatively slow progress to date. Every car will need various high-resolution, 360-degree surround cameras and lidars to detect the surrounding environment, as well as linking directly to mapping technologies of near-perfect accuracy and factoring in thousands of scenarios of how the environment could change.

All of this has to be done almost instantaneously to ensure safety, which is causing the hold-up. No-one wants to drag around a data centre in their boot, so claims of a license-plate-sized computer which can run the car will certainly get attention. In fact, the team claims Pegasus will be able to process 320 trillion operations per second; that’s 10 times more than its predecessor.
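To put 320 trillion operations per second into some perspective, here is a rough per-frame budget; the camera count, frame rate and the idea of dividing the work evenly per frame are illustrative assumptions, not Nvidia figures:

    # Rough per-frame compute budget implied by the claimed 320 TOPS.
    # Camera count and frame rate are illustrative assumptions only.
    PEGASUS_OPS_PER_SEC = 320e12

    cameras = 8                    # hypothetical surround-camera rig
    fps = 30                       # hypothetical frame rate per camera
    frames_per_sec = cameras * fps

    ops_per_frame = PEGASUS_OPS_PER_SEC / frames_per_sec
    print(f"~{ops_per_frame / 1e9:,.0f} billion ops per camera frame")
    # ~1,333 billion operations per frame under these assumptions, the sort of
    # headroom needed to run detection, tracking and planning between frames.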

And who knows whether the Nvidia system will actually work properly in the real world. It’s all well and good making these claims, but realistically we won’t see any autonomous vehicles on the road for years, if not decades. If you think bureaucrats move slowly normally, just wait until they start to rewrite the rules of the road.

Another area to consider is whether we’ll be ready for self-driving cars in this decade. Handing over control of a vehicle is a big psychological step to take. Some people don’t like sitting in the passenger seat while someone else drives; imagine the freak-outs which will take place when the car drives itself.

This claim might put Nvidia at the front of the self-driving race, but bear in mind how far away the rest of society is from allowing Level 5 autonomous vehicles; everyone else will catch up in that timeframe.

[Image: Levels of Autonomous Driving]

Huawei joins forces with Nvidia for server fun

Huawei has started the Friday party early by unveiling the FusionServer G series heterogeneous computing platform. Break out the party hats.

Launched at Huawei Connect 2017, the FusionServer G5500 and G2500 products build out Atlas, Huawei’s intelligent cloud hardware platform, as well as its Boundless Computing strategy. We can basically hear you drooling through the computer screen.

“Today’s enterprise service applications are rapidly evolving, and the types of workloads are also diversifying. These pose tremendous challenges on the efficiency and flexibility of computing platforms. With the G Series heterogeneous computing platform, we can help our customers better meet these challenges,” said Qiu Long, President, IT Server Product Line, Huawei.

“Huawei is happy to join forces with Nvidia for comprehensive, deep collaboration in the AI computing front. We believe that both parties’ innovation power will translate into powerful GPU Accelerated Datacentre platforms to help our customers travel more smoothly and swiftly in the digital transformation journey.”

Huawei claims the G series is engineered with heterogeneous resource-pooling capabilities, allowing resources to be intelligently orchestrated based on application workloads, which it believes will enable users to derive higher computing efficiency.

Looking more specifically at the individual products, FusionServer G5500 is a heterogeneous server with a focus on data center deployment, while FusionServer G2500 is a smart video analytics server positioned for application scenarios such as safe city and smart transportation.

We couldn’t imagine a better start to the weekend.