The engineers at NVIDIA claim to have harnessed the power of Thor. Actually, I made that up. Though much of NVIDIA’s press release about its new autonomy chip, codenamed Thor, is equally arbitrary (or it was just written by a mischievous Loki to confuse us all). In all fairness, one thing I’ve noticed all chip makers are guilty of – be it NVIDIA, Tesla, Apple, or probably others – is that they present the performance of their new products using metrics that make it impossible to really understand how much more useful the new product is compared to what is already on the market or sold by competitors. But more on that later.
First, the announcement: NVIDIA has revealed that in 2025 it will launch a new autonomy chip called Thor. This chip will offer a staggering 2,000 teraFLOPS of computing power and can replace several kinds of processors currently used in a vehicle, including those for infotainment, various vehicle controls, autonomous driving, ADAS, and more.
With the chip shortages that automakers have suffered for what feels like years now (oh, it’s been years), this is a welcome change that could, in theory, solve some problems. At the same time, it’s also a pretty sneaky move, as it will force NVIDIA’s automaker clients to drop most of their other chip suppliers or pay a lot of money for redundancy elsewhere. You see, just to be safe, redundancy is necessary – that way, if a chip fails, the car can continue to function normally. In other words, each car will probably need two of these Thor chips, something NVIDIA has taken into account with its “NVLink-C2C chip interconnect technology.” However, I’m sure NVIDIA is more than happy for all of its automotive customers to have to buy two of these expensive powerhouse chips for every car they plan to sell.
How fast is this chip really?
All these years, NVIDIA has benchmarked the performance of its autonomy chips in TOPS (tera operations per second) when performing tasks in INT8 (an 8-bit integer format). This time it decided to benchmark in TFLOPS (tera floating-point operations per second) when executing FP8 (an 8-bit floating-point format). I’d say they’re comparing apples to oranges, but that would understate the fact that they changed the scales at both ends of the graph. Instead, I’d say they basically switched from one race track to another with a very different shape, and also decided to switch from an internal combustion engine vehicle to an electric one.
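To make the INT8-versus-FP8 distinction concrete, here is a toy Python decoder for the widely used E4M3 FP8 layout (1 sign bit, 4 exponent bits, 3 mantissa bits, exponent bias 7). This is an illustrative sketch of the format family only, not NVIDIA’s actual hardware implementation: both formats use 8 bits, but they describe very different sets of numbers, which is exactly why an INT8 TOPS figure and an FP8 TFLOPS figure are not directly comparable.

```python
# Toy comparison of two 8-bit formats (illustrative only, not NVIDIA's
# implementation). INT8 is a two's-complement integer covering -128..127
# in uniform steps of 1. FP8 E4M3 trades that uniform spacing for a much
# wider dynamic range, topping out at 448.

def decode_e4m3(byte: int) -> float:
    """Decode one E4M3 FP8 byte to a float (finite values only;
    the NaN encodings of the real format are ignored here)."""
    sign = -1.0 if (byte >> 7) & 1 else 1.0
    exp = (byte >> 3) & 0xF
    mant = byte & 0x7
    if exp == 0:                        # subnormal: no implicit leading 1
        return sign * (mant / 8) * 2 ** (1 - 7)
    return sign * (1 + mant / 8) * 2 ** (exp - 7)

int8_range = (-128, 127)                # uniform integer grid
fp8_max = decode_e4m3(0b0_1111_110)     # largest finite E4M3 value: 448.0
print(int8_range, fp8_max)
```

The point: an “operation” in one format is not the same work as an “operation” in the other, so switching units between product generations makes generation-over-generation comparisons meaningless.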
NVIDIA also published a graph full of discrepancies to show how much more powerful Thor is compared to its previous chips. The graph plots the performance of the old chips in INT8 TOPS, but for Thor, instead of giving an INT8 TOPS figure, it shows the same 2,000 number that we know was measured in FP8 TFLOPS. Likewise for Orin, the bottom scale says 250 while the scale on the left seems to put it around 500. Whoever approved this press release and graph in NVIDIA’s PR and marketing departments deserves a stern talking-to, and maybe should attend some form of processor terminology seminar.
The only thing in NVIDIA’s press release that comes close to an apples-to-apples comparison is the transistor count. Thor will have 77 billion transistors, versus the 17 billion in Orin (the chip NVIDIA finally started shipping not long ago).
What about Tesla?
Most of you are probably wondering how this compares to Tesla. Tesla’s HW3 autonomy chip can handle 144 TOPS, and in the tiniest footnote in the history of footnotes – a Q&A comment during the Dojo supercomputer announcement – Elon Musk said that HW4 will have 4× the power of HW3, which would equate to around 576 TOPS. Suffice it to say, when it comes to theoretical benchmarks of computing power, I think we can safely place HW4 somewhere between NVIDIA’s current Orin autonomy chip and its future all-in-one Thor chip. That is not a defeat, however – it is probably a much more efficient allocation of the necessary resources.
In practice, there are many things about Thor that worry me. The first is that NVIDIA did not specify how much power the chip will use. I also worry about how efficiently neural nets running on Thor’s general-purpose hardware will perform compared to the dedicated NPU design Tesla has gone for, and about how many automakers will have the technical know-how to even take advantage of a magnificent chip like this. You can certainly count out the likes of Ford, VW, GM, and many other legacy automakers – unless there are some very drastic changes in employment and leadership before 2025.
One of NVIDIA’s current customers, XPeng, is among the few with the software engineering talent required to work with NVIDIA and take advantage of such a sophisticated chip. In a quite extraordinary programming feat, XPeng was already able to introduce City NGP (autonomy software with functionality similar to Tesla’s FSD) on just a 20 TOPS NVIDIA Xavier chip – something even Tesla was unable to achieve. Now that XPeng is switching to Orin, I can hardly imagine when it will max out the 250 TOPS that chip gives the company.
If anything, NVIDIA should focus on two things: making its chips more power efficient, and writing more software itself so that automakers can actually use its chips. If NVIDIA really wants to succeed, it will need to become more like Intel’s Mobileye, which offers a full suite of autonomy hardware and software that companies like NIO are more than happy to take advantage of*.
*Editor’s note: I’m not sure NVIDIA is very different or very far from that, based on my last interview with Danny Shapiro, Senior Director of Automotive at NVIDIA, but you can listen to our conversation via one of the podcast embeds below and judge for yourself. It’s also about time for another conversation, as that interview is from February 2021!
Do you appreciate CleanTechnica’s originality and cleantech news coverage? Consider becoming a CleanTechnica member, supporter, technician or ambassador – or patron on Patreon.