Key Takeaways From Nvidia CEO Jensen Huang’s CES Keynote On Advancing AI At “Incredible Pace”

Nvidia CEO Jensen Huang kicked off CES 2025 on Monday evening with a 90-minute keynote showcasing the company's latest products designed to advance gaming, autonomous vehicles, robotics, and agentic AI.

Huang told thousands at the Michelob Ultra Arena in Las Vegas that artificial intelligence has been “advancing at an incredible pace.” 

“It started with perception AI — understanding images, words, and sounds. Then generative AI — creating text, images, and sound,” Huang explained, emphasizing how we’re on the cusp of entering the era of “physical AI, AI that can perceive, reason, plan, and act.”

Huang credited Nvidia GPUs and platforms with enabling this explosive transformation, driving breakthroughs across industries including gaming, robotics, and autonomous vehicles.

Several of Wall Street’s leading tech research desks attended Jensen Huang’s keynote yesterday.

Notably, Goldman Sachs tech analysts Toshiya Hari and Anmol Makkar attended the keynote speech and provided clients Tuesday morning with seven key takeaways from the presentation:

1. RTX Blackwell family: Mr. Huang introduced the GeForce RTX 50 Series Desktop and Laptop GPUs for gamers, creators, and developers based on the Blackwell architecture. Supported by AI-driven rendering (i.e. AI will boost frame rates by generating three frames for every one rendered frame; a short arithmetic sketch follows these takeaways), the RTX 5090 will deliver 2x the performance of the RTX 4090, while the RTX 5070 at $549 will boast performance similar to the RTX 4090 at $1,599. The GeForce RTX 5090 GPU will feature 92 billion transistors with 3,352 AI TOPS of computing power.

2. Three scaling laws: consistent with his message on the company’s earnings call in November, Mr. Huang highlighted three scaling laws that will drive demand for accelerated computing going forward: a) pre-training scaling (i.e. more compute applied to more data driving better-quality models), b) post-training scaling (i.e. use of reinforcement learning to improve the quality of output) and c) test-time scaling or reasoning (i.e. models developing reasoning and thinking capabilities).

3. Blackwell in full production: in contrast to some investor concerns, Mr. Huang stressed that Blackwell-powered systems were in full production and that every Cloud Service Provider had Blackwell-powered systems up and running. He also discussed the need to drive down the cost of compute as models including OpenAI’s o1 and o3 and Google’s Gemini Pro continue to increase in complexity (i.e. developing reasoning and thinking skills), and highlighted Blackwell’s 4x better performance per watt and 3x better performance per dollar relative to Hopper (illustrated in a brief sketch after these takeaways). Importantly, we expect Nvidia to innovate and, in turn, deliver lower cost per compute unit on a consistent basis for the foreseeable future, with the near-term drivers being the introduction of Blackwell Ultra in 2H25 and Rubin in 2026.

4. Nvidia Llama Nemotron Language Foundation Models: Mr. Huang announced Nvidia Llama Nemotron language foundation models, based on Meta’s Llama LLMs and optimized for agentic AI for enterprises. These models are designed to help developers create and deploy custom AI agents to assist in a broad range of use cases, including fraud detection, customer support, and inventory management optimization. The Llama Nemotron model family will be available in three sizes to provide options for deployments at different scales: a) Nano, cost-optimized for real-time applications with low latency, b) Super, designed for high-throughput use cases, and c) Ultra, designed for highest-accuracy, datacenter-scale applications.

5. Nvidia Cosmos: Nvidia also announced Cosmos, a comprehensive platform consisting of “world foundation models,” tokenizers, and data processing tools aimed at enabling physical AI systems such as autonomous vehicles and humanoid robots. While developing these systems is incredibly expensive, both in monetary terms and in data/testing intensity, Cosmos (which NVDA has made available under an open model license) democratizes the development process through synthetic data for training and evaluation. Use cases for Cosmos include: a) using it alongside Omniverse to generate the possible future outcomes of an AI model’s actions and select the best path, b) enabling developers to easily find specific training scenarios, like snowy road conditions or warehouse congestion, in video data, or c) generating photo-real videos from controlled 3D scenarios created in the Omniverse platform.

6. Pursuing three robots: NVIDIA aims to enable the development of three robots, which, if successful, would be “the largest technology industry the world’s ever seen,” namely 1) Agentic AI, 2) Self-driving cars, and 3) Humanoid Robots. A critical step in developing these technologies, specifically humanoid robots, is the processing of imitation data. While this tends to be a laborious process, NVIDIA seeks to relieve this bottleneck with synthetically generated motions from Omniverse, which allow imitation training to occur independent of physical human demonstration and thus speed up training significantly.

7. Nvidia Project DIGITS: Bucking the trend of supercomputers growing ever larger, NVIDIA unveiled Project DIGITS, a desk-sized system built around the GB10 Grace Blackwell Superchip (Grace CPU plus Blackwell GPU, designed with support from MediaTek), boasting 128GB of coherent memory and up to 4TB of NVMe storage. The system can run LLMs of up to 200bn parameters and delivers 1 petaflop of AI compute at FP4 precision, while maintaining access to the extensive NVIDIA AI software and cloud suite (a quick memory-footprint check follows these takeaways). Ultimately, DIGITS puts supercompute capabilities in the hands of researchers and developers to train and run inference on models without relying on off-premise/cloud AI accelerator clusters. Per Nvidia, DIGITS will be made available during CY2Q25, with pricing beginning at $3,000.
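
The frame-generation claim in takeaway 1 is easy to sanity-check with simple arithmetic. Below is a minimal sketch in Python; the 60 fps baseline is a hypothetical example, and the three-generated-frames-per-rendered-frame ratio and GPU prices are taken from the summary above.

```python
# Back-of-the-envelope illustration of the frame-generation math in takeaway 1.
# Assumption (illustrative, not a spec sheet): for every natively rendered
# frame, the GPU inserts three AI-generated frames.

def effective_fps(rendered_fps: float, ai_frames_per_rendered: int = 3) -> float:
    """Displayed frame rate when each rendered frame is padded with AI-generated frames."""
    return rendered_fps * (1 + ai_frames_per_rendered)

native_fps = 60.0  # hypothetical game rendering natively at 60 fps
print(f"{native_fps:.0f} fps rendered -> {effective_fps(native_fps):.0f} fps displayed")  # 240 fps

# Price-performance comparison quoted above: RTX 5070 at $549 vs RTX 4090 at $1,599,
# at similar performance.
print(f"Price ratio at similar performance: {1599 / 549:.1f}x")  # ~2.9x
```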
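
Similarly, the Blackwell-versus-Hopper ratios in takeaway 3 translate directly into energy and hardware cost for a fixed workload. The sketch below uses only the 4x performance-per-watt and 3x performance-per-dollar figures quoted above; the baseline energy and cost numbers are placeholders, not real deployment figures.

```python
# Illustrative translation of the ratios cited in takeaway 3.
PERF_PER_WATT_GAIN = 4.0    # Blackwell vs Hopper, per the keynote summary
PERF_PER_DOLLAR_GAIN = 3.0  # Blackwell vs Hopper, per the keynote summary

hopper_energy_kwh = 1000.0  # hypothetical energy to serve a fixed workload on Hopper
hopper_cost_usd = 1000.0    # hypothetical hardware cost for the same workload

blackwell_energy_kwh = hopper_energy_kwh / PERF_PER_WATT_GAIN    # 250.0
blackwell_cost_usd = hopper_cost_usd / PERF_PER_DOLLAR_GAIN      # ~333.3

print(f"Same workload on Blackwell: ~{blackwell_energy_kwh:.0f} kWh, ~${blackwell_cost_usd:.0f}")
```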
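
Finally, the 200bn-parameter figure in takeaway 7 is consistent with DIGITS' 128GB of memory once weights are stored at FP4, i.e. 4 bits (half a byte) per parameter. A rough check, ignoring KV cache, activations, and other runtime overhead:

```python
# Sanity check of the 200bn-parameter claim against DIGITS' 128GB of memory.
# FP4 stores each weight in 4 bits = 0.5 bytes; all runtime overhead is ignored.
params = 200e9
bytes_per_param_fp4 = 0.5
weight_gb = params * bytes_per_param_fp4 / 1e9

print(f"FP4 weights alone: {weight_gb:.0f} GB of the 128 GB available")  # 100 GB
```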

The analysts reiterated a “Buy” rating on Nvidia, maintaining a 12-month price target of $165.

Nvidia shares surged to a record high of around $152 ahead of Huang’s keynote speech on Monday evening. By 10:30 ET on Tuesday, shares had pulled back 2.5% on the day to trade around $145. Notably, shares faced heavy resistance above $140 for much of late 2024.

CES will draw over 150,000 attendees and over 4,500 exhibitors through Saturday. 

Tyler Durden
Tue, 01/07/2025 – 11:45
