Nvidia CEO Jensen Huang announces new AI chips: ‘We need bigger GPUs’

Nvidia’s announcement of a new generation of artificial intelligence chips and software at its developer conference in San Jose underscores its determination to maintain its position as a leading supplier for AI companies. Since the AI boom took off in late 2022, fueled in part by OpenAI’s ChatGPT, Nvidia’s share price has surged five-fold and its total sales have more than tripled. The company’s high-end server GPUs are essential for training and deploying large AI models, and tech giants such as Microsoft and Meta have spent billions of dollars acquiring them.

Nvidia unveiled its new generation of AI graphics processors, named Blackwell, with the first chip, GB200, slated to ship later this year. The introduction of Blackwell aims to address the demand for more powerful chips, as companies and software makers are still struggling to acquire current-generation “Hopper” H100s and similar chips. Nvidia CEO Jensen Huang emphasized the need for bigger GPUs, stating that while Hopper is excellent, there is a demand for even more powerful options.

In addition to hardware advancements, Nvidia introduced revenue-generating software called NIM, designed to simplify AI deployment, offering customers another incentive to opt for Nvidia chips amid increasing competition in the market.

Nvidia executives highlighted the company’s transition from solely a chip provider to a platform provider, likening its role to that of Microsoft or Apple, where other companies can build software on Nvidia’s platform. Huang emphasized that Blackwell represents more than just a chip—it’s a platform for innovation and development.

Nvidia’s enterprise VP, Manuvir Das, explained that while Nvidia’s primary product has traditionally been its GPUs, the company now also focuses on developing commercial software to facilitate the utilization of these GPUs in various applications. While Nvidia continues to provide GPUs for both deployment and development purposes, the emphasis has shifted towards building a robust commercial software business.

Das highlighted Nvidia’s new software, which aims to make it simple to run programs on any Nvidia GPU, including older generations that are better suited to deploying AI models than to developing them. With this software, called NIM, developers can make their models runnable across all Nvidia GPUs, thereby reaching a wider audience.

Nvidia updates its GPU architecture every two years, delivering significant performance enhancements with each iteration. The latest architecture, Blackwell, promises a substantial leap in performance, particularly for AI applications. For example, the GB200 chip delivers 20 petaflops of AI performance, up from the 4 petaflops offered by its predecessor, the H100.
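
Taken at face value, those figures work out to a fivefold jump in raw AI throughput from one generation to the next. The short Python sketch below simply restates the numbers quoted above as a back-of-envelope comparison, not an independent benchmark.

    # Back-of-envelope check using only the figures quoted above:
    # 20 petaflops for the GB200 versus 4 petaflops for the H100.
    h100_pflops = 4
    gb200_pflops = 20

    speedup = gb200_pflops / h100_pflops
    print(f"Quoted generational speedup: {speedup:.0f}x")          # -> 5x
    print(f"GB200 throughput in FLOPS: {gb200_pflops * 1e15:.2e}")  # 1 petaflop = 1e15 FLOPS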

One notable feature of the Blackwell chip is its dedicated “transformer engine,” designed specifically to efficiently run transformer-based AI models, such as those powering ChatGPT. This specialized engine enhances the chip’s capability to handle complex AI workloads.
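
For context, the transformer models the engine targets are built around the attention operation, which computes softmax(Q K^T / sqrt(d)) V over query, key, and value matrices. The NumPy sketch below illustrates that core computation only as an example of the workload class; it says nothing about how the transformer engine itself is implemented.

    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        """Core transformer operation: softmax(Q K^T / sqrt(d)) V."""
        d = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d)                      # pairwise token affinities
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)     # softmax over the keys
        return weights @ V                                 # weighted sum of values

    # Toy example: 4 tokens with 8-dimensional representations
    rng = np.random.default_rng(0)
    Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
    print(scaled_dot_product_attention(Q, K, V).shape)     # (4, 8)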

In terms of manufacturing, the Blackwell GPU is a large part that joins two separately manufactured dies into a single chip, fabricated by TSMC. Nvidia also offers the GB200 NVL72 server, which combines 72 Blackwell GPUs with other Nvidia components tailored for AI model training, giving AI companies the processing power and infrastructure needed to train larger and more intricate models.

Amazon, Google, Microsoft, and Oracle have all announced plans to offer access to the GB200 chip through their respective cloud services. The GB200 configuration involves pairing two B200 Blackwell GPUs with one Arm-based Grace CPU. Nvidia revealed that Amazon Web Services (AWS) intends to construct a server cluster featuring 20,000 GB200 chips.
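
Taking those numbers at face value, the reported AWS cluster would be enormous. The rough tally below uses only the figures cited in this article (two B200 GPUs per GB200, 20 petaflops per GB200) and should be read as an illustration of scale rather than an official specification.

    # Rough tally of the reported AWS plan, using only the figures cited in this article.
    gb200_units = 20_000          # GB200 chips in the planned cluster
    b200_per_gb200 = 2            # each GB200 pairs two B200 GPUs with one Grace CPU
    pflops_per_gb200 = 20         # petaflops per GB200, as quoted earlier

    total_b200_gpus = gb200_units * b200_per_gb200
    total_pflops = gb200_units * pflops_per_gb200

    print(f"B200 GPUs in the cluster: {total_b200_gpus:,}")          # 40,000
    print(f"Aggregate AI compute: {total_pflops:,} petaflops "
          f"(~{total_pflops / 1_000:,.0f} exaflops)")                # ~400 exaflops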

According to Nvidia, the system equipped with GB200 chips can deploy a model with up to 27 trillion parameters. This capability far exceeds the parameter count of even the largest existing models, such as GPT-4, which reportedly has 1.7 trillion parameters. The ability to train and deploy models with such a vast number of parameters is anticipated to unlock new capabilities and advancements in artificial intelligence research and applications.
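
For a sense of scale, simply storing that many parameters is a challenge in itself. The estimate below assumes common numeric precisions of one byte per parameter (8-bit) and two bytes per parameter (16-bit); these are illustrative assumptions, not figures from Nvidia.

    # Rough memory footprint of a 27-trillion-parameter model at common precisions.
    # The bytes-per-parameter figures are standard assumptions, not numbers from Nvidia.
    params = 27e12   # 27 trillion parameters, as cited above

    for label, bytes_per_param in [("8-bit weights", 1), ("16-bit weights", 2)]:
        terabytes = params * bytes_per_param / 1e12
        print(f"{label}: ~{terabytes:,.0f} TB just to hold the parameters")
    # -> roughly 27 TB at 8-bit and 54 TB at 16-bit, before activations or optimizer state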

As for pricing, Nvidia has not disclosed the cost of the GB200 chip or the systems it is integrated into. Analyst estimates suggest that Nvidia’s previous-generation Hopper-based H100 chip ranges in price from $25,000 to $40,000 per chip, with entire systems costing up to $200,000.

Nvidia has introduced a new product called NIM, sold as part of its Nvidia AI Enterprise software subscription. NIM makes it easier to use older Nvidia GPUs for inference, the process of running already-trained AI models, so companies can keep putting their existing inventory of Nvidia GPUs to work. Inference typically requires less computational power than the initial training of a model. With NIM, companies can run their own AI models rather than buying access to AI services from companies like OpenAI.
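
As a rough illustration of what running a model through such a microservice might look like, the sketch below posts a chat request to a locally hosted endpoint. The URL, port, model name, and response format are assumptions chosen for illustration, not documented details from Nvidia’s announcement.

    import requests

    # Hypothetical local endpoint: NIM-style microservices expose an HTTP API for
    # chat/completion requests, but the URL, port, model name, and response shape
    # below are illustrative assumptions rather than documented specifics.
    NIM_URL = "http://localhost:8000/v1/chat/completions"

    payload = {
        "model": "example-llm",   # placeholder model identifier
        "messages": [
            {"role": "user", "content": "Summarize the Blackwell announcement in one sentence."}
        ],
        "max_tokens": 128,
    }

    response = requests.post(NIM_URL, json=payload, timeout=60)
    response.raise_for_status()
    print(response.json()["choices"][0]["message"]["content"])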

The aim of this strategy is to encourage customers who buy Nvidia-based servers to subscribe to Nvidia AI Enterprise, which costs $4,500 per GPU per year for a license. Nvidia will work with AI companies such as Microsoft and Hugging Face to ensure their AI models are tuned to run on Nvidia chips; developers can then use NIM to deploy those models efficiently on their own servers or on cloud-based Nvidia servers without a lengthy configuration process.
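
At that rate, the subscription cost scales linearly with the size of a GPU fleet. The quick calculation below uses the quoted $4,500-per-GPU figure with hypothetical fleet sizes.

    # Annual licensing cost at the quoted $4,500 per GPU per year.
    license_per_gpu_per_year = 4_500          # USD, as cited above
    fleet_sizes = [8, 64, 1_024]              # hypothetical fleet sizes for illustration

    for gpus in fleet_sizes:
        annual_cost = gpus * license_per_gpu_per_year
        print(f"{gpus:>5} GPUs -> ${annual_cost:,} per year")
    # e.g. a single 8-GPU server works out to $36,000 per year in licenses alone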

According to Nvidia, NIM will also facilitate AI deployment on GPU-equipped laptops, enabling AI tasks to be performed locally instead of relying solely on cloud-based servers.
