On March 20th, a spokesperson for Meta, the parent company of Facebook, told the foreign press that the company expects to receive Nvidia's latest flagship artificial intelligence chip later this year, as part of Nvidia's first batch of shipments.

Nvidia, the chip giant whose hardware powers most cutting-edge artificial intelligence work, announced the B200 “Blackwell” chip at its annual developer conference on Monday, stating that the B200 can speed up tasks such as serving answers from chatbots by up to thirty times.

Nvidia's Chief Financial Officer, Colette Kress, told financial analysts on Tuesday that "we will be going to market later this year," but also indicated that shipments of the new GPU would not ramp up until 2025.

Social media giant Meta, one of Nvidia’s largest customers, has already purchased hundreds of thousands of Nvidia’s previous-generation chips. Meta CEO Mark Zuckerberg disclosed in January that the company planned to have about 350,000 of those earlier chips, known as H100s, in inventory by the end of this year. According to the spokesperson’s latest statement, Meta will receive the newly launched AI chips later this year, and they will be part of Nvidia’s first shipment.

Previously, Zuckerberg said in a statement on Monday that Meta plans to use Blackwell to train the company’s Llama models. The company is currently training its third-generation model, Llama 3, on two GPU clusters announced last week, each containing about 24,000 H100 GPUs.

The Meta spokesperson added that Meta plans to continue using these clusters to train Llama 3 and will use Blackwell for future generations of the model.