Channel: iTWire - Business IT - Networking, Open Source, Security & Tech News

Snowflake and NVIDIA partner to boost AI data applications

Snowflake has announced a new collaboration with NVIDIA, helping customers and partners rapidly build bespoke AI solutions leveraging NVIDIA AI.

Snowflake CEO Sridhar Ramaswamy took to the stage at Snowflake Summit 2024 to announce the news, alongside NVIDIA founder and CEO Jensen Huang, who attended from Taipei by video call.

The two companies announced their collaboration, in which Snowflake has adopted NVIDIA AI Enterprise software to integrate NeMo Retriever microservices into Snowflake Cortex AI, Snowflake’s fully managed large language model (LLM) and vector search service.

This means organisations of all sizes can seamlessly connect custom models to diverse business data and deliver highly accurate responses, faster and at lower cost than otherwise possible.


Cortex AI is Snowflake's new AI engine, embedded into each Snowflake deployment and region, Ramaswamy explained. It helps business users interact with data in natural language and quickly find information across enterprise documents with fully managed hybrid search. Cortex AI can be fine-tuned, supports a wide range of LLMs through serverless functions, and provides a secure no-code AI development studio.
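As an illustration of the serverless LLM functions described above, Cortex AI exposes SQL functions such as `SNOWFLAKE.CORTEX.COMPLETE`, which can be called from Python via a Snowflake connection. The sketch below only builds the SQL text; the connection details in the trailing comment are placeholders, not values from the article.

```python
# Sketch (assumption: the SNOWFLAKE.CORTEX.COMPLETE(model, prompt) SQL
# function, callable through any Snowflake connection).
def build_cortex_query(model: str, prompt: str) -> str:
    """Build a Cortex COMPLETE call, escaping single quotes for SQL."""
    escaped = prompt.replace("'", "''")
    return f"SELECT SNOWFLAKE.CORTEX.COMPLETE('{model}', '{escaped}')"

query = build_cortex_query(
    "snowflake-arctic",
    "Summarise last quarter's support tickets in one paragraph.",
)
print(query)

# With real credentials, the query would be executed along these lines:
# import snowflake.connector
# conn = snowflake.connector.connect(account="...", user="...", password="...")
# answer = conn.cursor().execute(query).fetchone()[0]
```

Because the function is invoked as plain SQL, the same call works from worksheets, stored procedures, or any driver, which is what makes the capability "embedded into each Snowflake deployment".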

Snowflake has collaborated with NVIDIA to make data processing cheaper and faster with accelerated compute, Ramaswamy said, to help customers turn generative AI from aspiration to reality.

"Generative AI is costly, so the faster and the lower cost, the better," Huang said. "Faster AI means faster time to market."

A big part of generative AI is training the model; "Training time is vitally important," he said. "And the work in getting training time down is all to do with the infrastructure across the entire stack. We want to make it as quick as possible, for a better user experience, but also to reduce cost."

“Pairing NVIDIA’s full stack accelerated computing and software with Snowflake’s state-of-the-art AI capabilities in Cortex AI is game-changing,” Ramaswamy said. “Together, we are unlocking a new era of AI where customers from every industry and every skill level can build custom AI applications on their enterprise data with ease, efficiency, and trust.”

“Data is the essential raw material of the AI industrial revolution,” Huang said. “Together, NVIDIA and Snowflake will help enterprises refine their proprietary business data and transform it into valuable generative AI.”

NVIDIA AI Enterprise software capabilities to be offered in Cortex AI include:

  • NVIDIA NeMo Retriever: Provides information retrieval with high accuracy and powerful performance for enterprises building retrieval-augmented generation-based AI applications within Cortex AI. 
  • NVIDIA Triton Inference Server: Provides the ability to deploy, run, and scale AI inference for any application on any platform.

In addition, NVIDIA NIM inference microservices – a set of pre-built AI containers and part of NVIDIA AI Enterprise – can be deployed right within Snowflake as a native app powered by Snowpark Container Services. The app enables organisations to easily deploy a series of foundation models right within Snowflake.

In related news, Snowflake's new Arctic, the most open enterprise-grade LLM, is now fully supported with NVIDIA TensorRT-LLM software, providing users with highly optimised performance. Arctic is also available as an NVIDIA NIM inference microservice, allowing more developers to access Arctic's efficient intelligence.

The state-of-the-art Snowflake Arctic LLM was launched in April 2024 and trained on NVIDIA H100 Tensor Core GPUs. It is available as an NVIDIA NIM, so users can get started with Arctic in seconds. The Arctic NIM hosted by NVIDIA is live on the NVIDIA API catalogue for developer access using free credits, and will also be offered as a downloadable NIM, giving users even more choice to deploy the most open enterprise LLM available on their preferred infrastructure.
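NIM endpoints on the NVIDIA API catalogue follow an OpenAI-compatible chat-completions shape. The sketch below constructs such a request without sending it; the base URL and the `snowflake/arctic` model identifier are assumptions drawn from NVIDIA's catalogue conventions, so check the Arctic entry for the exact values.

```python
import json

# Assumption: the NVIDIA API catalogue's OpenAI-compatible endpoint.
BASE_URL = "https://integrate.api.nvidia.com/v1"

def build_chat_request(model: str, user_message: str) -> dict:
    """Build an OpenAI-style chat-completions payload for a NIM endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "max_tokens": 256,
    }

payload = build_chat_request("snowflake/arctic",
                             "What is Snowflake Cortex AI?")
print(json.dumps(payload, indent=2))

# With an API key, this payload would be POSTed to
# f"{BASE_URL}/chat/completions" with an "Authorization: Bearer <key>" header.
```

The same payload shape works whether the NIM is hosted on the catalogue or downloaded and run on a developer's own infrastructure, which is the portability point the announcement emphasises.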


