Chat with RTX brings personalized AI chatbots to Nvidia-powered Windows PCs


Nvidia is launching “Chat with RTX,” a technology demo that lets users build local, personalized AI chatbots on Windows PCs powered by its RTX GPUs. The release is part of Nvidia’s continued effort to make AI on its GPUs a widespread, accessible utility.

“Chat with RTX” lets users run advanced generative AI directly on their own devices, showcasing what retrieval-augmented generation (RAG) and TensorRT-LLM software can do. Because processing happens on the local GPU rather than in a data center, AI conversations stay private.

Chatbots, now a staple of everyday digital communication around the world, traditionally depend on cloud servers powered by Nvidia GPUs. The “Chat with RTX” technology demonstration changes that dynamic by running generative AI locally, on Nvidia GeForce RTX 30 Series GPUs (or better) with at least 8GB of VRAM.

Nvidia describes “Chat with RTX” as more than a generic chatbot: it is a personalized AI companion that users can tailor with their own content. Running on local GeForce-equipped Windows PCs, it offers a fast and private way to engage with generative AI.

The tool combines RAG, TensorRT-LLM software, and Nvidia RTX acceleration to deliver quick, relevant responses from local data. Users point it at their local files, which become a knowledge base for open-source large language models such as Mistral or Llama 2, and can then retrieve information from that content with natural language queries.
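To make the retrieval step concrete, here is a minimal, illustrative sketch of a local RAG loop over a folder of text files. It is not Nvidia’s implementation: the sentence-transformers embedding model, the folder name, and the prompt format are assumptions chosen to keep the example self-contained, and the final prompt would be handed to a locally running model such as Mistral or Llama 2 (in Chat with RTX, served through TensorRT-LLM).

```python
# Illustrative local RAG sketch -- not Chat with RTX's actual code.
from pathlib import Path
from sentence_transformers import SentenceTransformer, util

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # small local embedding model (assumption)

# 1. Index: read local text files and split them into fixed-size chunks.
chunks = []
for path in Path("my_documents").glob("*.txt"):  # hypothetical folder of user files
    text = path.read_text(encoding="utf-8", errors="ignore")
    chunks += [text[i:i + 1000] for i in range(0, len(text), 1000)]
chunk_vectors = embedder.encode(chunks, convert_to_tensor=True)

def build_prompt(question: str, top_k: int = 3) -> str:
    """Retrieve the most relevant chunks and assemble a prompt for a local LLM."""
    # 2. Retrieve: rank chunks by semantic similarity to the question.
    q_vec = embedder.encode(question, convert_to_tensor=True)
    hits = util.semantic_search(q_vec, chunk_vectors, top_k=top_k)[0]
    context = "\n\n".join(chunks[hit["corpus_id"]] for hit in hits)
    # 3. Generate: this prompt would be passed to a local model
    #    (e.g. Mistral or Llama 2 running under TensorRT-LLM).
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```

The generation step still runs on the GPU; retrieval only narrows the context the model sees, which is why answers can stay grounded in the user’s own documents.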

Additionally, “Chat with RTX” distinguishes itself by incorporating multimedia content, notably the transcripts of YouTube videos and playlists, into its knowledge base, enriching the context its answers can draw on.
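As a sketch of how such video content could join the same local index, the snippet below fetches a video transcript and chunks it like the file text above. Chat with RTX handles this internally; the third-party youtube-transcript-api package and the placeholder video ID here are assumptions for illustration only.

```python
# Illustrative transcript ingestion -- not Chat with RTX's actual code.
from youtube_transcript_api import YouTubeTranscriptApi

def youtube_chunks(video_id: str, chunk_chars: int = 1000) -> list[str]:
    """Fetch a video's transcript and split it into indexable text chunks."""
    segments = YouTubeTranscriptApi.get_transcript(video_id)
    full_text = " ".join(segment["text"] for segment in segments)
    return [full_text[i:i + chunk_chars] for i in range(0, len(full_text), chunk_chars)]

# The resulting chunks can be embedded and searched exactly like the
# file chunks in the previous sketch.
video_chunks = youtube_chunks("dQw4w9WgXcQ")  # placeholder video ID
```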

The requirements for “Chat with RTX” are a GeForce RTX 30 Series GPU or better with at least 8GB of VRAM, Windows 10 or 11, and the latest Nvidia GPU drivers.
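For readers who want to confirm the VRAM requirement before downloading, a small check follows; it assumes a PyTorch installation with CUDA support and is independent of Chat with RTX itself.

```python
# Quick local check of the 8GB VRAM minimum (assumes PyTorch with CUDA installed).
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    vram_gb = props.total_memory / 1024**3
    status = "meets" if vram_gb >= 8 else "is below"
    print(f"{props.name}: {vram_gb:.1f} GB VRAM {status} the 8GB minimum")
else:
    print("No CUDA-capable Nvidia GPU detected.")
```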

Developers can explore accelerating large language models with RTX GPUs through the TensorRT-LLM RAG developer project on GitHub. Nvidia is also running a Generative AI on Nvidia RTX developer contest until February 23, with prizes including a GeForce RTX 4090 GPU and a pass to the Nvidia GTC conference.

“Chat with RTX” is available now as a free download, offering a local, private alternative to cloud-based AI chatbots.