Dell and Hugging Face collaborate to streamline the deployment of Large Language Models (LLMs).

Today, nearly every enterprise is actively exploring the potential of large language models (LLMs) and generative AI for their business operations. However, akin to the early days of cloud computing and big data analytics, numerous challenges persist. Organizations grapple with questions such as where to begin when implementing this intricate technology, how to safeguard the security and privacy of their proprietary data, and how to manage the resource-intensive process of fine-tuning.

In response, Dell and Hugging Face have unveiled a new partnership aimed at addressing these hurdles: simplifying the deployment of customized LLMs on-premises and enabling enterprises to fully harness the capabilities of this rapidly evolving technology.

According to Matt Baker, SVP for Dell AI strategy, generative AI, and AI more broadly, is poised to have a transformative impact. However, he acknowledges that while generative AI is a hot topic, it remains a complex and daunting technology for many organizations.

As part of this partnership, Dell and Hugging Face will establish a dedicated Dell portal within the Hugging Face platform. This portal will include custom containers, scripts, and technical documentation designed for deploying open-source models on Hugging Face using Dell servers and data storage systems. Initially available for Dell PowerEdge servers through the APEX console, the service will eventually expand to include Precision and other Dell workstation tools. Over time, the portal will release updated containers optimized for Dell infrastructure to support new-generation AI use cases and models.

Jeff Boudier, head of product at Hugging Face, emphasizes the importance of open-source solutions for organizations to take control of their AI initiatives and become builders, rather than mere users.

Dell’s partnership with Hugging Face is part of its broader effort to establish a leadership position in generative AI. The company recently added ObjectScale XF960, an S3-compatible, all-flash appliance tailored for AI and analytics workflows, to its ObjectScale lineup. Additionally, Dell has expanded its generative AI portfolio from initial-stage inferencing to encompass model customization, tuning, and deployment.

Despite the potential benefits of generative AI, there are significant challenges in its adoption by enterprises. These challenges include complexity, closed ecosystems, time-to-value, vendor reliability and support, ROI management, and data security concerns. Organizations are wary of exposing their sensitive data while leveraging it for insights and process automation. According to Dell research, 83% of enterprises prefer on-premises or hybrid implementations, particularly when dealing with their most valuable intellectual property assets.

The new Dell Hugging Face portal will feature curated sets of models selected based on their performance, accuracy, use cases, and licenses. Organizations will have the flexibility to choose their preferred model and Dell configuration and seamlessly deploy them within their infrastructure. Use cases span various domains, including marketing and sales content generation, chatbots, virtual assistants, and software development.

Dell’s differentiation lies in its ability to fine-tune models comprehensively, offering enterprises the best configurations for their specific needs. Importantly, no customer data is exchanged with public models, preserving data privacy and ownership. Once a model has been fine-tuned, it becomes the organization’s exclusive asset.

Enterprises currently experimenting with generative AI often pair off-the-shelf LLMs with retrieval augmented generation (RAG). RAG supplements a model with retrieved knowledge, such as an organization's internal documents, so that responses are grounded in relevant data rather than relying on the model's training alone. Dell aims to simplify the fine-tuning process itself by providing containerized tools based on parameter-efficient techniques like LoRA and QLoRA.
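To illustrate why parameter-efficient techniques like LoRA matter here, the sketch below shows the core idea in plain NumPy: rather than updating a large frozen weight matrix, training touches only two small low-rank factors. This is an illustrative toy, not Dell's or Hugging Face's actual tooling; the dimensions and names are hypothetical.

```python
import numpy as np

# LoRA's core idea: keep the pretrained weight W (d_out x d_in) frozen and
# train two small factors B (d_out x r) and A (r x d_in), with r << d_in.
# The adapted layer computes W @ x + (alpha / r) * B @ A @ x.
rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 1024, 1024, 8, 16

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable low-rank factor
B = np.zeros((d_out, r))                    # trainable, zero-initialized so the
                                            # adapter starts as a no-op

def adapted_forward(x):
    """Forward pass through the LoRA-adapted layer."""
    return W @ x + (alpha / r) * (B @ (A @ x))

full_params = W.size            # what a full fine-tune would update
lora_params = A.size + B.size   # what LoRA actually trains

print(f"full fine-tune params: {full_params:,}")
print(f"LoRA trainable params: {lora_params:,}")
print(f"reduction: {full_params / lora_params:.0f}x")
```

Even in this small example, LoRA trains 64x fewer parameters than a full fine-tune, which is what makes customizing a model on a single on-premises server practical; QLoRA pushes this further by quantizing the frozen base weights.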

Ultimately, the goal is for every enterprise to develop its own vertical, leveraging its specific data combined with AI models to generate customized outcomes. This approach aligns with the concept of verticalization in AI, where domain-specific models are created by combining proprietary data with AI capabilities to cater to the unique needs of each organization.