IBM and Meta announce AI Alliance with 50+ founding members and collaborators to advance responsible, open AI.
IBM and Meta today unveiled the AI Alliance, launched with more than 50 founding members and collaborators worldwide. The coalition includes prominent entities such as AMD, Anyscale, CERN, Cerebras, Cleveland Clinic, Cornell University, Dartmouth, Dell Technologies, EPFL, ETH, Hugging Face, Imperial College London, Intel, INSAIT, Linux Foundation, MLCommons, the MOC Alliance operated by Boston University and Harvard University, NASA, NSF, Oracle, Partnership on AI, Red Hat, Roadzen, ServiceNow, Sony Group, Stability AI, University of California, Berkeley, University of Illinois, University of Notre Dame, The University of Tokyo, and Yale University.
The AI Alliance’s primary objective, according to the group, is to “promote open innovation and open science in AI.” It aims to build an open community that empowers developers and researchers to accelerate responsible AI innovation while upholding scientific rigor, trust, safety, security, diversity, and economic competitiveness. By bringing together leading developers, scientists, academic institutions, companies, and other innovators, the AI Alliance intends to pool resources and knowledge to address safety concerns and to provide a platform for sharing and developing solutions that serve global research, development, and adoption needs.
Sriram Raghavan, Vice President of IBM Research, clarified in an interview with VentureBeat that the timing of the announcement, closely following events at OpenAI and the final negotiations of the EU AI Act, was coincidental. He emphasized the need for a more comprehensive discussion about AI, moving beyond mere concerns about risky models and potential misuse. Raghavan explained that the AI narrative should encompass open innovation and not devolve into a closed, proprietary approach or excessive regulation.
The AI Alliance, as described by Meta’s President of Global Affairs, Nick Clegg, envisions AI development happening in the open to broaden access, spur innovation, and build in safety. The Alliance brings together researchers, developers, and companies to share tools and knowledge, fostering progress in AI whether or not models are shared openly.
Raghavan highlighted the flexible, project-based approach of the AI Alliance, outlining six key areas of focus:
- Developing and deploying benchmarks, evaluation standards, tools, and resources to enable responsible AI system development globally, including a catalog of verified safety, security, and trust tools.
- Advancing the ecosystem of open foundation models with diverse modalities, including multilingual, multi-modal, and science models to address societal challenges.
- Nurturing a vibrant AI hardware accelerator ecosystem by promoting essential enabling software technology.
- Supporting global AI skills development and exploratory research, engaging the academic community to contribute to essential AI model and tool research projects.
- Creating educational content and resources to inform the public and policymakers about AI’s benefits, risks, solutions, and precision regulation.
- Launching initiatives to encourage open and responsible AI development, hosting events to explore AI use cases and showcase Alliance members’ responsible use of open AI technology for the greater good.