Groq Secures $640M in Series D Funding for AI Inference Expansion
AI inference platform Groq has raised $640 million in its Series D funding round, aiming to enhance its capabilities, scale its operations, and bolster its leadership team.
Groq, an AI inference platform known for its exceptional processing speed, has raised $640 million in a Series D funding round at a valuation of $2.8 billion. The round, whose proceeds will significantly expand the company’s capabilities and scale, was led by funds and accounts managed by BlackRock Private Equity Partners. Other notable investors include Neuberger Berman, Type One Ventures, Cisco Investments, Global Brain’s KDDI Open Innovation Fund III, and Samsung Catalyst Fund.
Groq, renowned for its fast AI inference platform, is experiencing tremendous demand from developers seeking superior speed and efficiency in AI computation. To support this growth, the company plans to deploy an additional 100,000 language processing units (LPUs) into GroqCloud, significantly expanding its compute capacity.
In addition to the financial boost, Groq announced key leadership changes to drive its mission forward. Stuart Pann, an industry veteran formerly with HP and Intel, has joined as Chief Operating Officer. Furthermore, Yann LeCun, VP & Chief AI Scientist at Meta, will bring his extensive expertise in the AI field to Groq as a technical advisor.
Investor Confidence and Market Potential
The enthusiasm from investors signals confidence in Groq’s position within the AI compute market. Samir Menon, Managing Director at BlackRock Private Equity Partners, highlighted the significant opportunity Groq has to meet market demand with its vertically integrated solution. Similarly, Marco Chisari of the Samsung Semiconductor Innovation Center applauded Groq’s disruptive architecture and market-leading performance.
Groq’s CEO and founder, Jonathan Ross, expressed his vision of democratising AI resources. He emphasised the importance of making advanced AI inference capabilities available to a broad range of developers, not just large tech companies. This vision is set to materialise with the new influx of funding, which will also drive talent acquisition and expand Groq’s engineering and operational teams.
Growing Developer Base and Expanding Infrastructure
The GroqCloud platform currently supports over 360,000 developers building AI applications with models such as Meta’s Llama 3.1, OpenAI’s Whisper Large V3, Google’s Gemma, and Mistral’s Mixtral. The Series D investment will help scale GroqCloud’s capacity, enhance its tokens-as-a-service (TaaS) offering, and support the integration of new models and features.
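For context, developers typically reach these models on GroqCloud through an OpenAI-style chat completions API via the Groq Python SDK. The sketch below is illustrative only and not part of the announcement; the model identifier and prompt are assumptions, and current model IDs should be checked against GroqCloud’s own listings.

```python
# Minimal sketch: querying a model hosted on GroqCloud with the Groq Python SDK
# (pip install groq). Model ID and prompt are illustrative assumptions.
import os

from groq import Groq

# The SDK reads the API key explicitly here; GROQ_API_KEY must be set.
client = Groq(api_key=os.environ["GROQ_API_KEY"])

completion = client.chat.completions.create(
    model="llama-3.1-8b-instant",  # example ID for a Llama 3.1 model on GroqCloud
    messages=[
        {"role": "user", "content": "Summarise what an LPU is in one sentence."}
    ],
)

# Print the model's reply text.
print(completion.choices[0].message.content)
```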
Meta’s CEO, Mark Zuckerberg, in a letter endorsing open-source AI, commended Groq for its effective low-latency, low-cost inference capabilities that support the latest AI models. This nod from a major player like Meta underscores the broader industry recognition and trust in Groq’s solutions.
To meet the soaring demand from developers and enterprises, Groq aims to deploy over 108,000 LPUs by the end of Q1 2025. This will mark the largest deployment of AI inference compute by any non-hyperscaler. This scaling effort is supported by a manufacturing partnership with GlobalFoundries.
Global Commercial Efforts and Strategic Partnerships
Leading Groq’s commercial initiatives is Mohsen Moazami, President of International at Groq and former leader of Emerging Markets at Cisco, who is spearheading efforts to build AI compute centres globally. These efforts include partnerships with Aramco Digital and Earth Wind & Power to expand AI infrastructure, particularly in the MENA (Middle East and North Africa) region.
Tareq Amin, CEO of Aramco Digital, highlighted the transformative potential of their partnership with Groq, aiming to build a substantial AI inference service infrastructure to meet both local and global AI computation demands.
Future Prospects
Groq’s architecturally innovative LPUs are designed with a software-first approach, optimised for the unique demands of AI. This design philosophy has positioned Groq to swiftly introduce new models to developers, maintaining its competitive edge in speed. The fresh investment is set to accelerate the development of the next two generations of LPUs, promising further advancements in AI inference technology.
Morgan Stanley & Co. LLC served as the exclusive Placement Agent for Groq during this funding round.
As Groq continues to scale its operations and enhance its technology, it anticipates leading the charge in the evolving landscape of AI inference, driven by its mission to support a diverse range of developers in creating cutting-edge AI applications.