Mark Zuckerberg is running a familiar open source playbook in AI
Meta's Llama-series models aren't the first time the company has created problems for its competitors with open source tools.
Mark Zuckerberg is continuing his charm offensive as part of his bid to rebuild Meta’s image—and his own—as a leading innovator, with AI as the catalyst.
Meta’s (and Zuckerberg’s) goals now seem aligned with those of its newer rival OpenAI—to build toward artificial general intelligence (AGI)—but pursued under Meta’s classic strategy: make it open source and see what happens. That strategy already worked once, and it's well on its way to working a second time with the Llama-series models.
The difference this time is that Meta’s sights appear set on one of the apex goals of the field, AGI, rather than just building open source tools and integrating the best parts into Meta’s existing products.
“Today I’m bringing Meta's two AI research efforts closer together to support our long term goals of building general intelligence, open sourcing it responsibly, and making it available and useful to everyone in all of our daily lives,” Zuckerberg said in his video posted to Facebook. “It’s become clearer that the next generation of services require building full general intelligence… This technology is so important and the opportunities are so great that we should open source and make it as widely available as we responsibly can so that everyone can benefit.”
(Zuckerberg, in addition to posting an update on Meta's AI strategy directly to Facebook, gave an extensive interview to The Verge’s Alex Heath illuminating parts of Meta’s quasi-pivot to being The AI Company.)
And Meta has a pretty long—and relatively successful—history with open source in deep learning. While it hasn’t traditionally been considered the hottest landing spot in AI (largely because it historically lacked the kind of access to compute that Google enjoyed), Meta and its advanced AI research division, FAIR, have continued to push out new projects and releases. And then there is Meta’s crown jewel in deep learning: PyTorch.
With PyTorch, Meta effectively made powerful deep learning tools freely available and worked closely with researchers and developers in the community to improve them. At this point it has effectively supplanted TensorFlow, Google’s first-mover framework, as the preferred developer tool (though, to be sure, there is still plenty of TensorFlow usage). The strategy was so successful that Meta effectively ran the same playbook with LLaMA and its non-backronym follow-up, Llama 2. Zuckerberg said in his update, unsurprisingly, that the company is currently working on Llama 3.
Meta is essentially aiming to be a steward of the open source AI community along the same lines as Hugging Face, focusing on the free flow of information and data in service to developing next-generation AI tools and then finding some way to monetize it on the back end. Indeed, Llama 2 is commercially available up to a point, though it doesn’t seem to be positioned as a major money-maker for now.
And while Hugging Face and Meta have both become integral parts of the lives of developers, there’s a whole universe of open source projects looking to do the same—including LangChain, LlamaIndex, Ollama, and others. The open source community and those startups are quickly developing and deploying techniques—like quantization, which lets models run on less powerful devices, and reasoning engines—that only advance the pace of development of AI models.
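To make the quantization idea concrete: the core trick is storing weights as small integers plus a scale factor, trading a little precision for a lot of memory. Here's a minimal sketch of symmetric int8 post-training quantization in plain NumPy—illustrative only, not how any particular library (llama.cpp, bitsandbytes, etc.) implements it:

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor quantization: map float weights to int8 in [-127, 127]."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from int8 values and the scale."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 4)).astype(np.float32)
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# int8 storage is 4x smaller than float32, and the per-weight
# rounding error is bounded by half the quantization step (scale / 2).
assert np.abs(weights - restored).max() <= scale / 2 + 1e-6
```

Production quantizers go further (per-channel scales, 4-bit formats, calibration data), but the memory-for-precision trade shown here is the same one that lets Llama-class models run on a laptop.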
The success of LLaMA, originally just a research release and not technically commercially available, essentially led to the emergence of Meta’s generative AI efforts and the growth of its team, colloquially referred to as its GenAI Org. And as part of the charm offensive, Zuckerberg alluded to a partial reorganization that is bringing FAIR “closer together” with its GenAI Org.
But all this is effectively in service to one of Zuckerberg’s long-standing dreams: to be seen as a visionary and innovator, and not just helming a company with a bunch of ads and copycats. And the growth of Meta’s ambitions seems set to dovetail with a future powered by customized local models that live on every device and have no need to use APIs.
FAIR, once a separate division in Meta working on AI tools that could potentially benefit Meta and keep talent out of the hands of Google, became Meta’s chance at reclaiming that grand innovator status. And while Meta has long relied on an increasingly walled garden with its interconnected apps, ironically that bet may depend on the open source community.
Ingratiating with open source
Google effectively made the first move in open source AI with the release of TensorFlow in 2015. TensorFlow was one of the first tools to make advanced machine learning tools more broadly accessible to developers, creating a new kind of open source community that continued to grow and mature.
PyTorch came out shortly after, in 2016, and quickly caught fire among developers. It had a number of key quality-of-life and developer features missing in TensorFlow, which Google would lag behind in adding. Developers also commonly praised its documentation and examples, which made onboarding to the new framework much easier.
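The biggest of those quality-of-life differences was PyTorch's define-by-run (eager) execution: the computation graph is built as ordinary Python runs, so you can print intermediate values, branch on data, and step through a model in a debugger—none of which early graph-mode TensorFlow allowed. A tiny example of what that feels like:

```python
import torch

# Define-by-run: each line executes immediately and produces a concrete
# tensor, while autograd records the operations behind the scenes.
x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
y = (x ** 2).sum()   # y is a real value you can print or inspect right here
y.backward()         # autograd replays the recorded ops to compute gradients

# d/dx of sum(x^2) is 2x
print(x.grad)        # tensor([2., 4., 6.])
```

In early TensorFlow the same computation required building a static graph and running it inside a session, which made this kind of line-by-line inspection impossible—a gap Google later closed with eager execution in TensorFlow 2.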
PyTorch has become a critical part of practically all modern AI development, to the point that it has simultaneously created a near-permanent reliance on Nvidia’s CUDA framework—and, as a result, on Nvidia hardware in general. The explosion of interest in AI has allowed both companies to grow immensely in tandem by the unfortunate universal grading rubric of publicly traded companies: the stock price. Meta has nearly reached the $1 trillion club in market cap, and Nvidia crossed that mark some time ago.
The release of the Llama-series models altered the calculus in language model development, demonstrating that you could get a considerable amount of performance out of smaller models, particularly when customized with additional data and instructions through a process called fine-tuning. There were early signs of this with customized versions of BERT (surprise, another tool out of Google), which still sees widespread usage. But the Llama series essentially brought that process mainstream and jump-started the development of AI models.
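The essence of fine-tuning is simple: instead of training from scratch, you start from pretrained weights and continue gradient descent on a small, task-specific dataset. Here's a toy NumPy sketch of that idea with a linear model standing in for an LLM—all names and data are illustrative, not any real training recipe:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pretrained" weights: the starting point, not random from the task's view
w_pretrained = rng.normal(size=3)

# A small fine-tuning set describing the new behavior we want
X = rng.normal(size=(32, 3))
w_task = np.array([1.0, -2.0, 0.5])   # the target behavior (hidden in practice)
y = X @ w_task

# Continue training from the pretrained weights with plain SGD on MSE loss
w = w_pretrained.copy()
lr = 0.1
for _ in range(200):
    grad = 2 * X.T @ (X @ w - y) / len(X)   # gradient of mean squared error
    w -= lr * grad

# After a few hundred cheap steps, the model tracks the new task closely
assert np.abs(w - w_task).max() < 1e-3
```

Real LLM fine-tuning layers a lot on top of this (tokenized text, cross-entropy loss, parameter-efficient methods like LoRA), but the core loop—pretrained weights plus a short run of gradient descent on new data—is exactly what made customized Llama variants so cheap to produce.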
And its other framework, React, also seems poised to be a part of how people interact with language models. Startups like Vercel are offering AI-oriented developer toolkits primed around React. One potential path here, described to me by an industry source, was a point where React could be generated on the spot through the use of language models—building customized user experiences in real time.
Today’s newsletter is brought to you by Felicis
Felicis is known for its bets in leading AI and infrastructure startups including Runway, Weights & Biases, MotherDuck, and Supabase, among many others.
Felicis is hosting a special event for founders on February 1: a discussion on go-to-market strategies for AI companies. The event will be moderated by Viviana Faga, a former marketing leader at Salesforce and Yammer and a current Felicis GP who led the firm's deals in Runway, MotherDuck, and Supabase. She’ll be joined by Maggie Hott, go-to-market leader at OpenAI, and other panelists with distinct perspectives on both what’s similar and what’s unique about AI revenue strategies compared to SaaS.
The event will only be open to founders and high-level operators of AI and ML companies and will be completely off-the-record.