Yejin Choi, Senior Research Director at the Allen Institute for AI, discusses the viability of Small Language Models (SLMs) as a cost-effective alternative to Large Language Models (LLMs) for AI startups, presenting innovative approaches at Databricks’ Data + AI Summit.
Building AI startups is often hindered by high computational costs, primarily the expense of training large language models (LLMs). However, Yejin Choi, Senior Research Director at the Allen Institute for AI, explored the potential of small language models (SLMs) at Databricks’ Data + AI Summit, discussing ways to develop them without using GPUs.
One “mission impossible” Choi outlined centered on summarizing sentences without reinforcement learning from human feedback (RLHF), large-scale pre-training, or supervised datasets. Another challenge was summarizing documents under similar constraints. Choi’s team also worked on revitalizing older statistical n-gram models so they stay relevant in the era of neural language models.
Choi emphasized that SLMs do not need the heavy compute that LLMs demand, which can eliminate the need for GPUs altogether. She illustrated this with Infini-gram, an engine created by researchers at the University of Washington and the Allen Institute for AI, which runs on standard CPU compute power.
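To make the underlying idea concrete, here is a minimal, purely illustrative sketch of the “∞-gram” principle: instead of fixing n, back off to the longest suffix of the context that appears in the corpus and read next-token probabilities off the counts of its continuations. This naive in-memory version is an assumption for intuition only, not Infini-gram’s actual implementation, which uses suffix-array indexes to answer such queries over trillion-token corpora on CPUs.

```python
from collections import Counter

def infinite_gram_next_token(corpus_tokens, context_tokens):
    """Estimate a next-token distribution from the longest context suffix
    that occurs at least once in the corpus (toy, brute-force version)."""
    # Try progressively shorter suffixes of the context, longest first.
    for start in range(len(context_tokens)):
        suffix = context_tokens[start:]
        continuations = Counter()
        # Count every token that follows an occurrence of this suffix.
        for i in range(len(corpus_tokens) - len(suffix)):
            if corpus_tokens[i:i + len(suffix)] == suffix:
                continuations[corpus_tokens[i + len(suffix)]] += 1
        if continuations:  # longest matching suffix found
            total = sum(continuations.values())
            return {tok: c / total for tok, c in continuations.items()}
    # Empty or unseen context: fall back to unigram frequencies.
    counts = Counter(corpus_tokens)
    total = sum(counts.values())
    return {tok: c / total for tok, c in counts.items()}

corpus = "the cat sat on the mat and the cat sat on the rug".split()
print(infinite_gram_next_token(corpus, "the cat sat on the".split()))
# -> {'mat': 0.5, 'rug': 0.5}
```

The point of the sketch is that the whole computation is counting, which is why such an engine can serve predictions from ordinary CPUs rather than GPUs.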
Choi argued that AI models are only as good as the data they are trained on, suggesting that synthetically generated data could be crucial in the future. She pointed to examples like Meta AI’s Segment Anything Model and Microsoft’s ‘Textbooks Are All You Need,’ which demonstrate that high-quality synthesized data can enable smaller models to compete effectively with much larger ones.
Choi’s overall contention was that with the right synthesized data and a focus on abstraction rather than scale, SLMs can be a viable and less resource-intensive alternative to LLMs.