AI Scenarios in Which Small Language Models Outshine Large Language Models

Although increasing scale has been the core driving trend in the development of large language models (LLMs), a contrarian trend has recently emerged: the development of small language models (SLMs). While LLMs have traditionally dominated language model development, SLMs offer potential solutions to key challenges identified by functional leaders, including budget constraints, data protection, privacy concerns, and risk mitigation associated with AI. In this complimentary Gartner IT webinar, we compare SLMs to LLMs across four areas: generic language understanding and generation, in-context learning capabilities, computational requirements for serving, and computational requirements for fine-tuning. We then discuss five scenarios in which SLMs outshine LLMs: multiple task-specialized models, high user interaction volumes, organizational language models, sensitive data or regulatory restrictions, and edge use cases. You will walk away from this session with answers to your vital questions, a copy of the research slides and recommended actions to help you achieve your goals.

  • Understand what small language models are
  • Determine how small language models compare to large language models
  • Explore scenarios where small language models outshine large language models

Contact us with questions about viewing this webinar.

Event Topic

IT, Security, Technology

Relevant Audiences

All State and Local Government, All Federal Government

Other Agency

Other Federal Agencies
Event Type
Virtual / Online
Event Subtype
Webinar / Webcast
When
Thu, Sep 12, 2024 | 10:00 am - 11:00 am ET
Registration Cost
Complimentary
Organizers
Gartner, Inc.