Build for the next wave of AI with purpose-built TPUs

As generative AI models grow in complexity, the computational demands of both training and inference are pushing traditional systems to their limits. Join "Powering AI inference at scale: a deep dive into Ironwood TPUs" to learn about the specialized hardware and software engineered to meet these challenges.


We will cover how to:

  • Accelerate the entire AI workflow: see how Ironwood's architecture is purpose-built for both massive-scale training and high-throughput production serving, giving you a strategic advantage
  • Solve for inference at scale: understand Ironwood's inference-first design, engineered to remove technical bottlenecks for your most complex, high-volume models
  • Enable sustainable scale: learn how a 2x improvement in performance per watt addresses the economic challenges of large-scale AI, maximizing your infrastructure investment
  • Integrate seamlessly with your ecosystem: discover how the co-designed software stack makes Ironwood's power accessible to your teams' existing workflows in JAX, PyTorch, and vLLM

Speaker Details

Leo Leung
Director, Cloud Compute
Google Cloud


Rose Zhu
Sr. Product Manager
Google Cloud

Event Topic

Artificial Intelligence, Technology

Relevant Audiences

All State and Local Government, All Federal Government

Other Agency

Other Federal Agencies
Event Type
Virtual / Online
Event Subtype
Webinar / Webcast
When
Thu, Dec 11, 2025 | 1:00 pm - 1:40 pm ET
Registration Cost
Complimentary
Organizer
Google Cloud