Inside Nano Banana 🍌 and the Future of Vision-Language Models with Oliver Wang - #748
The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)
Today, we're joined by Oliver Wang, principal scientist at Google DeepMind and tech lead for Gemini 2.5 Flash Image, better known by its code name "Nano Banana." We dive into the development and capabilities of this newly released frontier vision-language model, beginning with the broader shift from specialized image generators to general-purpose multimodal agents that can use both visual and textual data for a variety of tasks. Oliver explains how Nano Banana can generate and iteratively edit images while maintaining consistency, and how its integration with Gemini's world knowledge expands creative and practical use cases. We discuss the tension between aesthetics and accuracy, the relative maturity of image models compared to text-based LLMs, and scaling as a driver of progress. Oliver also shares surprising emergent behaviors, the challenges of evaluating vision-language models, and the risks of training on AI-generated data. Finally, we look ahead to interactive world models and VLMs that may one day "think" and "reason" in images.
The complete show notes for this episode can be found at https://twimlai.com/go/748.