There’s something deeply satisfying about running an AI model on your phone with zero internet connection. No API calls, no cloud latency, no wondering what happens to your data. That’s exactly what [Google AI Edge Gallery](https://developers.googleblog.com/on-device-function-calling-in-google-ai-edge-gallery/) delivers, and the latest update makes it way more interesting than a simple on-device chatbot.
The app landed on [iOS](https://apps.apple.com/us/app/google-ai-edge-gallery/id6749645337) on February 25th, joining its existing [Android version](https://play.google.com/store/apps/details?id=com.google.ai.edge.gallery&hl=en_US), and quickly picked up [254 upvotes on Product Hunt](https://www.producthunt.com/products/google-ai-edge-gallery-2) by February 28th. The buzz is warranted. What caught my attention isn’t the chat feature — we’ve all seen on-device chatbots before — but the new “Mobile Actions” and “Tiny Garden” demos that show off actual agentic behavior running entirely on your phone.
The technical piece behind this is [FunctionGemma](https://huggingface.co/google/functiongemma-270m-it), a model Google fine-tuned from Gemma 3 at just 270 million parameters. That’s absurdly small. And yet it can parse something like “Show me the San Francisco airport on a map” and translate that into the right function call to open your maps app. According to Google’s own benchmarks, accuracy jumps from 58% on the base model to 85% after task-specific fine-tuning. For a model that fits comfortably on a mid-range phone, those numbers are hard to ignore.
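To make the function-calling flow concrete: the post doesn't show FunctionGemma's actual output format, but the general pattern is that the model emits a structured call (name plus arguments) which the app routes to a device action. Here's a minimal, purely illustrative sketch, where the action names, JSON shape, and `dispatch` helper are all my own assumptions, not the model's real schema:

```python
import json

# Hypothetical registry of device actions the model is allowed to call.
# Names and parameters are illustrative, not FunctionGemma's actual schema.
ACTIONS = {
    "open_map": lambda location: f"Opening maps at {location}",
    "set_timer": lambda minutes: f"Timer set for {minutes} minutes",
}

def dispatch(model_output: str) -> str:
    """Parse a JSON function call emitted by the model and route it
    to the matching handler."""
    call = json.loads(model_output)
    handler = ACTIONS[call["name"]]
    return handler(**call["arguments"])

# Suppose the model turned "Show me the San Francisco airport on a map" into:
result = dispatch('{"name": "open_map", "arguments": {"location": "SFO"}}')
print(result)  # Opening maps at SFO
```

The hard part, and the thing the fine-tuning buys you, is the model reliably producing that structured call from messy natural language; the routing itself is trivial once the output is well-formed.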
Beyond the agentic stuff, the app packs Audio Scribe for transcription and translation, a Prompt Lab for code generation and summarization, and multi-turn AI Chat. They’ve also added support for third-party models like Qwen2.5, Phi-4-mini, and DeepSeek-R1, which is a smart move — it turns the app into a playground rather than a locked-down Google-only showcase. The whole thing is [open source on GitHub](https://github.com/google-ai-edge/gallery), so you can dig into the implementation or even fine-tune FunctionGemma yourself using their [published guide](https://ai.google.dev/gemma/docs/mobile-actions).
What makes this more than a tech demo is the timing. On-device AI has been one of those “next year” promises for a while now, but models are finally small enough and hardware is finally fast enough for it to actually work in practice. Google shipping a polished app with real use cases — offline function calling, audio processing, multi-model support — feels like the moment this stops being theoretical. If you’re building anything that touches mobile AI, this is worth installing and poking around with.
