MABEL: On-Device AI
A fully on-device AI translation and chat app powered by Google’s Gemma 3n 2B LLM. No servers. No cloud. No internet. Just your phone.
The Gemma 3n Impact Challenge
I co-created MABEL with my cousin, Ahmed Elshami, as our submission for the Google Gemma 3n Impact Challenge. We started this project out of curiosity and a shared passion for building something meaningful.
The name MABEL was inspired by the Tower of Babel — where language once divided people. MABEL does the opposite: it brings understanding back through accessible, private, AI-powered communication. This isn't just another API wrapper; it's a complete on-device AI implementation ensuring total privacy and offline capabilities.
Vision in Action
Point the camera at any object, tap to segment it, and let Gemma identify and describe it — all processed entirely on-device.
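To make that flow concrete, here is a minimal Swift sketch of the tap-to-segment-then-describe pipeline. It assumes Apple's Vision foreground-instance mask (iOS 17+) for the segmentation step and a hypothetical `GemmaDescriber` wrapper for the on-device model; MABEL's actual segmentation backend and prompt wording aren't documented here.

```swift
import CoreGraphics
import CoreImage
import Vision

/// Hypothetical interface to the on-device Gemma runtime; the app's real
/// wrapper around MediaPipe is not shown in this write-up.
protocol GemmaDescriber {
    func describe(image: CGImage, prompt: String) throws -> String
}

enum SegmentationError: Error { case noSubjectFound }

/// Lift the photographed subject out of a camera frame, then ask Gemma to
/// identify and describe it. Everything runs locally.
@available(iOS 17.0, *)
func describeTappedObject(in frame: CGImage,
                          using gemma: GemmaDescriber) throws -> String {
    // Vision's foreground-instance mask finds salient objects in the frame.
    let request = VNGenerateForegroundInstanceMaskRequest()
    let handler = VNImageRequestHandler(cgImage: frame, options: [:])
    try handler.perform([request])

    guard let observation = request.results?.first else {
        throw SegmentationError.noSubjectFound
    }

    // For brevity this masks every detected instance; picking only the
    // instance under the user's tap would read `observation.instanceMask`
    // at the tapped pixel instead of using `allInstances`.
    let maskedBuffer = try observation.generateMaskedImage(
        ofInstances: observation.allInstances,
        from: handler,
        croppedToInstancesExtent: true)

    let ciImage = CIImage(cvPixelBuffer: maskedBuffer)
    guard let cropped = CIContext().createCGImage(ciImage, from: ciImage.extent) else {
        throw SegmentationError.noSubjectFound
    }

    // Placeholder prompt, not MABEL's actual wording.
    return try gemma.describe(
        image: cropped,
        prompt: "Identify this object and describe it in two sentences.")
}
```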
Built From the Ground Up
We engineered MABEL as a native app on each platform, using MediaPipe's GenAI framework to run Google's Gemma 3n 2B model directly on the device. A minimal loading sketch follows the platform list below.
- iOS (Live): Fully functional app built in SwiftUI, with SwiftData handling conversation state and thread history (see the schema sketch after this list).
- Android (In Development): Currently being built in Kotlin with Jetpack Compose. It will feature the same private, multimodal experience via MediaPipe.
- Voice Recognition: Integrated Whisper for robust offline speech-to-text transcription.
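For the iOS build, loading the bundled model and generating a reply would look roughly like this minimal sketch, which follows MediaPipe's documented LLM Inference API for Swift. The `GemmaEngine` class, the model filename, and the token limit are placeholders rather than MABEL's real configuration, and the exact call names should be checked against the current SDK.

```swift
import Foundation
import MediaPipeTasksGenAI

/// Illustrative wrapper around MediaPipe's LLM Inference task.
final class GemmaEngine {
    private let llm: LlmInference

    init() throws {
        // The converted Gemma model ships inside the app bundle, so no
        // network access is ever needed.
        guard let modelPath = Bundle.main.path(forResource: "gemma-3n-E2B-it-int4",
                                               ofType: "task") else {
            throw CocoaError(.fileNoSuchFile)
        }
        let options = LlmInference.Options(modelPath: modelPath)
        options.maxTokens = 512   // placeholder limit
        llm = try LlmInference(options: options)
    }

    /// Single-shot, blocking generation; a chat UI would prefer the
    /// streaming variant so tokens appear as they are produced.
    func reply(to prompt: String) throws -> String {
        try llm.generateResponse(inputText: prompt)
    }
}
```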
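The thread history mentioned for iOS could be modeled in SwiftData along these lines; the `ChatThread`/`ChatMessage` schema is a hypothetical sketch, not the app's actual data model.

```swift
import Foundation
import SwiftData

/// One conversation thread, holding its messages in order of creation.
@Model
final class ChatThread {
    var title: String
    var createdAt: Date
    @Relationship(deleteRule: .cascade, inverse: \ChatMessage.thread)
    var messages: [ChatMessage]

    init(title: String, createdAt: Date = .now, messages: [ChatMessage] = []) {
        self.title = title
        self.createdAt = createdAt
        self.messages = messages
    }
}

/// A single user or model turn within a thread.
@Model
final class ChatMessage {
    var role: String          // "user" or "model"
    var text: String
    var timestamp: Date
    var thread: ChatThread?

    init(role: String, text: String, timestamp: Date = .now) {
        self.role = role
        self.text = text
        self.timestamp = timestamp
    }
}
```

A `ModelContainer` configured with these types at launch then lets any SwiftUI view load and persist threads via `@Query`.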
Multimodal & Contextual
MABEL supports text, voice, and camera inputs. A major feature is "Chat about this": press and hold a past translation to ask contextual follow-up questions and explore usage examples, as sketched below.
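As a sketch of how "Chat about this" might work under the hood, a stored translation can be folded into the prompt for the follow-up question. The `Translation` type and the prompt template are assumptions; only the idea of reusing a past translation as context comes from the app.

```swift
/// Hypothetical record of a completed translation; field names are illustrative.
struct Translation {
    let sourceText: String
    let sourceLanguage: String
    let translatedText: String
    let targetLanguage: String
}

/// Build a contextual prompt that pairs a past translation with the user's
/// follow-up question. The wording is a placeholder.
func chatAboutThisPrompt(for translation: Translation, question: String) -> String {
    """
    You previously translated the \(translation.sourceLanguage) phrase \
    "\(translation.sourceText)" into \(translation.targetLanguage) as \
    "\(translation.translatedText)".
    Answer the user's question about this translation, giving usage examples \
    where helpful.

    Question: \(question)
    """
}
```

The assembled prompt would then go through the same on-device generation call sketched earlier (for example `GemmaEngine.reply(to:)`).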
