Show HN: Aide – A customizable Android assistant (voice, choose your provider)

Aide is an Android app that replaces your default digital assistant. It registers as the default assistant, so the corner swipe and power-button hold summon it instead of Google Assistant. I wanted something other than Google, but the ChatGPT and Claude integrations can't do anything on-device, so I built this.

Bring your own key for Claude, OpenAI, or any OpenAI-compatible endpoint (Ollama, LM Studio, vLLM). Keys are encrypted on-device; conversations go straight to your provider.

Free: chat on any provider, multi-provider switching, web search + URL fetching as built-in tools, custom system prompt, markdown.

Pro ($6.99 launch week, $9.99 after, one-time): voice input + streaming TTS, voice-first overlay, photo/file attachments, device actions (SMS/calendar/alarms, every intent confirmed), screen context, Home Assistant integration.

aideassistant.com

7 points by yincrash a day ago | 5 comments

subscribed 17 hours ago

Looks cool, but I think your maths isn't mathing :)

It's the second day of the first week (per Google Play), and it already shows $9.99 (£8.99 in the Play Store).

I'm not saying it's expensive (feature-wise it's awesome), I'm saying it's inconsistent :)))

BTW, is there any chance of a trial key (even a one-day one)? My phone runs GrapheneOS, so I'd need to check that everything I want works (or that I can make it work).

Maybe a beta programme?

newsdeskx a day ago

Does this work with purely local models through Ollama, or do you still need the Ollama server running on another machine? I've been looking for something that actually works offline for basic voice commands.

  • yincrash a day ago

    Still needs a server. If you had a model your device could handle, you could run the server locally and point Aide at the localhost URL.
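The localhost setup works because Ollama exposes an OpenAI-compatible API, so any client that takes a base URL can talk to it. A minimal sketch of the request shape (the default port 11434, the /v1 path, and the llama3.2 model name are assumptions about a typical Ollama install, not anything Aide-specific):

```python
# Sketch: building a chat-completion request for any OpenAI-compatible
# endpoint, here pointed at a local Ollama server. Only the base URL
# changes between Ollama, LM Studio, vLLM, or a hosted provider.
import json
import urllib.request

# Ollama's default port and OpenAI-compatibility path (assumed defaults).
OLLAMA_BASE_URL = "http://localhost:11434/v1"

def build_chat_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    """Build a POST to the standard /chat/completions route."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request(OLLAMA_BASE_URL, "llama3.2", "hello")
print(req.full_url)  # -> http://localhost:11434/v1/chat/completions
```

From the phone's perspective, "localhost" would be the machine running the server, so in practice you'd use the server's LAN IP as the base URL.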

    • subscribed 17 hours ago

      New phones can run Gemma 3 quants pretty nicely; it's a surprisingly good model. Google's Edge Gallery also offers a few models to try.

      • subscribed 12 hours ago

        Missed the window to edit: I agree that ideally I'd have a tiny local MoE-style model that could gauge the complexity of each request, route simple ones to an instantly available local agent, and send everything else out (to one of several remote models).
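The routing idea above can be sketched with a placeholder heuristic standing in for the tiny local model. The keyword list and word-count threshold here are illustrative assumptions, not anything Aide implements:

```python
# Hypothetical request router: a cheap local check decides whether a
# request stays on-device or is handed off to a remote model. In the
# real design this check would itself be a small local model.
LOCAL = "local"
REMOTE = "remote"

# Assumed set of intents simple enough for an on-device agent.
SIMPLE_INTENTS = {"timer", "alarm", "volume", "brightness", "call", "text"}

def route(request: str) -> str:
    words = request.lower().split()
    # Short requests that mention a known simple intent stay local.
    if len(words) <= 6 and SIMPLE_INTENTS & set(words):
        return LOCAL    # instantly available on-device agent
    return REMOTE       # hand off to one of several cloud models

print(route("set a timer for ten minutes"))                             # local
print(route("summarize this article about quantum error correction"))   # remote
```

The appeal of the design is latency: the common short commands never leave the device, while the router only needs to be good at one narrow classification task.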