Show HN: Chirp – Local Windows dictation with ParakeetV3, no executable required
I’ve been working in fairly locked‑down Windows environments where I’m allowed to run Python, but not install or launch new `.exe` files. In addition, the built‑in Windows dictation options are blocked (the only good one isn’t local anyway). At the same time, I really wanted accurate, fast dictation without sending audio to a cloud service, and without needing a GPU. Most speech‑to‑text setups I tried either required special launchers, GPU access, or were awkward to run day‑to‑day.
To scratch that itch, I built Chirp, a Windows dictation app that runs fully locally, uses NVIDIA’s ParakeetV3 model, and is managed end‑to‑end with `uv`. If you can run Python on your machine, you should be able to run Chirp—no additional executables required.
Under the hood, Chirp uses the Parakeet TDT 0.6B v3 ONNX bundle. ParakeetV3 has accuracy in the same ballpark as Whisper‑large‑v3 (average WER ~5.0 vs ~4.9 on the open ASR leaderboard), but it’s much faster and happy on CPU.
The flow is:

- One‑time setup that downloads and prepares the ONNX model: `uv run python -m chirp.setup`
- A long‑running CLI process: `uv run python -m chirp.main`
- A global hotkey that starts/stops recording and injects text into the active window (a minimal sketch of this loop is right below).
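To make the hotkey part of the flow concrete, here's a rough sketch of a start/stop recording loop using the third‑party `keyboard` and `sounddevice` packages. This is an illustration of the idea, not Chirp's actual code, and the hotkey shown is made up:

```python
import queue

import keyboard           # global hotkeys (assumed library, not necessarily what Chirp uses)
import numpy as np
import sounddevice as sd  # microphone capture (same caveat)

audio_q: queue.Queue = queue.Queue()
stream = None

def _on_audio(indata, frames, time, status):
    # Called by sounddevice for every audio block while the stream is running.
    audio_q.put(indata.copy())

def toggle_recording():
    global stream
    if stream is None:
        stream = sd.InputStream(samplerate=16000, channels=1, callback=_on_audio)
        stream.start()                       # mic is now live
    else:
        stream.stop(); stream.close()
        stream = None
        chunks = [audio_q.get() for _ in range(audio_q.qsize())]
        audio = np.concatenate(chunks) if chunks else np.zeros((0, 1), dtype=np.float32)
        # ...run the Parakeet ONNX model on `audio`, then inject the text into the active window...

keyboard.add_hotkey("ctrl+alt+space", toggle_recording)  # hypothetical hotkey; it's configurable
keyboard.wait()                                          # keep the CLI process alive
```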
A few details that might be interesting technically:
- Local‑only STT: Everything runs on your machine using ONNX Runtime; by default it uses CPU providers, with optional GPU providers if your environment allows.
- Config‑driven behavior: A `config.toml` file controls the global hotkey, model choice, quantization (`int8` option), language, ONNX providers, and threading. There’s also a simple `[word_overrides]` map so you can fix tokens that the model consistently mishears.
- Post‑processing pipeline: After recognition, there’s an optional “style guide” step where you can specify prompts like “sentence case” or “prepend: >>” for the final text.
- No clipboard gymnastics required on Windows: The app types directly into the focused window; there are options for clipboard‑based pasting and cleanup behavior for platforms where that makes more sense.
- Audio feedback: Start/stop sounds (configurable) let you know when the mic is actually recording.
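For the ONNX Runtime bullet above, the provider and threading setup looks roughly like this (a minimal sketch with an assumed model filename, not Chirp's actual code):

```python
import onnxruntime as ort

opts = ort.SessionOptions()
opts.intra_op_num_threads = 4          # threading knob exposed via config.toml; 4 is an arbitrary example

# CPU providers by default; a GPU provider can be listed first if the environment allows it.
providers = ["CPUExecutionProvider"]
# providers = ["CUDAExecutionProvider", "CPUExecutionProvider"]  # optional GPU path

session = ort.InferenceSession(
    "parakeet-tdt-0.6b-v3.int8.onnx",  # hypothetical filename for the quantized bundle
    sess_options=opts,
    providers=providers,
)
print(session.get_providers())         # confirms which provider actually got picked
```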
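And for the config / post‑processing bullets, here's a rough sketch of how `[word_overrides]` and a "style guide" pass could be applied to the recognized text. Section and key names here are guesses; the real schema is in the repo's example config.toml:

```python
import re
import tomllib  # stdlib in Python 3.11+

# Illustrative config snippet -- not the repo's exact schema.
CONFIG = tomllib.loads('''
[word_overrides]
"jason" = "JSON"

[style]
prepend = ""
''')

def apply_word_overrides(text: str) -> str:
    # Whole-word, case-insensitive replacement of tokens the model consistently mishears.
    for wrong, right in CONFIG["word_overrides"].items():
        text = re.sub(rf"\b{re.escape(wrong)}\b", right, text, flags=re.IGNORECASE)
    return text

def apply_style(text: str) -> str:
    # A tiny "style guide" pass: sentence case plus an optional prefix.
    text = re.sub(r"(^|[.!?]\s+)([a-z])", lambda m: m.group(1) + m.group(2).upper(), text)
    return CONFIG["style"]["prepend"] + text

print(apply_style(apply_word_overrides("here's a jason file")))  # -> Here's a JSON file
```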
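The direct‑typing bullet is essentially synthetic keystrokes sent to whatever window has focus; a minimal sketch with the third‑party `keyboard` package (again, an assumption about the library, not a statement about Chirp's internals):

```python
import keyboard  # third-party "keyboard" package; assumed here, not necessarily Chirp's dependency

def inject(text: str) -> None:
    # Type the transcription straight into the focused window, character by character.
    keyboard.write(text, delay=0.01)

inject("Hello from Chirp!")
```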
So far I’ve mainly tested this on my own Windows machines with English dictation and CPU‑only setups. There are probably plenty of rough edges (different keyboard layouts, language settings, corporate IT policies, etc.), and I’d love feedback from people who:
- Work in restricted corporate environments and need local dictation.
- Have experience with Parakeet/Whisper or ONNX Runtime and see obvious ways to improve performance or robustness.
- Want specific features (e.g., better multi‑language support, more advanced post‑processing, or integrations with their editor/IDE).
Repo is here: `https://github.com/Whamp/chirp`
If you try it, I’d be very interested in:
- CPU usage and latency on your hardware,
- How well it behaves with your keyboard layout and applications,
- Any weird failure cases or usability annoyances you run into.
Happy to answer questions and dig into technical details in the comments.
I've done something similar for Linux and Mac. I originally used Whisper and then switched to Parakeet. I much prefer Whisper after playing with both. Maybe I'm not configuring Parakeet correctly, but the transcription that comes out of Whisper is usually pretty much spot on. It automatically removes all the "umms" and all the "ahs" and it's just way more natural, in my opinion. I'm using whisper.cpp with CUDA acceleration. This whole comment is just written with me dictating to Whisper, and it's probably going to automatically add quotes correctly, there's going to be no ums, there's going to be no ahs, and everything's just going to be great.
Mind sharing your local setup for Mac?
If you don't mind a closed-source, paid app, I can recommend MacWhisper. You can select different Whisper & Parakeet models for dictation and transcription. My favorite feature is that it allows sending the transcription output to an LLM for clean-up, or basically anything you want, e.g. professional polish, translation, writing poems, etc.
I have enough RAM on my Mac that I can run smaller LLMs locally. So for me the whole thing stays local.
https://github.com/lxe/yapyap/tree/parakeet-nemo
It's been a while, so I don't know if it's going to work because of the NeMo toolkit ASR NumPy dependency issues.
I use it on Linux with whisper.cpp and it works great.
Cool use of ONNX! Fluid Inference also have great implementations of Parakeet v2/v3 in CoreML for Apple devices and OpenVINO for Intel:
https://github.com/FluidInference/FluidAudio
https://github.com/FluidInference/eddy-audio
I built something similar for macOS that is a CLI app and generates notes for you. Also has a conversational chat interface to query your notes. Funny enough, it’s also called Chirp.
https://github.com/Code-and-Sorts/chirp-ai-note-app
Is there a macOS equivalent of this?
My use case is to generate subtitles for YouTube videos (downloaded using yt-dlp). Word-level accuracy is also nice to have, because I also translate them using LLMs and edit the subtitles to better fit the translation.
I use MacWhisper[1] with local Parakeet models. It's got quite a lot of features; I myself only need the dictation.
[1] https://goodsnooze.gumroad.com/l/macwhisper
Here is the Hugging Face ASR leaderboard for those wondering how Parakeet V3 compares to Whisper Large V3:
Accuracy Average WER: Whisper-large-v3 4.91 vs Parakeet V3 5.05
Speed RTFx: Whisper-large-v3 126 vs Parakeet V3 2154
~17x faster
https://huggingface.co/spaces/hf-audio/open_asr_leaderboard
> I’m allowed to run Python, but not install or launch new `.exe` files.
> NVIDIA’s ParakeetV3 model
You can't install .exe's, but you can connect to the Internet, download and install approximately two hundred wheels (judging by uv.lock), many of which contain opaque binary blobs, including an AI model?
Why does your organization think this makes any sense?
Never said it did! Working with what I got.
btw this is my first open-source project
How does the quality compare with the Windows built-in one (Win+H), the one with online models?
I'm using that to dictate prompts; it struggles with technical terms: JSON becomes Jason, but otherwise it's fine.
In my opinion, attempting to perform live dictation is a solution that is looking for a problem. For example, the way I'm writing this comment is: I hold down a keyboard shortcut on my keyboard, and then I just say stuff. And I can say a really long thing. I don't need to see what it's typing out. I don't need to stream the speech-to-text transcription. When the full thing is ingested, I can then release my keys, and within a second it's going to just paste the entire thing into this comment box. And also, technical terms are going to be just fine with Whisper. For example, Here's a JSON file.
(this was transcribed using whisper.cpp with no edits. took less than a second on a 5090)
Yeah, Whisper has more features and is awesome if you have the hardware to run the big models that are accurate enough. The constraint here was finding the best CPU-only implementation. By no means am I wedded to or affiliated with Parakeet; it's just the best/fastest within the CPU-only hardware space.
I’ve been using Parakeet with MacWhisper for a lot of my AI coding interactions. It’s not perfect but generally saves me a lot of time.
I barely use a keyboard for most things anymore.
My project has built-in word overrides (`[word_overrides]` in config.toml) so you can automatically replace certain terms if that's important to you.
I loved Whisper, but it was insanely slow on CPU only, and even then that was with a smaller Whisper model that isn't as accurate as Parakeet.
My Windows environment locks down the built-in Windows option, so I don't have a way to test it. I've heard it's pretty good if you're allowed to use it, but your inputs don't stay local, which is why I needed to create this project.