Show HN: Chirp – Local Windows dictation with ParakeetV3, no executable required

github.com

30 points by whamp a day ago

I’ve been working in fairly locked‑down Windows environments where I’m allowed to run Python, but not install or launch new `.exe` files. In addition, the built‑in Windows dictation options are blocked (and the only good one isn’t local anyway). At the same time, I really wanted accurate, fast dictation without sending audio to a cloud service, and without needing a GPU. Most speech‑to‑text setups I tried either required special launchers, GPU access, or were awkward to run day‑to‑day.

To scratch that itch, I built Chirp, a Windows dictation app that runs fully locally, uses NVIDIA’s ParakeetV3 model, and is managed end‑to‑end with `uv`. If you can run Python on your machine, you should be able to run Chirp—no additional executables required.

Under the hood, Chirp uses the Parakeet TDT 0.6B v3 ONNX bundle. ParakeetV3 has accuracy in the same ballpark as Whisper‑large‑v3 (multilingual WER ~4.9 vs ~5.0 in the open ASR leaderboard), but it’s much faster and happy on CPU.

The flow is:

- One‑time setup that downloads and prepares the ONNX model: `uv run python -m chirp.setup`
- A long‑running CLI process: `uv run python -m chirp.main`
- A global hotkey that starts/stops recording and injects text into the active window.

A few details that might be interesting technically:

- Local‑only STT: Everything runs on your machine using ONNX Runtime; by default it uses CPU providers, with optional GPU providers if your environment allows.

- Config‑driven behavior: A `config.toml` file controls the global hotkey, model choice, quantization (`int8` option), language, ONNX providers, and threading. There’s also a simple `[word_overrides]` map so you can fix tokens that the model consistently mishears.
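To give a flavor of the config, a sketch along these lines (the key names here are illustrative guesses, not necessarily the real schema; see the repo for the authoritative example):

```toml
# Illustrative config.toml -- key names are a sketch, check the repo
# for the actual schema.
hotkey = "ctrl+alt+space"
model = "parakeet-tdt-0.6b-v3"
quantization = "int8"                  # smaller and faster on CPU
language = "en"
providers = ["CPUExecutionProvider"]

[word_overrides]
# Fix tokens the model consistently mishears.
"Jason" = "JSON"
```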

- Post‑processing pipeline: After recognition, there’s an optional “style guide” step where you can specify prompts like “sentence case” or “prepend: >>” for the final text.
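A toy sketch of what a post-processing step like that can look like (illustrative Python only, not Chirp's actual pipeline; `apply_style` is a hypothetical helper):

```python
def apply_style(text, overrides=None, sentence_case=False, prepend=""):
    """Toy post-processing pass: apply word overrides, optionally
    sentence-case the text, optionally prepend a prefix.
    Illustrative only -- not Chirp's actual implementation."""
    # Replace tokens the model consistently mishears.
    for wrong, right in (overrides or {}).items():
        text = text.replace(wrong, right)
    # Capitalize the first character if sentence casing is requested.
    if sentence_case and text:
        text = text[0].upper() + text[1:]
    # Prepend a fixed prefix, e.g. ">> " for quoting.
    if prepend:
        text = prepend + text
    return text

print(apply_style("the jason file is ready",
                  overrides={"jason": "JSON"},
                  sentence_case=True,
                  prepend=">> "))
# -> ">> The JSON file is ready"
```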

- No clipboard gymnastics required on Windows: The app types directly into the focused window; there are options for clipboard‑based pasting and cleanup behavior for platforms where that makes more sense.

- Audio feedback: Start/stop sounds (configurable) let you know when the mic is actually recording.
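On the provider point above, the selection logic is conceptually simple. A minimal sketch, assuming a hypothetical `pick_providers` helper and that the available list comes from `onnxruntime.get_available_providers()`:

```python
def pick_providers(preferred, available):
    """Keep only the preferred execution providers that actually exist
    in this onnxruntime build, falling back to CPU if none match.
    Hypothetical helper for illustration, not Chirp's actual code."""
    chosen = [p for p in preferred if p in available]
    return chosen or ["CPUExecutionProvider"]

# On a CPU-only box, a GPU preference quietly degrades to CPU:
print(pick_providers(
    ["CUDAExecutionProvider", "CPUExecutionProvider"],
    ["CPUExecutionProvider"],
))
```

In practice the result would be passed as the `providers=` argument to `onnxruntime.InferenceSession`.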

So far I’ve mainly tested this on my own Windows machines with English dictation and CPU‑only setups. There are probably plenty of rough edges (different keyboard layouts, language settings, corporate IT policies, etc.), and I’d love feedback from people who:

- Work in restricted corporate environments and need local dictation.
- Have experience with Parakeet/Whisper or ONNX Runtime and see obvious ways to improve performance or robustness.
- Want specific features (e.g., better multi‑language support, more advanced post‑processing, or integrations with their editor/IDE).

Repo is here: `https://github.com/Whamp/chirp`

If you try it, I’d be very interested in:

- CPU usage and latency on your hardware.
- How well it behaves with your keyboard layout and applications.
- Any weird failure cases or usability annoyances you run into.

Happy to answer questions and dig into technical details in the comments.

lxe a day ago

I've done something similar for Linux and Mac. I originally used Whisper and then switched to Parakeet. I much prefer Whisper after playing with both. Maybe I'm not configuring Parakeet correctly, but the transcription that comes out of Whisper is usually pretty much spot on. It automatically removes all the "ums" and all the "ahs" and it's just way more natural, in my opinion. I'm using whisper.cpp with CUDA acceleration. This whole comment is just written with me dictating to Whisper, and it's probably going to automatically add quotes correctly, there's going to be no ums, there's going to be no ahs, and everything's just going to be great.

  • clueless a day ago

    Mind sharing your local setup for Mac?

    • hasperdi 3 hours ago

      If you don't mind a closed-source paid app, I can recommend MacWhisper. You can select different Whisper and Parakeet models for dictation and transcription. My favorite feature is that it allows sending the transcription output to an LLM for clean-up, or basically anything you want, e.g. professional polish, translation, writing poems, etc.

      I have enough RAM on my Mac that I can run smaller LLMs locally, so for me the whole thing stays local.

hebelehubele 14 hours ago

Is there a macOS equivalent of this?

My use case is to generate subtitles for YouTube videos (downloaded using yt-dlp). Word-level accuracy is also nice to have, because I translate them using LLMs and edit the subtitles to better fit the translation.

zahlman 20 hours ago

> I’m allowed to run Python, but not install or launch new `.exe` files.

> NVIDIA’s ParakeetV3 model

You can't install .exe's, but you can connect to the Internet, download and install approximately two hundred wheels (judging by uv.lock), many of which contain opaque binary blobs, including an AI model?

Why does your organization think this makes any sense?

  • whamp 7 hours ago

    Never said it did! Working with what I got.

whamp a day ago

btw this is my first open-source project

hastamelo a day ago

How does the quality compare with the Windows built-in one (Win+H), the one with online models?

I'm using that to dictate prompts. It struggles with technical terms (JSON becomes "Jason"), but otherwise it's fine.

  • lxe a day ago

    In my opinion, attempting to perform live dictation is a solution that is looking for a problem. For example, the way I'm writing this comment is: I hold down a keyboard shortcut on my keyboard, and then I just say stuff. And I can say a really long thing. I don't need to see what it's typing out. I don't need to stream the speech-to-text transcription. When the full thing is ingested, I can then release my keys, and within a second it's going to just paste the entire thing into this comment box. And also, technical terms are going to be just fine with Whisper. For example, Here's a JSON file.

    (this was transcribed using whisper.cpp with no edits. took less than a second on a 5090)

    • whamp 7 hours ago

      Yeah, Whisper has more features and is awesome if you have the hardware to run the big models that are accurate enough. The constraint here is the best CPU-only option. By no means am I wedded to or affiliated with Parakeet; it's just the best/fastest within the CPU-only space.

    • atonse a day ago

      I’ve been using Parakeet with MacWhisper for a lot of my AI coding interactions. It’s not perfect but generally saves me a lot of time.

      • lxe a day ago

        I barely use a keyboard for most things anymore.

  • whamp a day ago

    My project has a built-in word_replacement option in `config.toml`, so you can automatically replace certain terms if that's important to you.

    I loved Whisper, but it was insanely slow on CPU, and even then that was with a smaller Whisper model that isn't as accurate as Parakeet.

    My Windows environment locks down the built-in Windows option, so I don't have a way to test it. I've heard it's pretty good if you're allowed to use it, but your inputs don't stay local, which is why I needed to create this project.