If you’ve ever spent an afternoon downloading a 70B model only to watch your laptop crawl to a halt, you already know why [llmfit](https://github.com/AlexsJones/llmfit) exists. It’s a Rust-built terminal tool that scans your hardware — RAM, CPU, GPU, VRAM, the whole deal — and tells you which LLMs will actually run well on your specific setup. No guessing, no trial and error.
I ran it on my machine and within seconds it laid out a scored table of models ranked by quality, speed, hardware fit, and context window size. Each score is on a 0-100 scale, so you can quickly eyeball what matters most to you. Need raw quality? Sort by that. Want the fastest inference for your GPU? One keystroke. It even picks the best quantization level automatically based on your available memory, which honestly saved me a rabbit hole of GGUF comparisons.
The tool ships with both an interactive TUI and a straightforward CLI mode. The TUI is surprisingly polished — you can search, filter by fit level, toggle providers, and even kick off model downloads directly from the interface. It supports [Ollama](https://ollama.com/), llama.cpp, and MLX as runtime providers, so it covers pretty much every local inference setup people are actually using right now. There’s also a “plan mode” that flips the question around: instead of asking what fits your hardware, you pick a model and it tells you what hardware you’d need. Neat trick for anyone speccing out a build.
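To make the two modes concrete, here’s roughly what a session might look like. The bare `llmfit` invocation matches how CLI tools of this kind typically launch, but the `plan` subcommand name and its flag are my guesses at how plan mode is exposed — check `llmfit --help` for the actual options:

```shell
# Scan local hardware and print the ranked model table
llmfit

# Plan mode, flipped around: given a target model, report the hardware
# you'd need. The subcommand and flag names here are hypothetical.
llmfit plan --model llama-3.1-70b
```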
The project showed up on Hacker News and caught fire almost immediately. On [GitHub](https://github.com/AlexsJones/llmfit), it racked up over 8,000 stars in roughly two weeks, which is wild for a CLI tool. You can install it through Homebrew (`brew install llmfit`), cargo, or a one-liner curl script. It’s MIT licensed, and the model database currently covers around 500 models across 130+ model providers, pulled from the HuggingFace API.
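For reference, the install commands look like this. The Homebrew line is the one given above; the cargo line follows the standard `cargo install <crate>` pattern and assumes the crate is published under the same name:

```shell
# Via Homebrew (as documented by the project)
brew install llmfit

# Via cargo — assumes the crate is published as "llmfit"
cargo install llmfit
```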
If you’re running local models — or thinking about it — this is one of those tools that should have existed years ago. It takes the annoying homework out of picking the right model for your rig.