Quants got their own foundation model. Kronos takes the playbook Google used for TimesFM and ports it to financial OHLCV data: feed it candlesticks, get back multi-period forecasts. AAAI 2026 accepted, 23k GitHub stars and still climbing.
How it works
Two stages. A tokenizer first quantizes continuous, multi-dimensional K-line data (open, high, low, close, volume) into discrete tokens. Then an autoregressive Transformer predicts the next tokens, the same way an LLM predicts the next word. Pre-trained on data from 45 global exchanges with up to 2,048 tokens of context. Sizes range from 4.1M to 499.2M parameters, so the small variant runs on a laptop and the large one is worth fine-tuning for a real strategy.
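To make the two stages concrete, here is a minimal toy sketch in Python. This is not Kronos's actual tokenizer (which is learned end to end); the fixed binning scheme, the function name, and the bin count are all illustrative.

```python
import numpy as np

# Toy illustration of the two-stage idea, NOT Kronos's real tokenizer:
# Kronos learns its quantizer; here we just bin normalized values.

def tokenize_kline(ohlcv: np.ndarray, n_bins: int = 16) -> np.ndarray:
    """Map each (T, 5) OHLCV row to a single discrete token ID.

    Each of the 5 channels is min-max normalized, binned into n_bins
    levels, and the bin indices are packed into one integer, giving a
    vocabulary of n_bins**5 possible tokens.
    """
    lo, hi = ohlcv.min(axis=0), ohlcv.max(axis=0)
    norm = (ohlcv - lo) / (hi - lo + 1e-9)              # -> [0, 1)
    bins = np.minimum((norm * n_bins).astype(int), n_bins - 1)
    powers = n_bins ** np.arange(5)                     # base-n_bins packing
    return bins @ powers

# Stage 2 is then ordinary next-token prediction over these IDs,
# exactly like a language model: given t_1..t_k, predict t_{k+1}.
prices = np.cumsum(np.random.randn(256, 5), axis=0) + 100.0
tokens = tokenize_kline(prices)
print(tokens[:8])  # discrete "words" for candles
```

The real model learns where the bin boundaries should go instead of fixing them up front, but the payoff is the same: once candles are tokens, the whole LLM toolkit (temperature, nucleus sampling, long context) carries over.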
What you can do with it
Weights are on the Hugging Face Hub. The repo ships a full fine-tuning pipeline plus a live BTC/USDT demo that streams 24-hour predictions in real time. Batch inference works out of the box: plug it into a backtester, fine-tune on your asset universe, or wire it into an agent loop.
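Batch inference looks roughly like the sketch below. The call shape follows the repo's README, but treat the class names, the NeoQuasar checkpoint IDs, the argument names, and the CSV file as illustrative assumptions and verify against the repo before copying.

```python
import pandas as pd

# Assumes you run from a clone of the Kronos repo, where the `model`
# package lives; names below mirror the README but may drift.
from model import Kronos, KronosTokenizer, KronosPredictor

tokenizer = KronosTokenizer.from_pretrained("NeoQuasar/Kronos-Tokenizer-base")
model = Kronos.from_pretrained("NeoQuasar/Kronos-small")
predictor = KronosPredictor(model, tokenizer, device="cuda:0", max_context=512)

# Hypothetical data file: historical candles with OHLCV columns plus
# a timestamp column for the context and forecast windows.
df = pd.read_csv("btc_usdt_1h.csv", parse_dates=["timestamps"])
x_df = df.loc[:399, ["open", "high", "low", "close", "volume"]]
x_ts = df.loc[:399, "timestamps"]
y_ts = df.loc[400:423, "timestamps"]   # the next 24 hourly candles

pred_df = predictor.predict(
    df=x_df,
    x_timestamp=x_ts,
    y_timestamp=y_ts,
    pred_len=24,       # forecast horizon in candles
    T=1.0,             # sampling temperature
    top_p=0.9,         # nucleus sampling, like an LLM
    sample_count=1,    # raise to draw multiple paths
)
print(pred_df.head())
```

Raising sample_count turns one forecast into a distribution of paths, which is the natural input for a backtester or a position-sizing rule.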
The bet: if Transformers can model language, why not candlesticks?
You Might Also Like
- 708 Github Stars in 48 Hours Claude Token Efficient Universal Claude md and the Fight Over Claudes Most Expensive Habit
- Ggml Llama cpp Joins Hugging Face and Honestly it was Only a Matter of Time
- Astrbot Crosses 22k Github Stars as Developers Flock to its 18 Platform ai Chatbot Framework
- Insforge Hits 1 on Product Hunt and 3600 Github Stars is This What Agent Native Backends Look Like
- Openviking Treats ai Agent Memory Like a File System and 9k Github Stars say its Working

Leave a comment