DFRobot’s HUSKYLENS 2 is an edge AI vision sensor that runs a Model Context Protocol server on-device. Point Claude or ChatGPT at it and the LLM doesn’t get a JPEG — it gets a callable tool returning “I see Alice waving at the table.” Bounding boxes turned into semantic facts.
What’s inside
Kendryte K230 dual-core at 1.6GHz, 6 TOPS NPU, 1GB LPDDR4, 8GB eMMC. 2MP GC2093 sensor at 60fps, 2.4-inch IPS touchscreen, mic and speaker. Twenty-plus models preloaded (face ID, tracking, line following), plus a YOLO pipeline to label, train, and flash a custom detector. UART and I2C wire it into Arduino, ESP32, Raspberry Pi, and micro:bit. Swappable macro and night-vision lenses. The sensor itself is $74.90; an optional Wi-Fi module adds $7.90.
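For the microcontroller hookups, the first-generation HUSKYLENS spoke a simple framed wire protocol (0x55 0xAA header, command byte, checksum) at I2C address 0x32, and the same framing was used over UART. As a minimal sketch, assuming HUSKYLENS 2 keeps that documented v1 protocol (the address and command bytes below come from the original protocol spec, not from HUSKYLENS 2 documentation), a Raspberry Pi could poll for detections like this:

```python
# Sketch: poll a HUSKYLENS for detection blocks over I2C (Raspberry Pi, bus 1).
# Framing per the HUSKYLENS 1 protocol doc; assumed to carry over to v2.
import struct

from smbus2 import SMBus, i2c_msg

HUSKY_ADDR = 0x32        # I2C address from the HUSKYLENS 1 spec
CMD_REQUEST = 0x20       # "send all detected blocks and arrows"
CMD_RETURN_BLOCK = 0x2A  # block record: x, y, w, h, ID as little-endian int16

def frame(command: int, data: bytes = b"") -> bytes:
    """Build a frame: header, address, length, command, data, checksum."""
    body = bytes([0x55, 0xAA, 0x11, len(data), command]) + data
    return body + bytes([sum(body) & 0xFF])  # checksum = low byte of the sum

with SMBus(1) as bus:
    bus.i2c_rdwr(i2c_msg.write(HUSKY_ADDR, frame(CMD_REQUEST)))
    reply = i2c_msg.read(HUSKY_ADDR, 64)  # blunt fixed-size read; a real driver
    bus.i2c_rdwr(reply)                   # would read the length field first
    raw = bytes(reply)

    i = 0
    while i + 5 <= len(raw):
        if raw[i] == 0x55 and raw[i + 1] == 0xAA:  # frame header found
            length, cmd = raw[i + 3], raw[i + 4]
            payload = raw[i + 5 : i + 5 + length]
            if cmd == CMD_RETURN_BLOCK and length >= 10:
                x, y, w, h, obj_id = struct.unpack("<5h", payload[:10])
                print(f"ID {obj_id}: center=({x},{y}) size={w}x{h}")
            i += 5 + length + 1  # skip header, payload, checksum
        else:
            i += 1
```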
What MCP unlocks
Before this, wiring vision to an LLM meant glue code: grab frames, run inference, format prompts. HUSKYLENS 2 exposes its built-in functions as MCP tools, so an agent just asks “is anyone at the door?” and gets words back. A working hackster.io demo is already up. Doorbells that talk back, classroom robots taking voice commands, factory QC where the line manager queries defects in plain English — weekend projects instead of six-month integrations.
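On the client side, nothing HUSKYLENS-specific is needed: any stock MCP client can attach. Here is a minimal sketch using the official MCP Python SDK (`pip install mcp`), assuming the camera serves MCP over SSE on the local network; the `huskylens.local` URL and the `face_recognition` tool name are placeholders for whatever the device actually advertises:

```python
import asyncio

from mcp import ClientSession
from mcp.client.sse import sse_client


async def main() -> None:
    # Hypothetical endpoint: wherever the HUSKYLENS 2 serves MCP on the LAN.
    async with sse_client("http://huskylens.local:8000/sse") as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Discover whatever vision functions the camera exposes as tools.
            tools = await session.list_tools()
            print("tools:", [tool.name for tool in tools.tools])

            # Tool name and arguments are placeholders; use the listed names.
            result = await session.call_tool("face_recognition", arguments={})
            for item in result.content:
                if item.type == "text":
                    print(item.text)  # e.g. "I see Alice at the table."


asyncio.run(main())
```

Whatever tools the device really exposes show up in the `list_tools` response, which is also exactly what Claude or ChatGPT sees when you point them at the server.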