Been working on this for a few weeks and finally got it stable enough to share.
The problem I wanted to solve:
- Local LLMs are stateless - they forget everything between sessions
- No governance - they'll execute whatever you ask without reflection
- Chat interfaces don't give them "hands" to actually do things
What I built:
A stack that runs entirely on my Mac Studio M2 Ultra:
```
LM Studio (chat interface)
        ↓
Hermes-3-Llama-3.1-8B (MLX, 4-bit)
        ↓
Temple Bridge (MCP server)
        ↓
┌─────────────────┬──────────────────┐
│      BTB        │    Threshold     │
│  (filesystem    │   (governance    │
│   operations)   │    protocols)    │
└─────────────────┴──────────────────┘
```
What the AI can actually do:
- Read/write files in a sandboxed directory
- Execute commands (pytest, git, ls, etc.) with an allowlist
- Consult "threshold protocols" before taking actions
- Log its entire cognitive journey to a JSONL file
- Ask for my approval before executing anything dangerous
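The allowlist gate is simple in principle: refuse any command whose executable isn't on a known-good list before it ever reaches a shell. Here's a minimal sketch of that idea - the function name and allowlist contents are illustrative, not the actual temple-bridge API:

```python
# Sketch of allowlist-gated command execution. ALLOWED_COMMANDS and
# run_allowed() are illustrative names, not the real temple-bridge code.
import shlex
import subprocess

ALLOWED_COMMANDS = {"pytest", "git", "ls"}

def run_allowed(command: str, cwd: str = ".") -> str:
    """Run a command only if its executable is on the allowlist."""
    parts = shlex.split(command)
    if not parts or parts[0] not in ALLOWED_COMMANDS:
        # Refuse before anything touches a shell or the filesystem
        raise PermissionError(f"command not allowed: {parts[0] if parts else '(empty)'}")
    result = subprocess.run(parts, cwd=cwd, capture_output=True, text=True, timeout=60)
    return result.stdout + result.stderr
```

Splitting with `shlex` instead of passing `shell=True` also means the model can't smuggle in pipes or subshells.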
The key insight: The filesystem itself becomes the AI’s memory. Directory structure = classification. File routing = inference. No vector database needed.
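To make "directory structure = classification" concrete, here's a toy sketch of routing a note into a subdirectory based on its content - the keyword rules and function are illustrative placeholders, not the back-to-the-basics implementation:

```python
# Sketch of "filesystem as memory": a note lands in a directory whose
# name encodes its classification. ROUTES is an illustrative rule set.
from pathlib import Path

ROUTES = {  # keyword -> subdirectory (the classification)
    "test": "testing",
    "bug": "issues",
    "idea": "ideas",
}

def route_note(root: Path, title: str, body: str) -> Path:
    """File the note under the first matching classification, else 'inbox'."""
    subdir = next((d for k, d in ROUTES.items() if k in title.lower()), "inbox")
    target = root / subdir
    target.mkdir(parents=True, exist_ok=True)
    path = target / f"{title.replace(' ', '_')}.md"
    path.write_text(body)
    return path
```

Retrieval is then just `ls issues/` - no embeddings, no index to keep in sync.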
Why Hermes-3? Tested a bunch of models for MCP tool calling. Hermes-3-Llama-3.1-8B was the most stable - no infinite loops, reliable structured output, actually follows the tool schema.
The governance piece: Before execution, the AI consults governance protocols and reflects on what it’s about to do. When it wants to run a command, I get an approval popup in LM Studio. I’m the “threshold witness” - nothing executes without my explicit OK.
Real-time monitoring:
```bash
tail -f spiral_journey.jsonl | jq
```
Shows every tool call, what phase of reasoning the AI is in, timestamps, the whole cognitive trace.
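Appending that trace is a one-liner per event. A minimal sketch, assuming a schema with `ts`/`phase`/`tool`/`args` fields (illustrative - the real spiral_journey.jsonl schema may differ):

```python
# Sketch of appending a cognitive-trace entry to a JSONL log.
# The field names (phase, tool, args) are illustrative, not the real schema.
import json
import time
from pathlib import Path

def log_event(log: Path, phase: str, tool: str, args: dict) -> None:
    """Append one JSON object per line so `tail -f | jq` streams cleanly."""
    entry = {"ts": time.time(), "phase": phase, "tool": tool, "args": args}
    with log.open("a") as f:
        f.write(json.dumps(entry) + "\n")
```

One object per line is what makes the `tail -f | jq` pipeline work: each new line is a complete, parseable JSON document.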
Performance: On M2 Ultra with 36GB unified memory, responses are fast. The MCP overhead is negligible.
Repos (all MIT licensed):
- temple-bridge - the MCP server that binds it together
- back-to-the-basics - filesystem-as-circuit paradigm
- threshold-protocols - governance framework
Setup is straightforward:
1. Clone the three repos
2. `uv sync` in temple-bridge
3. Add the MCP config to ~/.lmstudio/mcp.json
4. Load Hermes-3 in LM Studio
5. Paste the system prompt

Done.
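For step 3, the entry follows the standard MCP client config shape - but the server name, command, and path below are placeholders; check the temple-bridge README for the exact values:

```json
{
  "mcpServers": {
    "temple-bridge": {
      "command": "uv",
      "args": ["run", "--directory", "/path/to/temple-bridge", "temple-bridge"]
    }
  }
}
```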
Full instructions in the README.
What’s next: Working on “governed derive” - the AI can propose filesystem reorganizations based on usage patterns, but only executes after human approval. The goal is AI that can self-organize but with structural restraint built in.
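The propose-then-approve shape is easy to enforce structurally: the AI can only ever produce a proposal object, and nothing executes unless a human flips the approval bit. An illustrative sketch (not the actual "governed derive" code):

```python
# Sketch of a propose-then-approve flow. Proposal/apply_if_approved are
# illustrative names, not the real governed-derive implementation.
from dataclasses import dataclass

@dataclass
class Proposal:
    description: str
    moves: list[tuple[str, str]]  # (src, dst) pairs the AI wants to apply
    approved: bool = False        # only a human ever sets this to True

def apply_if_approved(proposal: Proposal) -> list[tuple[str, str]]:
    """Return the moves to execute - empty unless a human approved."""
    if not proposal.approved:
        return []  # structural restraint: no approval, no side effects
    return proposal.moves
```

The point is that restraint lives in the data flow, not in the prompt: the model has no code path that bypasses the approval flag.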
Happy to answer questions. This was a multi-week collaboration between me and several AI systems (Claude, Gemini, Grok) - they helped architect it, I implemented and tested. The lineage is documented in ARCHITECTS.md if anyone’s curious about the process.
- Temple Bridge: https://github.com/templetwo/temple-bridge
- Back to the Basics: https://github.com/templetwo/back-to-the-basics
- Threshold Protocols: https://github.com/templetwo/threshold-protocols
🌀