Hey LocalLLaMA! I just released LocalAgent v0.1.1, a local-first AI agent runtime focused on safe tool calling + repeatable runs.
GitHub: https://github.com/CalvinSturm/LocalAgent
Model backends (local)
Supports local models via:
- LM Studio
- Ollama
- llama.cpp server
Coding tasks + browser tasks
Local coding tasks (optional)
LocalAgent can do local coding tasks (read/edit files, apply patches, run commands/tests) via tool calling.
Safety defaults:
- coding tools are available only with explicit flags
- shell/write are disabled by default
- approvals/policy controls still apply
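The default-deny gating described above is conceptually simple. Here is a minimal Python sketch of the idea — illustrative only, since LocalAgent is written in Rust and its actual policy engine is more involved; the names `ToolPolicy` and its methods are hypothetical:

```python
# Illustrative sketch of default-deny tool gating.
# Not LocalAgent's actual code; ToolPolicy is a hypothetical name.

class ToolPolicy:
    def __init__(self, enabled=(), require_approval=("shell", "write")):
        self.enabled = set(enabled)                  # tools turned on by explicit flags
        self.require_approval = set(require_approval)  # risky tools needing approval

    def check(self, tool, approved=False):
        if tool not in self.enabled:
            return "denied: tool not enabled"
        if tool in self.require_approval and not approved:
            return "pending: approval required"
        return "allowed"

policy = ToolPolicy(enabled={"read", "shell"})
print(policy.check("write"))        # denied: tool not enabled
print(policy.check("shell"))        # pending: approval required
print(policy.check("shell", True))  # allowed
```

The point is that a tool absent from the enabled set is never callable, and even an enabled risky tool still passes through the approval gate.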
Browser automation (Playwright MCP)
LocalAgent also supports browser automation via Playwright MCP, e.g.:

- navigate pages
- extract content
- run deterministic local browser eval tasks
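For reference, many MCP clients register the Playwright MCP server with a JSON shape like the one below. The post doesn't show LocalAgent's actual config file or schema, so treat this as an assumption based on the common MCP client convention:

```json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    }
  }
}
```

Once registered, `localagent mcp list` / `localagent mcp doctor playwright` (from the quickstart) are the commands to confirm the server is discovered.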
Core features
- tool calling with safe defaults
- approvals / policy controls
- replayable run artifacts
- eval harness for repeatable testing
Quickstart
cargo install --path . --force
localagent init
localagent mcp doctor playwright
localagent --provider lmstudio --model <model-name>
Everything is local-first, and browser eval fixtures are local + deterministic (no internet dependency).
“What else can it do?”
- Interactive TUI chat (chat --tui true) with approvals/actions inline
- One-shot runs (run / exec)
- Trust policy system (policy doctor, print-effective, policy test)
- Approval lifecycle (approvals list/prune, approve, deny, TTL + max-uses)
- Run replay + verification (replay, replay verify)
- Session persistence + task memory blocks (session …, session memory …)
- Hooks system (hooks list/doctor) for pre-model and tool-result transforms
- Eval framework (eval) with profiles, baselines, regression comparison, JUnit/MD reports
- Task graph execution (tasks run/status/reset) with checkpoints/resume
- Capability probing (--caps) + provider resilience controls (retries/timeouts/limits)
- Optional reproducibility snapshots (--repro on)
- Optional execution targets (--exec-target host|docker) for built-in tool effects
- MCP server management (mcp list/doctor) + namespaced MCP tools
- Full event streaming/logging via JSONL (--events) + TUI tail mode (tui tail)
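Since runs emit JSONL events (--events), post-processing logs is straightforward: one JSON object per line. A minimal Python sketch — the event field names below (`type`, `tool`) are made up for illustration, not LocalAgent's documented schema:

```python
import json

# Sample event stream; field names ("type", "tool") are hypothetical,
# not LocalAgent's actual event schema.
jsonl = """\
{"type": "model_response", "tokens": 42}
{"type": "tool_call", "tool": "browser.navigate"}
{"type": "tool_result", "tool": "browser.navigate", "ok": true}
"""

# Parse each non-empty line as one event, then filter by type.
events = [json.loads(line) for line in jsonl.splitlines() if line.strip()]
tool_calls = [e["tool"] for e in events if e["type"] == "tool_call"]
print(tool_calls)  # ['browser.navigate']
```

The same pattern works for tailing a live `--events` file line by line, which is presumably what the TUI tail mode does under the hood.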
Feedback I’d love
I’m especially looking for feedback on:
- browser workflow UX (what feels awkward / slow / confusing?)
- MCP ergonomics (tool discovery, config, failure modes, etc.)
Thanks, happy to answer questions, and I can add docs/examples based on what people want to try.