VS Code / Cursor Integration
Bring your powerful local OpenClaw orchestration engine directly into your coding environment. Full workspace context, autonomous file editing, and terminal execution, all without the latency of cloud APIs.
Why Integrate OpenClaw into your IDE?
While chatting with OpenClaw in a separate web UI is helpful for general tasks, context switching kills developer flow. By piping your local OpenClaw instance directly into VS Code or Cursor, the LLM gains instant, deep access to your workspace. It can read open tabs, scan your project directory, execute tests in the integrated terminal, and directly suggest inline code diffs that you can apply with a single click. This transforms the AI from a chatbot into an active pair programmer.
Method 1: The Official OpenClaw Bridge (Recommended)
The most seamless native experience is via the official OpenClaw Bridge extension on the VS Code Marketplace. It supports both VS Code and VSCodium.
1. Open VS Code and navigate to the Extensions view (Ctrl+Shift+X or Cmd+Shift+X).
2. Search for 'OpenClaw Bridge' in the Marketplace.
3. Click Install. The extension will automatically detect whether the OpenClaw daemon is running on localhost:11434.
4. Click the crab icon in the Activity Bar to open the chat panel and start coding.
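If the extension reports that it cannot find the daemon, you can probe the port manually. This is a quick sketch; the `/v1/models` path assumes the daemon's proxy follows the usual OpenAI-compatible layout rather than a documented OpenClaw endpoint:

```shell
# Probe the local OpenClaw port (11434) and report whether the daemon answers.
# "000" means nothing is listening at all.
STATUS=$(curl -s -o /dev/null -w '%{http_code}' http://127.0.0.1:11434/v1/models || true)
if [ "$STATUS" = "200" ]; then
  echo "OpenClaw daemon reachable on 11434"
else
  echo "Daemon not detected (HTTP status: $STATUS)"
fi
```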
Method 2: Using Cline / RooCode (OpenAI Proxy)
If you prefer popular agent extensions like Cline, RooCode, or Continue.dev, you can easily point them to your OpenClaw instance by utilizing its built-in OpenAI-compatible API proxy.
- API Provider: select 'OpenAI Compatible' or 'Custom'
- Base URL: http://127.0.0.1:11434/v1 (the local OpenClaw router port)
- Model Name: openclaw-coordinator (or map directly to your local model, e.g., qwen2.5-coder:32b)
- API Key: openclaw-local-dev (authentication is disabled for localhost by default)
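Any OpenAI-style client will work with those four settings. The sketch below uses only the Python standard library to build the request by hand; the payload and response shapes follow the generic OpenAI chat-completions convention, not a documented OpenClaw schema:

```python
import json
import urllib.request

OPENCLAW_BASE_URL = "http://127.0.0.1:11434/v1"  # local OpenClaw router port
OPENCLAW_API_KEY = "openclaw-local-dev"          # auth is disabled for localhost

def build_chat_request(prompt: str, model: str = "openclaw-coordinator") -> urllib.request.Request:
    """Build an OpenAI-compatible chat completion request for the local proxy."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{OPENCLAW_BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {OPENCLAW_API_KEY}",
        },
    )

def chat(prompt: str) -> str:
    """Send a prompt to the local proxy and return the assistant's reply."""
    with urllib.request.urlopen(build_chat_request(prompt)) as resp:
        body = json.loads(resp.read().decode("utf-8"))
    return body["choices"][0]["message"]["content"]
```

Extensions like Cline and Continue.dev do exactly this under the hood, which is why pointing their Base URL at the proxy is all the configuration they need.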
Deep Context via File System MCP
To give the coding agent true autonomy, it needs to read and write files reliably. Ensure the FileSystem Model Context Protocol (MCP) server is enabled and mapped to your workspace in your ~/.openclaw/config.yaml:
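The exact schema may vary between OpenClaw releases; the following is a minimal sketch in which the `mcp_servers`/`filesystem` key names and the path are illustrative assumptions:

```yaml
# ~/.openclaw/config.yaml (sketch; key names are assumptions, check your release's docs)
mcp_servers:
  filesystem:
    enabled: true
    root: /home/you/projects/my-app   # map to your workspace root
    allow_write: true                 # required for autonomous file edits
```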
Best Practices & Optimization
- Use a .openclawignore file: essential for performance. It prevents the agent from getting lost reading massive node_modules, .git, or build/ directories, which would exhaust the context window immediately.
- Pin contexts: when initiating a major cross-file refactor, explicitly mention the 3-4 relevant files. This prevents the agent from falling back to a broad, slow RAG search.
- Enable terminal access: grant the extension permission to run commands, so the agent can automatically run `npm run build` or `pytest` to self-verify its code before presenting it to you.
- Customize the system prompt: use the extension settings to enforce your project's specific coding guidelines (e.g., 'Always use functional components and strict TypeScript').
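A starting point for the .openclawignore mentioned above, assuming it follows the familiar .gitignore pattern syntax:

```
# Heavy directories that blow the context window
node_modules/
.git/
dist/
build/
coverage/
# Lockfiles and binary assets add tokens but little signal
*.lock
*.min.js
*.png
```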