Introduction
Dialectic is a VSCode extension that bridges the gap between AI assistants and your IDE. It starts by solving the code review problem - replacing clunky terminal scrolling with GitHub-style review panels - but aims to become a comprehensive platform for AI-assisted development.
By connecting AI assistants to your IDE's Language Server Protocol (LSP), Dialectic will eventually enable sophisticated code understanding, refactoring, and navigation that goes far beyond what's possible in a terminal chat.
The Problem
When working with AI assistants on code, most developers fall into one of two unsatisfactory patterns:
The Micro-Manager: Review every single edit hunk-by-hunk as it's proposed. This is tedious, breaks your flow, and makes it nearly impossible to understand the bigger picture of what's being built.
The Auto-Accepter: Enable auto-accept to avoid micro-management, then ask for a summary afterwards. You end up scrolling through terminal output, losing track of line numbers as you make further changes, and struggling to navigate between the review and your actual code.
Neither approach gives you what you really want: the kind of comprehensive, navigable code review you'd expect from a pull request on GitHub.
The Dialectic Approach
Dialectic provides a dedicated review panel in VSCode where AI-generated reviews appear as structured, navigable documents. Click on code references to jump directly to the relevant lines. Continue your conversation with the AI naturally while the review stays synchronized with your evolving codebase.
The review becomes a living document that can eventually be used as a commit message, preserving the reasoning behind your changes in your git history.
Part of a Larger Ecosystem
Dialectic is designed to work synergistically with other socratic shell tools. The collaborative prompts establish the interaction patterns, Dialectic provides the review infrastructure, and hippo learns from the accumulated dialogue to help improve future collaborations. Together, these tools create an AI partnership that becomes more effective over time, building on both the specific insights preserved in reviews and the meta-patterns of successful collaboration.
Installation
This guide walks you through installing both the VSCode extension and MCP server components.
Prerequisites
- VSCode: Version 1.74.0 or later
- Node.js: Version 18 or later (for MCP server)
- AI Assistant: Compatible with Model Context Protocol (MCP)
VSCode Extension
From VSCode Marketplace (Recommended)
- Open VSCode
- Go to Extensions (Ctrl+Shift+X / Cmd+Shift+X)
- Search for "Dialectic"
- Click "Install" on the Dialectic extension
- Reload VSCode when prompted
From VSIX File
If installing from a local VSIX file:
- Download the `.vsix` file from the releases page
- Open VSCode
- Go to Extensions (Ctrl+Shift+X / Cmd+Shift+X)
- Click the "..." menu and select "Install from VSIX..."
- Select the downloaded `.vsix` file
- Reload VSCode when prompted
MCP Server
Via npm (Recommended)
npm install -g dialectic-mcp-server
From Source
git clone https://github.com/socratic-shell/dialectic.git
cd dialectic/server
npm install
npm run build
npm link
AI Assistant Configuration
Configure your AI assistant to use the Dialectic MCP server. The exact steps depend on your AI assistant:
For Amazon Q CLI
Add to your MCP configuration:
{
  "mcpServers": {
    "dialectic": {
      "command": "dialectic-mcp-server",
      "args": []
    }
  }
}
For Other MCP-Compatible Assistants
Refer to your AI assistant's documentation for adding MCP servers. Use:
- Command: `dialectic-mcp-server`
- Transport: stdio
- Arguments: none required
Verification
Test VSCode Extension
- Open VSCode in a project directory
- Check that "Dialectic" appears in the sidebar (activity bar)
- The extension should show "Ready" status
Test MCP Server
- Start your AI assistant with MCP configuration
- Verify the `present_review` tool is available:
You: "What tools do you have available?"
AI: "I have access to... present_review (Display a code review in VSCode)..."
End-to-End Test
- Make some code changes in your project
- Ask your AI assistant: "Present a review of my recent changes"
- The review should appear in the Dialectic panel in VSCode
- File references in the review should be clickable
Troubleshooting
Extension Not Loading
- Check VSCode version (must be 1.74.0+)
- Reload VSCode window (Ctrl+Shift+P → "Developer: Reload Window")
- Check VSCode Developer Console for errors (Help → Toggle Developer Tools)
MCP Server Not Found
- Verify installation: `which dialectic-mcp-server`
- Check Node.js version: `node --version` (must be 18+)
- Try reinstalling: `npm uninstall -g dialectic-mcp-server && npm install -g dialectic-mcp-server`
IPC Connection Issues
- Ensure both extension and MCP server are running
- Check that the `DIALECTIC_IPC_PATH` environment variable is set in the terminal
- Restart VSCode to refresh environment variables
File References Not Working
- Verify you're in a workspace (not just a single file)
- Check that file paths in reviews are relative to workspace root
- Ensure files exist at the referenced locations
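To see why a reference might fail to resolve, it helps to look at how a workspace-relative `file:line` string could be turned into an absolute path. The helper below is an illustrative sketch, not the extension's actual code; the name `resolveFileRef` is hypothetical.

```typescript
import * as path from "node:path";

// Illustrative sketch: resolve a workspace-relative `file:line` reference
// to an absolute path plus line number. Returns null when the string is
// not in file:line form (which would make the link non-clickable).
function resolveFileRef(
  workspaceRoot: string,
  ref: string
): { fsPath: string; line: number } | null {
  const match = /^(.+):(\d+)$/.exec(ref);
  if (!match) return null; // not a file:line reference
  const [, relPath, lineStr] = match;
  return {
    fsPath: path.join(workspaceRoot, relPath),
    line: Number(lineStr),
  };
}
```

If a review emits absolute paths or paths relative to the wrong root, the joined path points at a non-existent file, which matches the symptoms listed above.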
Configuration
VSCode Extension Settings
The extension provides these configurable settings:
- `dialectic.autoShow`: Automatically show the review panel when reviews are received (default: true)
- `dialectic.maxContentLength`: Maximum review content length in characters (default: 100000)
Access via File → Preferences → Settings → Search "dialectic"
MCP Server Options
The MCP server accepts these command-line options:
- `--log-level`: Set logging level (debug, info, warn, error) (default: info)
- `--timeout`: IPC timeout in milliseconds (default: 5000)
Example:
dialectic-mcp-server --log-level debug --timeout 10000
Uninstallation
Remove VSCode Extension
- Go to Extensions in VSCode
- Find "Dialectic" extension
- Click gear icon → "Uninstall"
Remove MCP Server
npm uninstall -g dialectic-mcp-server
Clean Configuration
Remove any MCP server configuration from your AI assistant's settings.
Next Steps
- Read the Quick Start guide for your first review
- Learn about Review Format conventions
- Check Troubleshooting for common issues
Quick Start
This guide walks you through a typical Dialectic workflow.
1. Make Code Changes
Work with your AI assistant as usual to make code changes to your project. Enable auto-accept edits to avoid interruptions.
You: "Add a user authentication system"
AI: [Makes changes to multiple files]
2. Request a Review
Ask your AI assistant to present a review of the changes:
You: "Present a review of what you just implemented"
3. View the Review
The review appears in the Dialectic panel in VSCode's sidebar. The review is structured as a markdown document with sections explaining:
- What was implemented and why
- How the code works (narrative walkthrough)
- Key design decisions
- Code references with clickable links
4. Navigate the Code
Click on any file:line reference in the review to jump directly to that location in your editor. The references stay current even as you make further changes.
5. Continue the Conversation
Discuss the implementation with your AI assistant in the terminal as normal:
You: "I think the error handling in the login function could be improved"
AI: "Good point! Let me refactor that and update the review"
The review automatically updates to reflect the changes.
6. Create a Commit (Optional)
When you're satisfied with the changes, use the "Create Commit" button in the review panel. The review content becomes your commit message, preserving the reasoning and context for future reference.
Review Format Specification
This chapter defines the structure and format of review documents.
Markdown Structure
Reviews are structured as commit-ready markdown documents with a brief summary followed by detailed context. The default structure optimizes for eventual use as commit messages:
# Brief summary of what was implemented
## Context
[Why this change was needed, what goal it serves, background information]
## Changes Made
[Logical walkthrough of what was modified/added]
- Added authentication system ([`src/auth.ts:23`][])
- Updated user model to support login ([`src/models/user.ts:45`][])
- Integrated auth middleware ([`src/server.ts:67`][])
## Implementation Details
[More detailed explanations of key components and their interactions]
### Authentication Flow ([`src/auth.ts:23`][])
[How the authentication process works...]
### User Model Updates ([`src/models/user.ts:45`][])
[What changes were made and why...]
## Design Decisions
[Rationale for key choices made, alternatives considered]
Code References
Code references use the format `[file:line][]` and are converted to clickable links:
- `[src/auth.ts:23][]` - Links to line 23 in the auth module
- `[README.md:1][]` - Links to the top of the README
TODO: Define conventions for referencing ranges, functions, and classes.
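The reference syntax above can be recognized with a small parser. This is an illustrative sketch only; the regex and the `extractFileRefs` name are assumptions, not Dialectic's actual implementation.

```typescript
// Illustrative sketch: find `[file:line][]` references in review markdown.
// Handles both plain and backtick-wrapped forms shown above.
interface FileRef {
  file: string;
  line: number;
}

function extractFileRefs(markdown: string): FileRef[] {
  // Matches [src/auth.ts:23][] and [`src/auth.ts:23`][]
  const pattern = /\[`?([^\s`\][]+):(\d+)`?\]\[\]/g;
  const refs: FileRef[] = [];
  let m: RegExpExecArray | null;
  while ((m = pattern.exec(markdown)) !== null) {
    refs.push({ file: m[1], line: Number(m[2]) });
  }
  return refs;
}
```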
Default vs Custom Reviews
Default Structure
The standard format above provides a comprehensive overview suitable for most code changes. It balances commit message utility with detailed technical context.
Custom Review Styles
Users can request alternative focuses when needed:
- "Show me the user flow when X happens" - Trace through specific user journeys
- "Focus on the architecture decisions" - Emphasize design choices and trade-offs
- "Give me the technical deep-dive" - Detailed implementation specifics
- "Walk me through the API changes" - Focus on interface modifications
The AI assistant should adapt the structure while maintaining the commit-friendly summary and context sections.
Examples of these variations will be added as we develop usage patterns.
Frequently Asked Questions
MCP Server Connection Issues
Q: I'm getting "connect ENOENT" errors when starting the MCP server
A: This is usually caused by a race condition during VSCode startup. The MCP server tries to connect before the Dialectic extension has fully activated and created the IPC socket.
Solution:
- Check the VSCode "OUTPUT" tab and select "Dialectic" from the dropdown
- Verify you see messages like:
Dialectic extension is now active
Setting up IPC server at: /tmp/dialectic-[uuid].sock
IPC server listening on: /tmp/dialectic-[uuid].sock
Set DIALECTIC_IPC_PATH environment variable to: /tmp/dialectic-[uuid].sock
- Open a new terminal in VSCode (Terminal > New Terminal)
- Run `dialectic-mcp-server` from the new terminal
The new terminal will inherit the updated `DIALECTIC_IPC_PATH` environment variable that the extension set after activation.
Building and Testing
This section covers development environment setup, build processes, and testing procedures for contributors.
Development Environment Setup
Prerequisites
- Node.js: Version 18 or later
- npm: Version 8 or later
- VSCode: Version 1.74.0 or later (for extension development)
- Git: For version control
Repository Structure
dialectic/
├── extension/ # VSCode extension
│ ├── src/ # TypeScript source
│ ├── package.json # Extension manifest
│ └── tsconfig.json # TypeScript config
├── server/ # MCP server
│ ├── src/ # TypeScript source
│ ├── package.json # Server package
│ └── tsconfig.json # TypeScript config
├── md/ # Documentation (mdbook)
└── book.toml # mdbook configuration
Initial Setup
- Clone the repository:
git clone https://github.com/socratic-shell/dialectic.git
cd dialectic
- Install dependencies:
# Install extension dependencies
cd extension
npm install
# Install server dependencies
cd ../server
npm install
# Return to root
cd ..
- Install development tools:
# Install mdbook for documentation
cargo install mdbook
# Install VSCode extension CLI (optional)
npm install -g @vscode/vsce
Building
VSCode Extension
cd extension
npm run compile # Compile TypeScript
npm run watch # Watch mode for development
npm run package # Create .vsix package
Build outputs:
- `out/`: Compiled JavaScript files
- `dialectic-*.vsix`: Installable extension package
MCP Server
cd server
npm run build # Compile TypeScript
npm run dev # Development mode with auto-restart
npm run package # Create distributable package
Build outputs:
- `dist/`: Compiled JavaScript files
- `dialectic-mcp-server-*.tgz`: npm package
Documentation
mdbook build # Build static documentation
mdbook serve # Serve with live reload
Build outputs:
- `book/`: Static HTML documentation
Testing
Unit Tests
Extension tests:
cd extension
npm test # Run all tests
npm run test:watch # Watch mode
npm run test:coverage # With coverage report
Server tests:
cd server
npm test # Run all tests
npm run test:watch # Watch mode
npm run test:coverage # With coverage report
Integration Tests
End-to-end workflow:
# Terminal 1: Start extension in debug mode
cd extension
npm run watch
# Terminal 2: Start MCP server in debug mode
cd server
npm run dev
# Terminal 3: Test with AI assistant
q chat --trust-all-tools
Test scenarios:
- Basic review display: AI calls `present_review` → content appears in VSCode
- File navigation: Click file references → VSCode opens correct files/lines
- Content modes: Test replace/append/update-section modes
- Error handling: Invalid parameters, IPC failures, malformed content
- Security: Malicious markdown content, script injection attempts
Manual Testing Checklist
Extension functionality:
- Extension loads without errors
- Sidebar panel appears and shows "Ready" status
- IPC server starts and creates socket file
- Environment variable `DIALECTIC_IPC_PATH` is set
MCP server functionality:
- Server starts and registers the `present_review` tool
- Parameter validation works correctly
- IPC connection to extension succeeds
- Error messages are clear and helpful
Review display:
- Markdown renders correctly with proper formatting
- File references become clickable links
- Clicking references opens correct files at right lines
- Security measures prevent script execution
- Large content doesn't crash the extension
Cross-platform compatibility:
- Unix sockets work on macOS/Linux
- Named pipes work on Windows
- File path resolution works across platforms
Debugging
VSCode Extension Debugging
- Open the extension in VSCode:
cd extension
code .
- Start a debug session:
- Press F5 or go to Run → Start Debugging
- This opens a new VSCode window with the extension loaded
- Set breakpoints in TypeScript source files
- View debug output:
- Help → Toggle Developer Tools → Console
- Output panel → "Dialectic" channel
MCP Server Debugging
- Enable debug logging:
cd server
npm run dev -- --log-level debug
- Use the Node.js debugger:
node --inspect dist/index.js
Then connect with Chrome DevTools or the VSCode debugger
- Test IPC communication:
# Send a test message to the socket
echo '{"type":"present_review","payload":{"content":"# Test","mode":"replace"},"id":"test"}' | nc -U /path/to/socket
Common Issues
Extension not loading:
- Check for TypeScript compilation errors: `npm run compile`
- Verify package.json activation events and contributions
- Check VSCode version compatibility
IPC connection failures:
- Verify socket file permissions and location
- Check environment variable is set correctly
- Ensure both processes are running
File references not working:
- Verify workspace root detection
- Check file path resolution logic
- Test with different file path formats
Performance Testing
Load Testing
Test with large review content:
// Generate large review for testing
const largeContent = Array(1000).fill(0).map((_, i) =>
`## Section ${i}\nContent for section ${i} with [file${i}.ts:${i}][] reference.`
).join('\n\n');
Memory Profiling
Monitor memory usage during development:
# Extension memory usage
code --inspect-extensions=9229
# Server memory usage
node --inspect --max-old-space-size=4096 dist/index.js
Continuous Integration
GitHub Actions
The repository includes CI workflows for:
- Build verification: Ensure all components compile
- Test execution: Run unit and integration tests
- Security scanning: Check for vulnerabilities
- Documentation: Verify mdbook builds successfully
Pre-commit Hooks
Install pre-commit hooks to ensure code quality:
npm install -g husky
husky install
Hooks include:
- TypeScript compilation check
- ESLint code style validation
- Unit test execution
- Documentation link validation
Release Process
Version Management
- Update version numbers:
# Extension
cd extension && npm version patch
# Server
cd server && npm version patch
- Build release packages:
cd extension && npm run package
cd server && npm run package
- Test release packages:
- Install extension from .vsix file
- Install server from .tgz file
- Verify end-to-end functionality
- Create a GitHub release:
- Tag version in git
- Upload packages to release
- Update documentation
Distribution
- VSCode Extension: Published to VSCode Marketplace
- MCP Server: Published to npm registry
- Documentation: Deployed to GitHub Pages
This development workflow ensures reliable builds, comprehensive testing, and smooth releases for both contributors and users.
Design & Implementation Overview
This section documents the design decisions and implementation details for Dialectic. It serves as both a design document during development and a reference for future contributors.
Architecture Summary
Dialectic consists of two main components that communicate via Unix socket IPC:
- VSCode Extension - Provides the review panel UI, handles file navigation, and acts as IPC server
- MCP Server - Acts as a bridge between AI assistants and the VSCode extension via IPC client
The AI assistant generates review content as structured markdown and uses the MCP server's `present-review` tool to display it in the VSCode interface through bidirectional IPC communication.
Communication Architecture
AI Assistant → MCP Tool → Unix Socket → VSCode Extension → Review Panel → User
↑ ↓
└─────────── Response ←─────── IPC Response ←─────── User Interaction ←┘
Key Design Decisions:
- Unix Socket/Named Pipe: Secure, efficient local IPC following VSCode extension patterns
- JSON Message Protocol: Simple, debuggable, and extensible communication format
- Promise-Based Tracking: Supports concurrent operations with unique message IDs
- Environment Variable Discovery: VSCode extension sets `DIALECTIC_IPC_PATH` for automatic MCP server connection
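The discovery step amounts to reading one environment variable and failing loudly when it is absent. The sketch below is hypothetical; `discoverSocketPath` is not a function from the repository.

```typescript
// Hypothetical sketch of environment-variable discovery: the MCP server
// reads DIALECTIC_IPC_PATH, which the extension sets on activation.
function discoverSocketPath(env: NodeJS.ProcessEnv = process.env): string {
  const socketPath = env.DIALECTIC_IPC_PATH;
  if (!socketPath) {
    throw new Error(
      "DIALECTIC_IPC_PATH is not set - run from a VSCode terminal with the Dialectic extension active"
    );
  }
  return socketPath;
}
```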
Design Philosophy
Dialectic embodies the collaborative patterns from the socratic shell project. The goal is to enable genuine pair programming partnerships with AI assistants, not create another tool for giving commands to a servant.
Collaborative Review - Instead of accepting hunks blindly or micro-managing every change, we work together to understand what was built and why, just like reviewing a colleague's PR.
Thoughtful Interaction - The review format encourages narrative explanation and reasoning, not just "what changed" but "how it works" and "why these decisions were made."
Preserved Context - Reviews become part of your git history, retaining the collaborative thinking process for future reference and team members.
Iterative Refinement - Nothing is right the first time. The review process expects and supports ongoing dialogue, suggestions, and improvements rather than assuming the initial implementation is final.
Implementation Status
✅ MVP Complete - All core features implemented and tested:
- Review Display: Tree-based markdown rendering in VSCode sidebar
- Code Navigation: Clickable `file:line` references that jump to code locations
- Content Export: Copy button to export review content for commit messages
- IPC Communication: Full bidirectional communication between AI and extension
Current State: Ready for end-to-end testing with real AI assistants in VSCode environments.
Next Phase: Package extension for distribution and create installation documentation.
Technical Stack
- MCP Server: TypeScript/Node.js with comprehensive unit testing (49/49 tests passing)
- VSCode Extension: TypeScript with VSCode Extension API
- Communication: Unix domain sockets (macOS/Linux) and named pipes (Windows)
- Protocol: JSON messages with unique ID tracking and timeout protection
- Testing: Jest for unit tests with test mode for IPC-free testing
Component Responsibilities
MCP Server (`server/`)
- Exposes the `present-review` tool to AI assistants
- Validates parameters and handles errors gracefully
- Manages the IPC client connection to the VSCode extension
- Supports concurrent operations with Promise-based tracking
- See `server/src/index.ts` for the main server implementation
VSCode Extension (`extension/`)
- Creates IPC server and sets environment variables
- Provides tree-based review display in sidebar
- Handles clickable navigation to code locations
- Manages copy-to-clipboard functionality
- See `extension/src/extension.ts` for activation logic
Shared Types (`server/src/types.ts`)
- Defines communication protocol interfaces
- Ensures type safety across IPC boundary
- Prevents protocol mismatches during development
Key Implementation Files
- `server/src/index.ts` - Main MCP server with tool handlers
- `server/src/ipc.ts` - IPC client communication logic
- `server/src/validation.ts` - Parameter validation and error handling
- `extension/src/extension.ts` - VSCode extension activation and IPC server
- `extension/src/reviewProvider.ts` - Tree view implementation and markdown parsing
- `server/src/__tests__/` - Comprehensive unit test suite
For detailed implementation specifics, refer to the source code and the inline comments marked with 💡 that explain non-obvious design decisions.
Communication Protocol
This chapter defines how the MCP server and VSCode extension communicate via Unix socket IPC.
Architecture Overview
The communication follows a client-server pattern:
- VSCode Extension = IPC Server (creates socket, listens for connections)
- MCP Server = IPC Client (connects to socket, sends messages)
- Discovery = Environment variable `DIALECTIC_IPC_PATH` set by the extension
Message Flow
- VSCode Extension starts and creates Unix socket/named pipe
- VSCode Extension sets the `DIALECTIC_IPC_PATH` environment variable
- AI Assistant calls the `present-review` MCP tool with review content
MCP tool with review content - MCP Server validates parameters and creates IPC message with unique ID
- MCP Server connects to socket (if not already connected) and sends JSON message
- VSCode Extension receives message, processes it, and updates review panel
- VSCode Extension sends JSON response back through socket
- MCP Server receives response, resolves Promise, and returns result to AI
Socket Management
Platform Compatibility
- Unix Domain Sockets (macOS/Linux): Socket files in VSCode extension storage
- Named Pipes (Windows): Windows pipe format with automatic cleanup
- Discovery: Extension sets environment variable for MCP server connection
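A platform switch along these lines could generate the endpoint name. This is illustrative only: the real extension keeps socket files in its VSCode storage directory, not `os.tmpdir()`, and the naming scheme here is an assumption.

```typescript
import * as os from "node:os";
import * as path from "node:path";

// Illustrative sketch: platform-appropriate IPC endpoint naming.
// Windows uses the named-pipe namespace; macOS/Linux use a socket file.
function ipcEndpointFor(id: string): string {
  if (process.platform === "win32") {
    return `\\\\.\\pipe\\dialectic-${id}`; // named pipe on Windows
  }
  return path.join(os.tmpdir(), `dialectic-${id}.sock`); // Unix domain socket
}
```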
Connection Lifecycle
- Extension creates socket on activation and sets environment variable
- MCP server connects when the first `present-review` tool call arrives
- Connection persists across multiple operations to avoid reconnection overhead
- Automatic cleanup when extension deactivates or MCP server closes
Message Protocol
Request/Response Pattern
All communication uses JSON messages with unique IDs for request/response correlation:
Request Message Structure:
- `type`: Message type identifier (currently only 'present-review')
- `payload`: Tool parameters (content, mode, optional section)
- `id`: UUID for response tracking
Response Message Structure:
- `id`: Matches the request message ID
- `success`: Boolean indicating the operation result
- `error`: Optional error message when success is false
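The two shapes can be written as TypeScript interfaces. Field names follow this description; `server/src/types.ts` holds the authoritative definitions, which may differ in detail.

```typescript
// Sketch of the protocol message shapes described above.
interface ReviewRequest {
  type: "present-review";
  payload: {
    content: string;                               // markdown review content
    mode: "replace" | "append" | "update-section"; // how to apply it
    section?: string;                              // for update-section mode
  };
  id: string; // UUID correlating request and response
}

interface ReviewResponse {
  id: string;       // matches the request id
  success: boolean; // operation result
  error?: string;   // set when success is false
}
```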
Review Update Modes
- replace: Complete content replacement (most common)
- append: Add content to end (for incremental reviews)
- update-section: Update specific section (MVP: append with header)
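The semantics of the three modes can be sketched as a pure function. The "MVP: append with header" behavior for `update-section` is taken from the list above; the function name and exact formatting are illustrative.

```typescript
type UpdateMode = "replace" | "append" | "update-section";

// Illustrative semantics for the three review update modes described above.
function applyReview(
  current: string,
  incoming: string,
  mode: UpdateMode,
  section?: string
): string {
  switch (mode) {
    case "replace":
      return incoming; // complete content replacement
    case "append":
      return current ? `${current}\n\n${incoming}` : incoming;
    case "update-section":
      // MVP behavior: append the new content under a section header
      return `${current}\n\n## ${section ?? "Update"}\n\n${incoming}`;
  }
}
```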
Error Handling
Connection Errors
- Missing Environment Variable: Clear error when not running in VSCode
- Socket Connection Failed: Network-level errors propagated to AI
- Extension Not Running: Connection refused handled gracefully
Message Errors
- Invalid JSON: Extension responds with error for malformed messages
- Unknown Message Type: Error response for unsupported operations
- Validation Failures: Parameter validation errors returned clearly
Timeout Protection
- 5-second timeout prevents hanging requests
- Automatic cleanup of timed-out operations
- Clear error messages for timeout scenarios
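The unique-ID tracking and timeout behavior described above combine naturally into one pattern. The class below is a hypothetical sketch, not the actual `IPCCommunicator`.

```typescript
// Hypothetical sketch of Promise-based request tracking with a timeout.
type Pending = {
  resolve: (value: unknown) => void;
  reject: (err: Error) => void;
  timer: ReturnType<typeof setTimeout>;
};

class RequestTracker {
  private pending = new Map<string, Pending>();

  // Register a request; the Promise settles on response or timeout.
  track(id: string, timeoutMs = 5000): Promise<unknown> {
    return new Promise((resolve, reject) => {
      const timer = setTimeout(() => {
        this.pending.delete(id); // automatic cleanup of timed-out operations
        reject(new Error(`Request ${id} timed out after ${timeoutMs}ms`));
      }, timeoutMs);
      this.pending.set(id, { resolve, reject, timer });
    });
  }

  // Called when a response with a matching id arrives.
  settle(id: string, result: unknown): void {
    const entry = this.pending.get(id);
    if (!entry) return; // unknown id or already timed out
    clearTimeout(entry.timer);
    this.pending.delete(id);
    entry.resolve(result);
  }
}
```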
Concurrency Support
Request Tracking
- Map-based tracking of pending requests by unique ID
- Support for multiple simultaneous `present-review` calls
- Proper cleanup of completed or failed requests
Resource Management
- Single persistent connection per MCP server instance
- Automatic socket cleanup on extension deactivation
- Proper disposal of VSCode API resources
Security Considerations
Local-Only Communication
- Unix sockets and named pipes are local-only by design
- No network exposure or remote access possible
- Socket files created with appropriate permissions
Input Validation
- All message parameters validated before processing
- TypeScript interfaces provide compile-time type safety
- Runtime validation prevents malformed message processing
Testing Strategy
Unit Testing
- Test mode bypasses real socket connections for unit tests
- Mock responses simulate success/error scenarios
- Comprehensive coverage of error conditions and edge cases
Integration Testing
- Environment detection verification
- Platform-specific socket path generation testing
- Message protocol serialization/deserialization validation
Implementation References
For specific implementation details, see:
- `server/src/ipc.ts` - IPC client implementation with connection management
- `extension/src/extension.ts` - IPC server setup and message handling
- `server/src/types.ts` - Shared message protocol interfaces
- `server/src/__tests__/ipc.test.ts` - Comprehensive test coverage
The protocol is designed to be simple, reliable, and extensible for future enhancements while maintaining backward compatibility.
Security Considerations
This section documents the security measures implemented in Dialectic's markdown rendering and IPC communication.
Webview Security Model
Dialectic renders markdown content in VSCode webviews, which operate in a sandboxed environment with strict security policies. Our security approach implements defense-in-depth with multiple layers of protection.
Content Security Policy (CSP)
The webview includes proper CSP headers with nonce-based script execution:
<meta http-equiv="Content-Security-Policy"
content="default-src 'none';
script-src 'nonce-${nonce}';
style-src ${webview.cspSource};">
Key protections:
- `default-src 'none'` - Blocks all resources by default
- `script-src 'nonce-${nonce}'` - Only allows scripts with the generated nonce
- Nonce generated using `crypto.randomBytes` for each render
- Prevents unauthorized script injection while allowing legitimate functionality
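Nonce generation is a one-liner with Node's crypto module, mirroring the `crypto.randomBytes` approach named above:

```typescript
import { randomBytes } from "node:crypto";

// Generate a fresh nonce for each webview render; the same value is
// interpolated into the CSP header and the <script> tag.
function getNonce(): string {
  return randomBytes(16).toString("base64");
}
```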
HTML Sanitization
DOMPurify provides additional sanitization of the HTML output from markdown-it:
const cleanHtml = DOMPurify.sanitize(renderedHtml, {
ADD_ATTR: ['data-file-ref'],
ALLOWED_TAGS: [...],
ALLOWED_ATTR: [...]
});
Security benefits:
- Runs in isolated JSDOM environment to prevent DOM manipulation attacks
- Configured to preserve necessary `data-file-ref` attributes for functionality
- Blocks potentially malicious content that could bypass CSP
- Defense-in-depth against XSS and other webview vulnerabilities
Secure Link Handling
File references use data attributes instead of href-based manipulation:
// ✅ Secure approach - data attributes
<a data-file-ref="src/auth.ts:23">src/auth.ts:23</a>
// ❌ Vulnerable approach - href manipulation
<a href="javascript:openFile('src/auth.ts:23')">src/auth.ts:23</a>
Advantages:
- Prevents URL-based attacks and malformed protocol injection
- More controlled link processing through event delegation
- Prevents accidental navigation that could escape the webview context
- Clear separation between display and functionality
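On the rendering side, a helper along these lines would emit the data-attribute form. This is illustrative (not the extension's code) and assumes the reference text has already been validated to contain no HTML-significant characters.

```typescript
// Illustrative helper: render a file reference using a data attribute
// instead of an href, per the secure approach above. Assumes `file` has
// been validated upstream (no quotes or angle brackets).
function renderFileRefLink(file: string, line: number): string {
  const ref = `${file}:${line}`;
  return `<a data-file-ref="${ref}">${ref}</a>`;
}
```

In the webview, a single delegated click listener can then read `data-file-ref` from the clicked element and post a message to the extension host, rather than letting the browser follow any URL.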
IPC Communication Security
Process Isolation
The MCP server and VSCode extension run as separate processes, providing natural security boundaries:
- MCP Server: Runs in AI assistant's process context with limited permissions
- VSCode Extension: Runs in VSCode's extension host with VSCode's security model
- Communication: Only through well-defined IPC protocol with structured messages
Input Validation
All IPC messages undergo validation before processing:
// Message structure validation
interface IPCMessage {
id: string; // UUID for request correlation
type: string; // Message type validation
content: string; // Markdown content (sanitized before rendering)
mode: 'replace' | 'append' | 'update-section'; // Enum validation
}
Validation layers:
- JSON schema validation for message structure
- Content-type validation for markdown input
- Mode parameter validation against allowed values
- Error handling for malformed or oversized messages
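The layers above might compose as follows. This is a hypothetical sketch; the repository's actual validation lives in `server/src/validation.ts`, and the flat field layout here follows the interface shown just above.

```typescript
// Hypothetical sketch of layered IPC message validation:
// JSON parsing, then structural checks, then enum validation for mode.
const ALLOWED_MODES = ["replace", "append", "update-section"];

type ValidationResult =
  | { ok: true; message: { id: string; content: string; mode: string } }
  | { ok: false; error: string };

function validateIpcMessage(raw: string): ValidationResult {
  let parsed: unknown;
  try {
    parsed = JSON.parse(raw); // layer 1: well-formed JSON
  } catch {
    return { ok: false, error: "Invalid JSON" };
  }
  const msg = parsed as Record<string, unknown>;
  if (typeof msg.id !== "string" || typeof msg.content !== "string") {
    return { ok: false, error: "Missing or mistyped id/content" }; // layer 2
  }
  if (typeof msg.mode !== "string" || !ALLOWED_MODES.includes(msg.mode)) {
    return { ok: false, error: `Invalid mode: ${String(msg.mode)}` }; // layer 3
  }
  return { ok: true, message: { id: msg.id, content: msg.content, mode: msg.mode } };
}
```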
Threat Model
What We're Protecting Against
Primary threats:
- Malicious markdown content injecting scripts or HTML
- Crafted file references attempting to access unauthorized locations
- IPC message injection or manipulation
- Webview escape attempts through malformed content
Secondary considerations:
- Accidental vulnerabilities from trusted AI assistant content
- Edge cases in markdown parsing that could be exploited
- Future expansion to untrusted content sources
What We're Not Defending Against
Out of scope:
- Malicious AI assistants actively trying to attack the user (trust model assumes collaborative AI)
- VSCode extension host vulnerabilities (rely on VSCode's security model)
- Operating system level attacks (outside application boundary)
- Network-based attacks (all communication is local IPC)
Security Best Practices
For Contributors
When modifying security-sensitive code:
- Validate all inputs at IPC and webview boundaries
- Use parameterized queries for any dynamic content generation
- Test with malicious inputs including script tags, unusual protocols, oversized content
- Follow principle of least privilege - only enable minimum required capabilities
- Update CSP headers when adding new webview functionality
For AI Assistants
When generating review content:
- Use standard markdown syntax - avoid HTML tags unless necessary
- Validate file paths before including in reviews
- Keep content reasonable size to avoid resource exhaustion
- Use standard file:line reference format for navigation links
Security Updates
This security model was implemented as part of the markdown rendering architectural refactor (July 2024). Key improvements over the previous approach:
- Replaced fragile VSCode internal APIs with industry-standard markdown-it
- Added comprehensive CSP and DOMPurify sanitization
- Implemented secure data-attribute-based link handling
- Established clear security boundaries between components
Future security enhancements should maintain these defense-in-depth principles while enabling new functionality.
MCP Server Design
This chapter details the design and implementation approach of the MCP server component.
Role and Responsibilities
The MCP server acts as a thin communication bridge between AI assistants and the VSCode extension. It does not generate or understand review content - that intelligence stays with the AI assistant.
Key Responsibilities:
- Expose the `present-review` tool to AI assistants via the MCP protocol
- Validate tool parameters and provide clear error messages
- Establish and maintain IPC connection to VSCode extension
- Forward review content through Unix socket with proper error handling
- Support concurrent operations with unique message tracking
Architecture
┌─────────────────┐ MCP Protocol ┌─────────────────┐ Unix Socket ┌─────────────────┐
│ AI Assistant │ ←─────────────────→ │ MCP Server │ ←─────────────────→ │ VSCode Extension│
└─────────────────┘ └─────────────────┘ └─────────────────┘
The MCP server operates as:
- MCP Protocol Server: Handles stdio communication with AI assistants
- IPC Client: Connects to VSCode extension's Unix socket server
- Message Bridge: Translates between MCP tool calls and IPC messages
Core Tool: present-review
The primary tool exposed by the MCP server provides structured guidance to AI assistants:
Tool Parameters:
- `content` (required): Markdown review content with structured format
- `mode` (optional): How to handle content - 'replace', 'update-section', or 'append'
- `section` (optional): Section name for update-section mode
AI Guidance Strategy: The tool description provides multi-line structured guidance including:
- Clear purpose and usage instructions
- Content structure recommendations (summary, findings, suggestions)
- Code reference format (
file:line
pattern) - Parameter usage examples and best practices
Implementation Approach
Technology Stack
- Language: TypeScript running on Node.js
- MCP SDK: Official ModelContextProtocol SDK for protocol handling
- Transport: StdioServerTransport for AI assistant communication
- IPC: Node.js `net` module for Unix socket communication
- Testing: Jest with comprehensive unit test coverage
Core Components
DialecticMCPServer: Main server class orchestrating MCP protocol handling
- Initializes MCP server with tool capabilities
- Sets up IPC communicator for VSCode connection
- Handles tool registration and request routing
IPCCommunicator: Manages Unix socket communication with VSCode extension
- Handles connection establishment and lifecycle
- Implements Promise-based request tracking with unique IDs
- Provides timeout protection and error recovery
Validation Module: Comprehensive parameter validation
- Type checking and format validation for all tool parameters
- Clear error messages for invalid inputs
- Runtime safety for all user-provided data
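As a sketch of this validation behavior (the real module lives in `server/src/validation.ts` and may differ in detail; the error strings mirror the ones documented later in this chapter):

```typescript
// Sketch of the validation described above; details may differ from the
// actual server/src/validation.ts implementation.
interface PresentReviewParams {
  content: string;
  mode: 'replace' | 'update-section' | 'append';
  section?: string;
  baseUri: string;
}

function validatePresentReviewParams(args: unknown): PresentReviewParams {
  const a = (args ?? {}) as Record<string, unknown>;
  if (typeof a.content !== 'string' || a.content.length === 0) {
    throw new Error('Content parameter is required');
  }
  const mode = (a.mode ?? 'replace') as string;
  if (!['replace', 'update-section', 'append'].includes(mode)) {
    throw new Error("Mode must be 'replace', 'update-section', or 'append'");
  }
  if (mode === 'update-section' && typeof a.section !== 'string') {
    throw new Error('Section parameter required for update-section mode');
  }
  if (typeof a.baseUri !== 'string' || a.baseUri.length === 0) {
    throw new Error('baseUri parameter is required');
  }
  return {
    content: a.content,
    mode: mode as PresentReviewParams['mode'],
    section: typeof a.section === 'string' ? a.section : undefined,
    baseUri: a.baseUri,
  };
}
```

Centralizing validation this way keeps the tool handler itself small and ensures every error message gives the AI assistant actionable guidance.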
Design Patterns
Promise-Based Concurrency: Each request gets a unique ID and Promise for tracking
- Supports multiple simultaneous operations
- Clean async/await patterns throughout
- Proper error propagation and timeout handling
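The request-tracking pattern can be sketched in TypeScript; `RequestTracker` and its methods are illustrative names, not the actual `IPCCommunicator` API:

```typescript
// Sketch of promise-based request tracking with unique IDs and timeout
// protection. Names are illustrative.
import { randomUUID } from 'crypto';

type Pending = {
  resolve: (value: unknown) => void;
  reject: (reason: Error) => void;
  timer: ReturnType<typeof setTimeout>;
};

class RequestTracker {
  private pending = new Map<string, Pending>();

  // Register a request; the promise resolves when a reply with the same ID
  // arrives, or rejects after timeoutMs (the design uses 5 seconds).
  track(timeoutMs = 5000): { id: string; promise: Promise<unknown> } {
    const id = randomUUID();
    const promise = new Promise<unknown>((resolve, reject) => {
      const timer = setTimeout(() => {
        this.pending.delete(id);
        reject(new Error(`Request ${id} timed out after ${timeoutMs}ms`));
      }, timeoutMs);
      this.pending.set(id, { resolve, reject, timer });
    });
    return { id, promise };
  }

  // Called when a reply arrives on the socket; late or unknown replies
  // are silently ignored.
  settle(id: string, payload: unknown): void {
    const entry = this.pending.get(id);
    if (!entry) return;
    clearTimeout(entry.timer);
    this.pending.delete(id);
    entry.resolve(payload);
  }
}
```

Because each request carries its own UUID, multiple tool calls can be in flight over the same socket without their responses being confused.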
Test Mode Architecture: Constructor parameter enables testing without real sockets
- Allows comprehensive unit testing
- Simulates success/error scenarios
- Maintains same interface as production mode
Environment-Based Configuration: Uses environment variables for discovery
- VSCode extension sets `DIALECTIC_IPC_PATH`
- Clear error messages when not in VSCode environment
- No hardcoded paths or configuration files
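A minimal sketch of this discovery logic (`DIALECTIC_IPC_PATH` is the variable named in this design; the error wording is illustrative):

```typescript
// Sketch of environment-based socket discovery. The helper name and error
// text are illustrative, not the actual implementation.
function discoverSocketPath(
  env: Record<string, string | undefined> = process.env
): string {
  const socketPath = env.DIALECTIC_IPC_PATH;
  if (!socketPath) {
    throw new Error(
      'DIALECTIC_IPC_PATH is not set. Run this server from a terminal ' +
      'inside VSCode with the Dialectic extension active.'
    );
  }
  return socketPath;
}
```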
Error Handling Strategy
Connection Management
- Environment Detection: Clear error when `DIALECTIC_IPC_PATH` is not set
- Socket Failures: Network-level errors propagated with context
- Connection Loss: Automatic detection and graceful degradation
Message Processing
- Timeout Protection: 5-second timeout prevents hanging operations
- Invalid Responses: Proper error handling for malformed messages
- Concurrent Safety: Request tracking prevents ID collisions
User Experience
- Clear Error Messages: Specific guidance for common issues
- Graceful Degradation: Continues operation when possible
- Debug Information: Detailed logging for troubleshooting
Testing Strategy
Unit Testing Approach
- Test Mode: Bypasses real socket connections for isolated testing
- Mock Scenarios: Simulates various success/error conditions
- Edge Cases: Comprehensive coverage of error conditions
- Concurrent Operations: Validates multi-request scenarios
Integration Testing
- Environment Validation: Tests proper error when not in VSCode
- Protocol Compliance: Validates MCP protocol adherence
- Message Serialization: Tests JSON protocol implementation
Performance Characteristics
Memory Management
- Request Cleanup: Automatic cleanup of completed/timed-out requests
- Connection Reuse: Single persistent socket per server instance
- Efficient JSON: Minimal serialization overhead
Concurrency Support
- Non-blocking Operations: Promise-based async patterns
- Unique Tracking: UUID-based request correlation
- Resource Isolation: Independent state per server instance
Implementation Files
Core Implementation:
- `server/src/index.ts` - Main server class and tool handlers
- `server/src/ipc.ts` - IPC client communication logic
- `server/src/validation.ts` - Parameter validation and error handling
- `server/src/types.ts` - Shared type definitions
Testing:
- `server/src/__tests__/` - Comprehensive unit test suite
- `server/test-mcp-server.js` - Integration test script
Configuration:
- `server/package.json` - Dependencies and scripts
- `server/tsconfig.json` - TypeScript configuration
- `server/jest.config.js` - Test configuration
Future Enhancements
Enhanced Code References
- Search-Based References: More resilient than line numbers
- Multi-File Support: Handle references across multiple files
- Context Awareness: Understand code structure and relationships
Advanced Review Modes
- Streaming Updates: For very large review content
- Diff-Based Updates: Minimize data transfer for incremental changes
- Multi-Review Sessions: Support multiple concurrent contexts
Monitoring and Observability
- Request Metrics: Track latency and success rates
- Connection Health: Monitor IPC stability
- Error Analytics: Aggregate patterns for debugging
The MCP server is designed to be simple, reliable, and extensible while maintaining a clear separation of concerns between AI intelligence and communication infrastructure.
MCP Tool Interface
This section documents the `present_review` tool that AI assistants use to display code reviews in VSCode.
Tool Overview
The `present_review` tool is the primary interface between AI assistants and the Dialectic system. It accepts markdown content and displays it in the VSCode review panel with clickable file references and proper formatting.
Tool Definition
The tool is registered with the MCP server and exposed to AI assistants:
// 💡: Register the present-review tool that AI assistants can call
// to display code reviews in the VSCode extension
this.server.setRequestHandler(ListToolsRequestSchema, async () => {
return {
tools: [
{
name: 'present_review',
description: [
'Display a code review in the VSCode review panel.',
'Reviews should be structured markdown with clear sections and actionable feedback.',
'Use [`filename:line`][] format for file references (rustdoc-style).'
].join(' '),
inputSchema: {
type: 'object',
properties: {
content: {
type: 'string',
description: [
'Markdown content of the review. Should include:',
'1) Brief summary suitable for commit message,',
'2) Detailed findings with file references,',
'3) Specific suggestions for improvement.',
'Use [`filename:line`][] format for file references.',
'Example: "Updated authentication in [`src/auth.rs:45`][]"'
].join(' '),
},
baseUri: {
type: 'string',
description: 'Base directory path for resolving relative file references (required).',
},
mode: {
type: 'string',
enum: ['replace', 'update-section', 'append'],
description: [
'How to handle the review content:',
'replace (default) - replace entire review,',
'update-section - update specific section,',
'append - add to existing review'
].join(' '),
default: 'replace',
},
section: {
type: 'string',
description: [
'Section name for update-section mode',
'(e.g., "Summary", "Security Issues", "Performance")'
].join(' '),
},
},
required: ['content', 'baseUri'],
},
} satisfies Tool,
{
name: 'get_selection',
description: [
'Get the currently selected text from any active editor in VSCode.',
'Works with source files, review panels, and any other text editor.',
'Returns null if no text is selected or no active editor is found.'
].join(' '),
inputSchema: {
type: 'object',
properties: {},
additionalProperties: false,
},
} satisfies Tool,
],
};
});
Parameters
The tool accepts parameters defined by the `PresentReviewParams` interface:
/**
* Parameters for the present-review MCP tool
*/
export interface PresentReviewParams {
/** Markdown content of the review to display */
content: string;
/** How to handle the review content in the extension */
mode: 'replace' | 'update-section' | 'append';
/** Optional section name for update-section mode */
section?: string;
/** Base directory path for resolving relative file references */
baseUri: string;
}
Parameter Details
`content` (required)
- Type: `string`
- Description: Markdown content of the review to display
- Format: Standard markdown with support for file references using `[filename:line][]` syntax
- Example:
# Authentication System Implementation
Added user authentication with secure session handling.
## Key Changes
- Login endpoint ([`src/auth.ts:23`][])
- User model updates ([`src/models/user.ts:45`][])
`mode` (optional, defaults to `'replace'`)
- Type: `'replace' | 'update-section' | 'append'`
- Description: How to handle the review content in the extension
- Values:
  - `'replace'`: Replace entire review panel content
  - `'update-section'`: Update specific section (requires `section` parameter)
  - `'append'`: Add content to end of existing review
`section` (optional)
- Type: `string`
- Description: Section name for `update-section` mode
- Usage: Allows updating specific parts of a review without replacing everything
`baseUri` (required)
- Type: `string`
- Description: Base directory path for resolving relative file references
- Usage: Ensures file references resolve correctly as clickable links in VSCode
Response Format
The tool returns a `PresentReviewResult`:
/**
* Response from the present-review tool
*/
export interface PresentReviewResult {
/** Whether the review was successfully presented */
success: boolean;
/** Optional message about the operation */
message?: string;
}
Implementation Flow
When an AI assistant calls the tool, the following sequence occurs:
// 💡: Handle present-review tool calls by forwarding to VSCode extension via IPC
this.server.setRequestHandler(CallToolRequestSchema, async (request) => {
const { name, arguments: args } = request.params;
if (name === 'present_review') {
await this.ipc.sendLog('info', `Received present_review tool call with ${Object.keys(args || {}).length} parameters`);
try {
// Validate and extract parameters
const params = validatePresentReviewParams(args);
await this.ipc.sendLog('debug', `Parameters validated successfully: mode=${params.mode}, content length=${params.content.length}`);
// Forward to VSCode extension via IPC
await this.ipc.sendLog('info', 'Forwarding review to VSCode extension via IPC...');
const result = await this.ipc.presentReview(params);
if (result.success) {
await this.ipc.sendLog('info', 'Review successfully displayed in VSCode');
} else {
await this.ipc.sendLog('error', `Failed to display review in VSCode: ${result.message}`);
}
return { content: [{ type: 'text', text: JSON.stringify(result) }] };
} catch (error) {
await this.ipc.sendLog('error', `present_review failed: ${error instanceof Error ? error.message : String(error)}`);
throw error;
}
}
});
- Parameter Validation: Input parameters are validated against the schema
- IPC Communication: Valid parameters are forwarded to the VSCode extension via Unix socket
- Review Display: Extension processes the markdown and updates the review panel
- Response: Success/failure result is returned to the AI assistant
Usage Examples
Basic Review Display
{
"name": "present_review",
"arguments": {
"content": "# Code Review\n\nImplemented user authentication system.\n\n## Changes\n- Added login endpoint ([`src/auth.ts:23`][])\n- Updated user model ([`src/models/user.ts:45`][])",
"mode": "replace",
"baseUri": "/absolute/path/to/project"
}
}
Appending Additional Context
{
"name": "present_review",
"arguments": {
"content": "\n## Security Considerations\n\nThe authentication system uses bcrypt for password hashing ([`src/auth.ts:67`][]).",
"mode": "append",
"baseUri": "/absolute/path/to/project"
}
}
Updating Specific Section
{
"name": "present_review",
"arguments": {
"content": "## Updated Implementation Details\n\nRefactored the login flow to use JWT tokens ([`src/auth.ts:89`][]).",
"mode": "update-section",
"section": "Implementation Details",
"baseUri": "/absolute/path/to/project"
}
}
File Reference Format
File references should use the rustdoc-style format: `[filename:line][]`
Supported formats:
- `[src/auth.ts:23][]` - Links to line 23 in src/auth.ts
- `[README.md:1][]` - Links to line 1 in README.md
- `[package.json:15][]` - Links to line 15 in package.json
Processing:
- References are converted to clickable links in the review panel
- Clicking a reference opens the file at the specified line in VSCode
- References point at fixed line numbers, which can drift as code changes (search-based references are a planned improvement)
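The `file:line` parsing that underpins this navigation can be sketched as follows; `parseFileRef` is a hypothetical helper name, and the regex mirrors the pattern used elsewhere in this book:

```typescript
// Sketch of splitting a `filename:line` reference into its parts.
// The helper name is illustrative.
function parseFileRef(ref: string): { file: string; line: number } | null {
  const match = /^([^:]+):(\d+)$/.exec(ref);
  if (!match) return null; // not a file reference
  return { file: match[1], line: parseInt(match[2], 10) };
}
```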
Error Handling
The tool validates all parameters and returns appropriate error messages:
- Missing content: "Content parameter is required"
- Invalid mode: "Mode must be 'replace', 'update-section', or 'append'"
- Missing section: "Section parameter required for update-section mode"
- IPC failure: "Failed to communicate with VSCode extension"
Best Practices for AI Assistants
Review Structure
- Start with a clear summary of what was implemented
- Use logical sections (Context, Changes Made, Implementation Details)
- Include file references for all significant code locations
- End with design decisions and rationale
File References
- Reference the most important lines, not every change
- Use descriptive context around references
- Group related references together
- Prefer function/class entry points over implementation details
Content Updates
- Use `replace` mode for new reviews
- Use `append` mode to add context or respond to questions
- Use `update-section` mode sparingly, only for targeted updates
- Keep individual tool calls focused and coherent
This tool interface enables rich, interactive code reviews that bridge the gap between AI-generated insights and IDE-native navigation.
AI Guidance Design Considerations
This section documents design decisions made specifically to work well with AI collaboration patterns from the socratic shell ecosystem.
Collaborative Partnership Model
Dialectic is designed around the socratic shell philosophy of genuine AI-human collaboration rather than command-and-control interactions. This influences several key design decisions:
Review as Dialogue, Not Report
Traditional code review tools present static snapshots. Dialectic treats reviews as living documents that evolve through conversation:
- Incremental updates: The `append` and `update-section` modes allow reviews to grow organically
- Conversational flow: Reviews can respond to questions and incorporate new insights
- Preserved context: Previous review content remains visible, maintaining conversation history
Narrative Over Checklist
AI assistants excel at providing narrative explanations rather than mechanical summaries:
- Story-driven structure: Reviews explain "how it works" and "why these decisions"
- Contextual reasoning: Design decisions and trade-offs are preserved alongside code
- Human-readable format: Markdown optimizes for human understanding, not machine parsing
File Reference Philosophy
Rustdoc-Style References
The `[filename:line][]` format was chosen to align with AI assistant natural language patterns:
The authentication flow starts in [`src/auth.ts:23`][] and validates tokens using [`src/utils/jwt.ts:45`][].
Design rationale:
- Natural integration: References flow naturally in explanatory text
- No reference definitions: AI doesn't need to maintain separate reference sections
- Familiar syntax: Similar to rustdoc and other documentation tools AI assistants know
Semantic Navigation
File references point to semantically meaningful locations, not just changed lines:
- Function entry points: Reference where functionality begins, not implementation details
- Key decision points: Highlight where important choices are made
- Interface boundaries: Show how components connect and communicate
Tool Interface Design
Single Focused Tool
Rather than multiple specialized tools, Dialectic provides one flexible `present_review` tool:
Benefits for AI collaboration:
- Cognitive simplicity: AI assistants can focus on content, not tool selection
- Flexible modes: Same tool handles different update patterns naturally
- Clear purpose: Unambiguous tool function reduces decision complexity
Forgiving Parameter Handling
The tool accepts optional parameters and provides sensible defaults:
// Minimal usage - content and baseUri are the only required fields
{ content: "# Review content", baseUri: "/project" }
// Full control when needed
{ content: "...", mode: "update-section", section: "Implementation", baseUri: "/project" }
AI-friendly aspects:
- Progressive disclosure: Simple cases are simple, complex cases are possible
- Clear error messages: Validation errors guide AI toward correct usage
- Flexible content: No rigid structure requirements for markdown content
Integration with Socratic Shell Patterns
Meta Moments
Dialectic supports the socratic shell "meta moment" pattern where collaboration itself becomes a topic:
- Review evolution: AI can explain how understanding changed during implementation
- Process reflection: Reviews can include notes about the collaborative process
- Learning capture: Insights about effective collaboration patterns are preserved
Beginner's Mind
The system encourages fresh examination rather than pattern matching:
- No templates: Reviews aren't forced into rigid structures
- Contextual adaptation: Format adapts to what was actually built, not preconceptions
- Open-ended exploration: AI can follow interesting threads without constraint
Persistent Memory
Reviews become part of the project's persistent memory:
- Commit message integration: Reviews can become commit messages, preserving reasoning
- Searchable history: Past reviews remain accessible for future reference
- Knowledge accumulation: Understanding builds over time rather than being lost
Technical Decisions Supporting AI Collaboration
Markdown as Universal Format
Markdown was chosen as the review format because:
- AI native: Most AI assistants are trained extensively on markdown
- Human readable: Developers can read and edit reviews directly
- Tool agnostic: Works across different AI assistants and development environments
- Version controllable: Reviews can be committed alongside code
Stateless Tool Design
The `present_review` tool is stateless, requiring no session management:
- Reliable operation: Each tool call is independent and self-contained
- Error recovery: Failed calls don't corrupt ongoing state
- Concurrent usage: Multiple AI assistants could theoretically use the same instance
Graceful Degradation
The system works even when components fail:
- Extension offline: MCP server provides helpful error messages
- IPC failure: Clear feedback about connection issues
- Malformed content: Security measures prevent crashes while showing errors
Future AI Integration Opportunities
Enhanced Code Understanding
The foundation supports future AI-powered features:
- Semantic file references: `function:methodName` or `class:ClassName` references
- Intelligent summarization: AI could generate section summaries automatically
- Cross-review connections: Link related reviews across different changes
Collaborative Learning
The system could learn from successful collaboration patterns:
- Review quality metrics: Track which review styles lead to better outcomes
- Reference effectiveness: Learn which file references are most helpful
- Conversation patterns: Identify successful dialogue structures
Multi-AI Coordination
The architecture could support multiple AI assistants:
- Specialized reviewers: Different AIs for security, performance, architecture
- Consensus building: Multiple perspectives on the same changes
- Knowledge sharing: AIs learning from each other's review approaches
These design considerations ensure Dialectic enhances rather than constrains the natural collaborative patterns that emerge between humans and AI assistants.
VSCode Extension Design
This chapter details the design and implementation approach of the VSCode extension component.
Goal
The VSCode extension provides a simple, focused interface for displaying and interacting with AI-generated code reviews. It eliminates the need to scroll through terminal output by bringing reviews directly into the IDE as a first-class citizen.
Core Functionality
The extension enables three essential capabilities:
1. Review Display - Pop up a dedicated panel when the AI assistant presents a review, showing the structured markdown content with proper formatting
2. Code Navigation - Make `file:line` references in the review clickable, allowing instant navigation to the referenced code locations in the editor
3. Content Export - Provide a "Copy" button to copy the review content to the clipboard for use in commit messages, documentation, or sharing
These three features support the core workflow: AI generates review → user reads and navigates → user exports for further use.
Architecture
The extension operates as both a UI component and an IPC server:
┌─────────────────┐ IPC Server ┌─────────────────┐ Tree View API ┌─────────────────┐
│ MCP Server │ ←─────────────────→ │ VSCode Extension│ ←─────────────────→ │ VSCode UI │
└─────────────────┘ └─────────────────┘ └─────────────────┘
Key Responsibilities:
- Create and manage Unix socket IPC server for MCP communication
- Set environment variables for MCP server discovery
- Process incoming review messages and update UI components
- Provide tree-based review display with clickable navigation
- Handle copy-to-clipboard functionality
Implementation Approach
Technology Stack
- Language: TypeScript with VSCode Extension API
- UI Framework: VSCode TreeView API for hierarchical review display
- IPC: Node.js `net` module for Unix socket server
- Markdown Processing: Custom parser for tree structure generation
- Navigation: VSCode editor commands for file/line jumping
Core Components
Extension Activation: Main entry point that sets up all functionality
- Creates review provider for tree-based display
- Registers tree view with VSCode's sidebar
- Sets up IPC server for MCP communication
- Registers commands and cleanup handlers
IPC Server Implementation: Creates platform-specific socket and handles connections
- Generates appropriate socket paths for Unix/Windows
- Listens for MCP server connections
- Processes incoming review messages
- Sets environment variables for discovery
Review Provider: Implements VSCode's TreeDataProvider interface
- Manages review content and tree structure
- Handles dynamic content updates (replace/append/update-section)
- Provides clickable navigation for code references
- Supports copy-to-clipboard functionality
Design Patterns
Tree-Based UI: Hierarchical display matching markdown structure
- Headers become expandable sections
- Content items become clickable navigation points
- Icons differentiate between content types
- Automatic refresh when content updates
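A minimal sketch of how markdown headers might map to this tree shape (the extension's actual parser is more complete, also handling content items and icons):

```typescript
// Sketch of building a header tree from markdown; node shape and function
// name are illustrative, not the extension's actual types.
interface TreeNode {
  label: string;
  level: number; // header depth: 1 for '#', 2 for '##', ...
  children: TreeNode[];
}

function buildTree(markdown: string): TreeNode[] {
  const roots: TreeNode[] = [];
  const stack: TreeNode[] = [];
  for (const line of markdown.split('\n')) {
    const m = /^(#{1,6})\s+(.*)$/.exec(line);
    if (!m) continue; // the real parser turns non-header lines into content items
    const node: TreeNode = { label: m[2], level: m[1].length, children: [] };
    // Pop until the top of the stack is a shallower header, then attach.
    while (stack.length && stack[stack.length - 1].level >= node.level) {
      stack.pop();
    }
    if (stack.length) {
      stack[stack.length - 1].children.push(node);
    } else {
      roots.push(node);
    }
    stack.push(node);
  }
  return roots;
}
```

A structure like this maps directly onto VSCode's TreeDataProvider contract: top-level nodes become sidebar sections and children become expandable items.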
Platform Compatibility: Handles different operating systems
- Unix domain sockets for macOS/Linux
- Named pipes for Windows
- Automatic cleanup and resource management
Event-Driven Updates: Reactive UI updates
- Tree view refreshes when content changes
- Immediate visual feedback for user actions
- Proper event handling for VSCode integration
User Interface Design
Sidebar Integration
The extension adds a top-level sidebar view similar to Explorer or Source Control:
- Dedicated activity bar icon
- Collapsible tree structure
- Context menus and commands
- Integrated with VSCode's theming system
Tree View Structure
📄 Review Title
├── 📁 Summary
│ ├── 🔤 Brief description of changes
│ └── 🔤 Key implementation decisions
├── 📁 Implementation Details
│ ├── 🔧 Authentication Flow (src/auth/middleware.ts:23) [clickable]
│ └── 🔧 Password Security (src/models/user.ts:67) [clickable]
└── 📁 Design Decisions
├── 🔤 Used JWT tokens for stateless authentication
└── 🔤 Chose bcrypt over other hashing algorithms
Interactive Features
- Clickable References: `file:line` patterns become navigation links
- Copy Functionality: Button to export review content
- Expand/Collapse: Tree sections can be expanded or collapsed
- Tooltips: Hover information for navigation hints
Message Processing
IPC Message Handling
The extension processes incoming messages from the MCP server:
- JSON Parsing: Validates and parses incoming messages
- Message Routing: Handles different message types appropriately
- Error Responses: Sends structured error responses for invalid messages
- Success Confirmation: Acknowledges successful operations
Content Updates
Supports three update modes for dynamic review management:
- Replace: Complete content replacement (most common)
- Append: Add content to end (for incremental reviews)
- Update-Section: Smart section updates (MVP: append with header)
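The three modes can be sketched as a pure function over the review text; `applyUpdate` is an illustrative name, and update-section is approximated as the MVP's "append with header" behavior:

```typescript
// Sketch of the three content-update modes. Function name is illustrative.
type UpdateMode = 'replace' | 'append' | 'update-section';

function applyUpdate(
  current: string,
  incoming: string,
  mode: UpdateMode,
  section?: string
): string {
  switch (mode) {
    case 'replace':
      return incoming; // complete content replacement
    case 'append':
      return current + '\n' + incoming; // add to end for incremental reviews
    case 'update-section':
      // MVP behavior: append under a section header rather than splicing in place.
      return current + '\n\n## ' + (section ?? 'Update') + '\n' + incoming;
  }
}
```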
Navigation Implementation
Converts markdown references to VSCode commands:
- Pattern Detection: Identifies `file:line` references in content
- Command Creation: Generates VSCode navigation commands
- Error Handling: Graceful handling of invalid file references
- User Feedback: Clear tooltips and visual indicators
Error Handling and Robustness
IPC Error Recovery
- Connection Failures: Continues operation even if IPC fails
- Malformed Messages: Proper error responses for invalid JSON
- Socket Cleanup: Automatic resource cleanup on extension deactivation
User Experience
- Clear Error Messages: Specific guidance for common issues
- Graceful Degradation: Extension remains functional during errors
- Visual Feedback: Loading states and operation confirmations
Resource Management
- Memory Efficiency: Proper disposal of VSCode API resources
- Socket Lifecycle: Clean creation and destruction of IPC sockets
- Event Cleanup: Proper removal of event listeners
Testing and Validation
Manual Testing Workflow
- Install extension in VSCode
- Verify environment variable setup in terminal
- Test MCP server connection and communication
- Validate review display, navigation, and copy functionality
Integration Points
- VSCode API: Tree view registration and command handling
- File System: Socket file creation and cleanup
- Clipboard: System clipboard integration
- Editor: File navigation and selection
Implementation Files
Core Implementation:
- `extension/src/extension.ts` - Main activation logic and IPC server setup
- `extension/src/reviewProvider.ts` - Tree view implementation and content management
- `extension/package.json` - Extension manifest and VSCode integration
Configuration:
- `extension/tsconfig.json` - TypeScript configuration
- `extension/.vscodeignore` - Files to exclude from extension package
Performance Considerations
Efficient Tree Updates
- Only re-parse markdown when content actually changes
- Use VSCode's built-in tree view virtualization
- Minimize DOM updates through proper event handling
Memory Management
- Automatic cleanup of socket connections
- Efficient string processing for markdown parsing
- Proper disposal of VSCode API resources
Future Enhancements
Advanced UI Features
- Review History: Navigate between previous review versions
- Diff View: Show changes between review iterations
- Search: Find specific content within large reviews
- Custom Themes: Support for personalized review styling
Enhanced Navigation
- Smart References: Support for search-based code references
- Multi-File: Handle references across multiple files
- Context Preview: Show code context without leaving review panel
Collaboration Features
- Comments: Add inline comments to review sections
- Sharing: Export reviews in various formats
- Integration: Connect with external review tools
The extension provides a solid foundation for AI-driven code reviews while maintaining simplicity and focus on the core user experience. The design emphasizes reliability, performance, and seamless integration with VSCode's existing workflows.
Markdown Rendering Pipeline
This section documents the markdown-to-HTML conversion system used in the VSCode extension.
Architecture Overview
Dialectic uses industry-standard markdown-it for markdown processing, with custom renderer rules for file reference handling and comprehensive security measures. This replaced the previous approach of using fragile VSCode internal APIs.
Markdown Content → markdown-it Parser → Custom Renderer Rules → HTML → DOMPurify → Secure Webview
Core Components
markdown-it Configuration
The markdown parser is configured with standard options and custom renderer rules:
// 💡: Using markdown-it aligns with 95% of VSCode extensions and provides
// token-based architecture for precise link control at parsing stage
const md = markdownIt({
html: true, // Enable HTML tags in source
linkify: true, // Auto-convert URLs to links
typographer: true // Enable smart quotes and other typography
});
Key benefits:
- Industry standard: Used by VSCode itself and 95% of markdown extensions
- Token-based architecture: Allows precise control over link handling at parse time
- Extensive plugin ecosystem: 2,000+ plugins available for future enhancements
- Performance: More efficient than post-processing HTML with regex
Custom Renderer Rules
File Reference Detection
Custom `link_open` renderer rule detects `filename:line` patterns and transforms them into secure, clickable references:
// Preserve markdown-it's default link renderer so the custom rule can delegate to it
const defaultRender = md.renderer.rules.link_open ||
((tokens, idx, options, env, self) => self.renderToken(tokens, idx, options));
md.renderer.rules.link_open = function(tokens, idx, options, env, self) {
const token = tokens[idx];
const href = token.attrGet('href');
// Detect file:line pattern
if (href && /^[^:]+:\d+$/.test(href)) {
// Transform to secure data attribute
token.attrSet('data-file-ref', href);
token.attrSet('href', '#');
token.attrSet('class', 'file-reference');
}
return defaultRender(tokens, idx, options, env, self);
};
Design decisions:
- Parse-time processing: More reliable than post-processing HTML
- Data attributes: Secure alternative to href manipulation
- Pattern matching: Flexible regex allows various file reference formats
Reference-Style Link Preprocessing
A core ruler preprocesses markdown source to convert `[filename:line][]` reference-style links into regular markdown links:
md.core.ruler.before('normalize', 'file_references', function(state) {
// Convert [`src/auth.ts:23`][] → [`src/auth.ts:23`](src/auth.ts:23)
state.src = state.src.replace(
/\[(`?)([^\]`]+:\d+)\1\]\[\]/g,
'[$1$2$1]($2)'
);
});
Benefits:
- Elegant syntax: Users can write `[file:line][]` without defining references
- Source-level transformation: Happens before parsing, integrates naturally
- Backward compatible: Regular `[text](file:line)` links still work
Security Integration
HTML Sanitization
After markdown-it renders to HTML, DOMPurify sanitizes the output while preserving necessary attributes:
const cleanHtml = DOMPurify.sanitize(renderedHtml, {
ADD_ATTR: ['data-file-ref'], // Preserve our custom attributes
FORBID_TAGS: ['script'], // Block dangerous tags
FORBID_ATTR: ['onclick'] // Block event handlers
});
Webview Integration
The sanitized HTML is inserted into a secure webview with proper CSP headers:
const webviewHtml = `
<!DOCTYPE html>
<html>
<head>
<meta http-equiv="Content-Security-Policy"
content="default-src 'none'; script-src 'nonce-${nonce}';">
</head>
<body>
${cleanHtml}
<script nonce="${nonce}">
// Event delegation for file reference clicks
document.addEventListener('click', function(e) {
if (e.target.dataset.fileRef) {
vscode.postMessage({
command: 'openFile',
fileRef: e.target.dataset.fileRef
});
}
});
</script>
</body>
</html>`;
File Reference Handling
Click Processing
JavaScript in the webview uses event delegation to handle clicks on file references:
- Click detection: Event listener captures clicks on elements with `data-file-ref`
- Message passing: Webview sends structured message to extension host
- File opening: Extension uses VSCode API to open file at specified line
Navigation Implementation
The extension processes file reference messages:
// Handle messages from webview
panel.webview.onDidReceiveMessage(message => {
if (message.command === 'openFile') {
const [filename, line] = message.fileRef.split(':');
const uri = vscode.Uri.file(path.resolve(workspaceRoot, filename));
vscode.window.showTextDocument(uri, {
selection: new vscode.Range(parseInt(line) - 1, 0, parseInt(line) - 1, 0)
});
}
});
Performance Considerations
Single-Pass Processing
The markdown-it pipeline processes everything in one pass:
- Parsing: Tokenization and AST generation
- Rendering: Custom rules applied during HTML generation
- No post-processing: Eliminates need for multiple regex operations
Efficient Token System
markdown-it's token-based architecture is more efficient than string manipulation:
- Structured data: Tokens contain metadata for precise processing
- Minimal overhead: Only processes relevant tokens
- Extensible: Easy to add new link types without performance impact
Future Enhancements
The markdown-it foundation enables easy extension:
Syntax Highlighting
md.use(require('markdown-it-highlightjs'));
Enhanced Features
- Tables: `markdown-it-table` for better table rendering
- Footnotes: `markdown-it-footnote` for academic-style references
- Math: `markdown-it-katex` for mathematical expressions
- Diagrams: `markdown-it-mermaid` for flowcharts and diagrams
Custom Extensions
- Function references: `function:methodName` links to method definitions
- Range references: `file:startLine-endLine` for code blocks
- Commit references: `commit:hash` links to git history
The modular architecture makes these additions straightforward without affecting core functionality or security measures.
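Before any of these custom reference types can be routed, their link targets have to be parsed into structured data. A minimal sketch of that parsing step is below; the `CustomRef` shape and the exact target grammars are illustrative assumptions, not the extension's actual API.

```typescript
// Hypothetical sketch: parse proposed custom link targets into structured
// references. The shapes and grammars here are illustrative only.
type CustomRef =
  | { kind: "function"; method: string }
  | { kind: "range"; file: string; startLine: number; endLine: number }
  | { kind: "commit"; hash: string };

function parseCustomRef(target: string): CustomRef | null {
  // function:methodName
  let m = /^function:(\w+)$/.exec(target);
  if (m) return { kind: "function", method: m[1] };
  // file:startLine-endLine
  m = /^([^:]+):(\d+)-(\d+)$/.exec(target);
  if (m) {
    return { kind: "range", file: m[1], startLine: Number(m[2]), endLine: Number(m[3]) };
  }
  // commit:hash (abbreviated or full SHA)
  m = /^commit:([0-9a-f]{7,40})$/.exec(target);
  if (m) return { kind: "commit", hash: m[1] };
  return null; // not a custom reference
}
```

A renderer rule could call `parseCustomRef` on each `href` and attach the parsed result as a `data-` attribute for the webview click handler to forward.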
RFC: Exposing IDE capabilities
A natural language interface to VSCode and Language Server Protocol features
Tracking Issue: #8
Problem Statement
Currently, AI assistants working with code need many specific MCP tools to interact with the IDE:
- `dialectic___get_selection` for getting selected text
- `builder_mcp___WorkspaceSearch` for finding code patterns
- Separate tools would be needed for each LSP feature (find references, go to definition, etc.)
This creates several problems:
- Tool selection overwhelm: Too many specific tools make it hard for AI to choose the right approach
- Inconsistent interfaces: Each tool has different parameter formats and return structures
- Limited composability: Hard to combine operations (e.g., "find references to the currently selected symbol")
- Poor discoverability: AI assistants must memorize many tool names and signatures
Proposed Solution
Replace multiple specific tools with a single `ideCapability(string)` tool that:
- Accepts natural language requests: "find all references to validateToken"
- Returns either results or refinement suggestions: Success with data, or "ambiguous, try one of these options"
- Uses a composable JSON mini-language internally for precise operations
- Provides self-teaching error messages that guide AI assistants toward successful usage
Interface Design
Single Entry Point
ideCapability(request: string) → string
Response Types
Success:
"Success, results: [{"file": "auth.ts", "line": 42, "context": "validateToken(user)"}]"
Ambiguous request:
"Ambiguous request, consider one of the following:
(1) {"findReferences": {"symbol": {"name": "validateToken", "file": "auth.ts", "line": 42}}}
(2) {"findReferences": {"symbol": {"name": "validateToken", "file": "utils.ts", "line": 15}}}"
Capability not available:
"We don't have the ability to do that :("
Internal Architecture
The system has three main layers:
1. Natural Language Interface
- Converts natural language requests to JSON mini-language programs
- Handles ambiguity resolution and provides refinement suggestions
- Acts as the "front door" for AI assistants
2. JSON Mini-Language Runtime
- Executes composable JSON programs
- Manages value types (Symbol, Selection, Location, etc.)
- Handles function composition and error propagation
3. VSCode Integration Layer
- Maps JSON functions to actual VSCode/LSP calls
- Handles async operations and editor state
- Returns results in JSON mini-language format
Benefits
For AI Assistants:
- Single tool to learn instead of many specific ones
- Natural language interface reduces cognitive load
- Self-teaching through error messages
- Composable operations enable complex workflows
For Users:
- More capable AI assistance with IDE operations
- Consistent interface across all IDE features
- Better error messages and suggestions
- Extensible system for future capabilities
For Developers:
- Clean separation between language runtime and IDE integration
- Easy to add new capabilities
- Testable and maintainable architecture
- Reusable across different editors (future)
Open Questions
This RFC establishes the overall approach, but several design questions need resolution:
- Scripting Language Design: How should the JSON mini-language work? What are the core concepts and composition rules?
- Natural Language Interface: How do we convert natural language requests to JSON programs? What's the right confidence threshold for execution vs clarification?
- Capability Registry: What IDE capabilities should we expose initially? What are their function signatures and required value types?
Implementation Strategy
Phase 1: Proof of Concept
- Implement basic JSON mini-language runtime
- Create a few essential capabilities (getSelection, findSymbol, findReferences)
- Build simple natural language interface (possibly rule-based)
- Validate the overall approach
Phase 2: Core Capabilities
- Expand capability set to cover common IDE operations
- Improve natural language processing
- Add comprehensive error handling and suggestions
- Replace existing specific MCP tools
Phase 3: Advanced Features
- Add refactoring operations (rename, extract method, etc.)
- Integrate with more LSP features
- Optimize performance and user experience
- Consider extending to other editors
Success Criteria
This RFC will be considered successful when:
- AI assistants can perform common IDE operations through natural language
- The tool selection problem is significantly reduced
- Error messages effectively guide AI assistants to successful usage
- The system is extensible enough to add new capabilities easily
- User feedback indicates improved AI assistance quality
Next Steps
- Review and refine this overall proposal
- Work through the detailed design questions in the sub-RFCs
- Build a minimal prototype to validate core concepts
- Iterate based on real usage with AI assistants
RFC: Scripting Language
Generic JSON mini-language for composable operations
Overview
The IDE capability tool uses an internal JSON-based scripting language to represent operations precisely. The language itself is completely agnostic - it knows nothing about IDEs, symbols, or code. All domain-specific functionality is provided through extension points.
Design Principles
Made to be authored by LLMs
Programs in this language will be written by LLMs, and we design with them in mind. We choose JSON because it is a familiar syntax, letting LLMs leverage their training. Functions coerce arguments in obvious ways, since LLMs will not necessarily think to convert types themselves.
Empower the LLM to resolve ambiguity
When a request has multiple plausible meanings, we should present the possible meanings to the client LLM in natural language. It can then decide how to proceed, rather than having us guess.
Simple, unambiguous, and extensible core
We are designing a core language that has a clean conceptual mapping (it is essentially a lambda calculus, in fact) and which can be readily extended to support all kinds of IDE and language specific functionality.
Core Language Concepts
JSON Objects as Function Calls
The fundamental concept is that JSON objects represent function calls:
- Field name = function name
- Field value = arguments object (with named parameters)
{"functionName": {"arg1": "value1", "arg2": "value2"}}
Function Composition and Automatic Type Coercion
Functions can be nested as arguments to other functions:
{
"outerFunction": {
"parameter": {"innerFunction": {"arg": "value"}}
}
}
Automatic Type Coercion: When a function receives an argument of a different type than expected, the runtime automatically attempts conversion using registered type converters. For example:
{
"findReferences": {
"symbol": {"getSelection": {}}
}
}
If `findReferences` expects a `symbol` but `getSelection` returns a `selection`, the runtime automatically attempts the `selection → symbol` conversion. If the conversion succeeds, the function proceeds normally. If conversion fails or is ambiguous, the function returns a structured error with suggestions.
Function Execution Outcomes
Functions in the language have three possible execution outcomes:
Success: Function completes and returns a value (which may be a constructor call for custom types)
Unrecoverable Failure: Function encounters an error that cannot be resolved by the user (e.g., network timeout, file system error). These propagate as standard errors.
Ambiguous Failure: Function finds multiple valid results and needs user guidance to proceed. These failures include structured data that the runtime converts into refinement suggestions like:
"The symbol `validateToken` is defined in multiple places. Which did you mean?
If you meant the `validateToken` from `auth.ts:42`, use `{"findSymbol": {"name": "validateToken", "file": "auth.ts", "line": 42}}`.
If you meant the `validateToken` from `utils.ts:15`, use `{"findSymbol": {"name": "validateToken", "file": "utils.ts", "line": 15}}`."
This three-outcome model enables the self-teaching behavior where ambiguous failures guide users toward successful usage rather than simply failing.
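One way to model the three outcomes is a discriminated union, with the ambiguous case carrying executable suggestion programs. The field names and the formatting function below are illustrative assumptions, not a fixed specification.

```typescript
// Sketch: the three execution outcomes as a discriminated union.
// Field names are illustrative, not a fixed specification.
type Outcome =
  | { kind: "success"; value: unknown }
  | { kind: "failure"; error: string } // unrecoverable
  | { kind: "ambiguous"; message: string; suggestions: object[] };

function formatOutcome(outcome: Outcome): string {
  switch (outcome.kind) {
    case "success":
      return `Success, results: ${JSON.stringify(outcome.value)}`;
    case "failure":
      return `Error: ${outcome.error}`;
    case "ambiguous":
      // Each suggestion is itself an executable JSON program the
      // caller can re-submit verbatim.
      return `${outcome.message} Options: ${outcome.suggestions
        .map((s, i) => `(${i + 1}) ${JSON.stringify(s)}`)
        .join(" ")}`;
  }
}
```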
Built-in Types
The language runtime provides only minimal built-in types:
JSON-Native Types:
- `string`: Text values
- `number`: Numeric values
- `boolean`: True/false values
- `array`: Lists of values
- `object`: Key-value maps
Built-in Conversions:
- String ↔ Number (when valid)
- Boolean ↔ Number (0/1)
- Array ↔ String (join/split operations)
Extension Points
All domain-specific functionality is provided through three extension mechanisms:
Signature Definition
Function and type signatures use a common parameter schema system that supports optional parameters:
interface ParameterSchema {
    [key: string]: string | OptionalType<string>;
}

// dialectic.Optional wraps a type name to mark it as optional;
// it returns the OptionalType marker used in schemas
namespace dialectic {
    export function Optional<T>(type: T): OptionalType<T>;
}
Examples:
// Required parameters only
{name: "string", file: "string", line: "number"}
// Mixed required and optional parameters
{name: "string", file: dialectic.Optional("string"), line: dialectic.Optional("number")}
When optional parameters are omitted, functions should attempt to infer reasonable values or return ambiguous failures with suggestions for disambiguation.
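As a concrete sketch of how a runtime might check arguments against such a schema, the snippet below models the optional marker as a plain wrapper object; the real `dialectic.Optional` representation may differ.

```typescript
// Sketch: validate named arguments against a parameter schema.
// Optional parameters are modeled with a simple wrapper object here;
// the actual dialectic.Optional representation may differ.
type OptionalMarker = { optional: string };
type Schema = { [name: string]: string | OptionalMarker };

const opt = (type: string): OptionalMarker => ({ optional: type });

// Returns the names of required parameters missing from the arguments.
function missingParams(schema: Schema, args: Record<string, unknown>): string[] {
  return Object.entries(schema)
    .filter(([name, ty]) => typeof ty === "string" && !(name in args))
    .map(([name]) => name);
}
```

A function receiving a non-empty result could either infer the missing values or surface an ambiguous failure listing disambiguation options.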
Base Callable Interface
All callable entities (functions and type constructors) share a common base class:
abstract class Callable {
name: string; // Function/constructor name
description: string; // Natural language description for LLMs
implementation: Function; // Actual implementation
parameters: ParameterSchema; // Parameter types (required and optional)
constructor(name: string, description: string, implementation: Function, parameters: ParameterSchema) {
this.name = name;
this.description = description;
this.implementation = implementation;
this.parameters = parameters;
}
}
1. Type Descriptions
Type constructors are defined using classes that extend the base Callable:
class TypeDescription extends Callable {
jsClass: Function; // JavaScript class for instanceof checks
// Type constructors return instances of themselves
get returns(): string {
return this.name;
}
constructor(name: string, description: string, implementation: Function,
parameters: ParameterSchema, jsClass: Function) {
super(name, description, implementation, parameters);
this.jsClass = jsClass;
}
}
Example:
new TypeDescription(
"symbol",
"Represents a code symbol like a function, variable, or type definition",
createSymbol,
{
name: "string",
file: dialectic.Optional("string"),
line: dialectic.Optional("number")
},
Symbol
)
2. Function Descriptions
Functions are defined using classes that extend Callable and add return type information:
class FunctionDescription extends Callable {
returns: string; // Return type
constructor(name: string, description: string, implementation: Function,
parameters: ParameterSchema, returns: string) {
super(name, description, implementation, parameters);
this.returns = returns;
}
}
Example:
new FunctionDescription(
"findSymbol",
"Find a code symbol by name, optionally narrowed by file and line",
findSymbolImplementation,
{
name: "string",
file: dialectic.Optional("string"),
line: dialectic.Optional("number")
},
"symbol"
)
3. Type Conversions
Type conversions are defined using classes for consistency and better type safety:
class TypeConversion {
fromType: string; // Source type name
toType: string; // Target type name
description: string; // Natural language description of the conversion
converter: Function; // Conversion implementation
constructor(fromType: string, toType: string, description: string, converter: Function) {
this.fromType = fromType;
this.toType = toType;
this.description = description;
this.converter = converter;
}
}
Example:
new TypeConversion(
"selection",
"symbol",
"Extract the symbol reference from a text selection",
(selection) => extractSymbolFromSelection(selection)
)
Constructor Functions and Self-Evaluation
Values created by constructor functions serialize as executable JSON programs:
// A constructor might return:
{
"customType": {
"property1": "value1",
"property2": 42,
"nestedValue": {"otherType": {"data": "example"}}
}
}
This return value is itself an executable program that would recreate the same value if executed.
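A minimal sketch of a value that serializes back into its own constructor call might look like this; the class and method names are illustrative, not part of the RFC.

```typescript
// Sketch: a constructed value that serializes back into the JSON
// program that would recreate it. Names here are illustrative.
class CustomValue {
  constructor(
    public typeName: string,
    public fields: Record<string, unknown>
  ) {}

  // Serialize as {"typeName": {...fields}} — itself an executable program
  toProgram(): object {
    return { [this.typeName]: this.fields };
  }
}
```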
Example Usage (IDE Domain)
Note: These examples show how the generic language might be used for IDE operations, but the language itself knows nothing about IDEs.
Simple Operations
{"getCurrentSelection": {}}
{"findByName": {"name": "validateToken"}}
Composed Operations
{
"findReferences": {
"target": {"getCurrentSelection": {}}
}
}
Here the runtime automatically converts the `selection` returned by `getCurrentSelection` into the `symbol` that `findReferences` expects.
Implementation Architecture
Language Runtime
- Parser: Validates JSON structure and function call format
- Executor: Recursively evaluates nested function calls
- Type System: Manages automatic conversions and type checking
- Error Handler: Provides structured error messages with suggestions
Extension Registry
- Type Registry: Manages custom value types and their constructors
- Function Registry: Maps function names to implementations
- Conversion Registry: Handles automatic and explicit type conversions
Execution Flow
- Parse JSON program into function call tree
- Resolve function names against registry
- Attempt automatic type conversions for arguments
- Execute functions recursively (inner functions first)
- Return results in executable JSON format
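The execution flow above can be sketched as a minimal recursive evaluator. This is an illustrative reduction of the design, not the real runtime: the registry, the single-key-object call convention, and the registered IDE functions are all assumptions.

```typescript
// Minimal sketch of the execution flow: a single-key object whose key
// is registered resolves as a function call; arguments (which may be
// nested calls) are evaluated first, innermost out.
type Fn = (args: Record<string, unknown>) => unknown;
const registry = new Map<string, Fn>();

function evaluate(program: unknown): unknown {
  if (program === null || typeof program !== "object" || Array.isArray(program)) {
    return program; // literals evaluate to themselves
  }
  const keys = Object.keys(program as object);
  const fn = keys.length === 1 ? registry.get(keys[0]) : undefined;
  if (!fn) return program; // plain data object, not a call
  const rawArgs = (program as Record<string, unknown>)[keys[0]] as Record<string, unknown>;
  const args: Record<string, unknown> = {};
  for (const [name, value] of Object.entries(rawArgs)) {
    args[name] = evaluate(value); // inner functions first
  }
  return fn(args);
}

// Hypothetical registrations mirroring the IDE examples above
registry.set("getCurrentSelection", () => ({ text: "validateToken" }));
registry.set("findReferences", (a) => [`ref to ${(a.target as { text: string }).text}`]);
```

Running `evaluate` on the composed example from earlier evaluates `getCurrentSelection` first, then passes its result to `findReferences`.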
Open Design Questions
The core language design is fairly clear, but several questions need resolution:
1. Validation Boundaries
Where should type checking and argument validation happen?
- In the language runtime (engine validates before calling functions)?
- In the function implementations (functions validate their own arguments)?
- Some hybrid approach?
2. Ambiguity Resolution
How should functions implement ambiguity handling and error propagation?
- What's the mechanism for functions to signal ambiguous results (return error objects vs throw exceptions)?
- How do errors propagate through function composition chains?
- What's the format for suggestion data that gets converted to user-facing refinement options?
- How does the runtime convert function-level errors into natural language suggestions?
3. Future Implementation Details
Additional areas that need specification:
- Error format specification: Actual data structures for the three outcome types
- Extension registry mechanics: How extensions are loaded and registered at runtime
- Async operation handling: How the runtime manages async VSCode operations transparently
- Memory management: Cleanup strategies for opaque values when programs complete
Benefits of This Design
- Domain Agnostic: The language can be used for any domain, not just IDE operations
- Composable: Functions naturally combine to create complex operations
- Extensible: Easy to add new types, functions, and conversions
- Self-documenting: The JSON structure shows exactly what operations are being performed
- Type-safe: The runtime can validate that functions receive appropriate value types
- Debuggable: Programs are human-readable and can be inspected/modified
- Lambda calculus-like: Programs and data have the same representation
Next Steps
- Resolve the validation boundaries question
- Design the ambiguity resolution mechanism
- Implement a basic runtime with extension point interfaces
- Create example extensions for IDE operations
- Test composition and error handling with real examples
RFC: Validation boundaries
RFC: Ambiguity resolution
RFC: Natural language interface
RFC: Capability registry
Markdown to HTML conversion in VSCode extensions: A comprehensive guide to custom link handling
VSCode extensions predominantly use markdown-it for markdown to HTML conversion in webviews, with custom link handling achieved through a combination of renderer rules, postMessage API communication, and command URIs. The most effective approach involves intercepting links at the markdown parsing level using markdown-it's extensible renderer system, then implementing bidirectional communication between the webview and extension host to handle different link types securely.
The markdown-it ecosystem dominates VSCode extension development
Over 95% of popular VSCode markdown extensions use markdown-it as their core parsing library, establishing it as the de facto standard. This convergence isn't accidental - VSCode's built-in markdown preview uses markdown-it, creating a consistent ecosystem that extension developers can leverage. The library's 2,000+ plugin ecosystem provides extensive customization options while maintaining CommonMark compliance and high performance.
Popular extensions like Markdown All in One demonstrate the typical implementation pattern, using markdown-it with specialized plugins for features like task lists, GitHub alerts, and KaTeX math rendering. The crossnote library, used by Markdown Preview Enhanced, provides an enhanced wrapper around markdown-it that adds support for advanced features like code chunk execution and diagram rendering while maintaining compatibility with the core parser.
The technical preference for markdown-it stems from its token-based architecture that allows precise control over link handling at the parsing stage, rather than requiring post-processing of generated HTML. This architectural advantage makes it particularly well-suited for the security constraints and customization needs of VSCode webviews.
Custom link handling requires a multi-layered approach
Effective link customization in VSCode extensions involves three interconnected layers: markdown parser configuration, webview event handling, and extension-side message processing. At the parser level, markdown-it's renderer rules provide the most powerful customization point:
md.renderer.rules.link_open = function(tokens, idx, options, env, self) {
const token = tokens[idx];
const href = token.attrGet('href');
// Validate and transform links
if (href && !isValidLink(href)) {
token.attrSet('href', '#');
token.attrSet('data-invalid-link', href);
token.attrSet('class', 'invalid-link');
}
// Add custom attributes for handling
token.attrSet('data-link-handler', 'custom');
return defaultRender(tokens, idx, options, env, self);
};
The webview layer implements event interception to capture all link clicks and route them through VSCode's message passing system. This prevents default browser navigation and enables custom handling based on link type, modifier keys, and context. The postMessage API serves as the communication bridge, with the webview sending structured messages containing link information and the extension host determining appropriate actions.
For links where the target may not be formatted as a URL, extensions can implement custom link syntax handlers. Wiki-style links, relative paths, and command URIs all require specialized parsing and transformation. The most robust approach combines markdown-it plugins for syntax recognition with command URI generation that VSCode can execute natively.
VSCode's webview API introduces unique constraints and opportunities
VSCode webviews operate in a sandboxed environment with strict security policies that affect link handling. Navigation within webviews is blocked by default, requiring all link interactions to be explicitly handled through the extension host. This constraint, while limiting, provides opportunities for sophisticated link handling that wouldn't be possible in standard web contexts.
The command URI pattern emerges as particularly powerful for VSCode-specific functionality. By transforming regular links into command URIs during markdown parsing, extensions can trigger any VSCode command with parameters:
const commandUri = `command:extension.handleLink?${encodeURIComponent(JSON.stringify([target]))}`;
Resource access in webviews requires special handling through the `asWebviewUri` API, which converts local file paths to URIs that webviews can access securely. The `localResourceRoots` configuration restricts which directories can be accessed, providing a security boundary that prevents unauthorized file system access while enabling legitimate resource loading.
Security considerations shape implementation decisions
Content Security Policy (CSP) enforcement in VSCode webviews demands careful attention to script execution and resource loading. The recommended approach uses nonce-based CSP headers that allow only explicitly authorized scripts to execute:
<meta http-equiv="Content-Security-Policy"
content="default-src 'none';
script-src 'nonce-${nonce}';
style-src ${webview.cspSource};">
Among markdown parsing libraries, markdown-it provides the strongest built-in security features with URL validation and dangerous protocol blocking. Unlike marked or showdown, which require external sanitization, markdown-it's validateLink function filters potentially harmful URLs at the parsing stage. This defense-in-depth approach, combined with CSP restrictions and post-processing sanitization using libraries like DOMPurify, creates multiple security layers.
Link validation must consider multiple attack vectors including JavaScript URLs, data URIs, and malformed protocols. The most secure implementations validate links at multiple stages: during markdown parsing, in the webview before sending messages, and in the extension host before executing actions. Historical vulnerabilities in markdown parsers, particularly CVE-2017-17461 in marked, underscore the importance of staying current with security updates.
Real-world implementations reveal proven patterns
Analysis of popular VSCode extensions reveals consistent implementation patterns that balance functionality with security. The bidirectional communication pattern stands out as the most flexible approach:
// Extension to webview
panel.webview.postMessage({
command: 'updateContent',
markdown: markdownContent,
baseUri: webview.asWebviewUri(vscode.Uri.file(documentPath))
});
// Webview to extension
vscode.postMessage({
command: 'did-click-link',
data: linkHref,
ctrlKey: event.ctrlKey
});
Protocol-specific handling allows extensions to route different link types appropriately. HTTP links open in external browsers, file links open in VSCode editors, and command links execute VSCode commands. This multi-protocol approach provides users with intuitive behavior while maintaining security boundaries.
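The routing described above might be sketched as a small classification function in the extension host; the category names and exact protocol checks below are illustrative assumptions.

```typescript
// Sketch: classify a clicked link by protocol to pick a handler.
// Categories mirror the routing described above; names are illustrative.
type LinkAction = "openExternal" | "openInEditor" | "executeCommand" | "block";

function routeLink(href: string): LinkAction {
  if (/^https?:\/\//.test(href)) return "openExternal";     // browser
  if (href.startsWith("command:")) return "executeCommand"; // VSCode command
  if (/^(javascript|data):/i.test(href)) return "block";    // disallowed protocols
  return "openInEditor"; // treat everything else as a workspace path
}
```

The extension-side message handler would then dispatch on the returned action, e.g. `vscode.env.openExternal` for `openExternal` and `vscode.window.showTextDocument` for `openInEditor`.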
The integration between markdown processors and VSCode's message passing system typically follows an event-driven architecture. Link clicks in the rendered markdown trigger DOM events, which are captured and transformed into messages sent to the extension host. The extension then determines the appropriate action based on link type, user preferences, and security policies.
Practical implementation recommendations
For developers implementing custom link handling in VSCode extensions, the recommended technology stack includes markdown-it for parsing, DOMPurify for additional sanitization, and nonce-based CSP for script security. Start with basic link interception and gradually add layers of functionality:
First, implement the markdown-it renderer override to gain control over link generation. Then add webview event handlers to intercept clicks and gather context like modifier keys. Next, implement the extension-side message handler with protocol-specific routing logic. Finally, add validation layers and error handling to ensure robust operation across edge cases.
Testing should cover various link formats, security scenarios, and user interactions. Pay particular attention to relative links, malformed URLs, and links with unusual protocols. The principle of least privilege should guide security decisions - only enable the minimum capabilities required for your extension's functionality.
Conclusion
VSCode extension markdown handling has converged on a well-established pattern combining markdown-it's extensible parsing with VSCode's secure webview architecture. Successful implementations layer multiple techniques: parser-level link transformation, event-based message passing, and context-aware navigation handling. By following these established patterns and security best practices, developers can create extensions that provide rich markdown experiences while maintaining the security boundaries that protect users' systems.
The ecosystem's maturity means developers can leverage proven solutions rather than solving these challenges from scratch. Whether building simple markdown previews or complex documentation systems, the combination of markdown-it's flexibility and VSCode's robust API provides all the tools necessary for sophisticated link handling that enhances rather than compromises the user experience.
CLI Tool to VSCode Extension Communication Patterns
When you want a CLI command running in VSCode's terminal to communicate with your VSCode extension, there are several established patterns. Here are the most reliable cross-platform approaches:
1. Unix Socket/Named Pipe Pattern (Recommended)
This is the most secure and reliable approach used by many VSCode extensions including the built-in Git extension.
Extension Side (IPC Server)
import * as vscode from 'vscode';
import * as net from 'net';
import * as path from 'path';
import * as fs from 'fs';
export function activate(context: vscode.ExtensionContext) {
const server = createIPCServer(context);
// Pass the socket path to terminals via environment variable
const socketPath = getSocketPath(context);
context.environmentVariableCollection.replace("MY_EXTENSION_IPC_PATH", socketPath);
}
function createIPCServer(context: vscode.ExtensionContext): net.Server {
const socketPath = getSocketPath(context);
// Clean up any existing socket
if (fs.existsSync(socketPath)) {
fs.unlinkSync(socketPath);
}
const server = net.createServer((socket) => {
console.log('CLI client connected');
socket.on('data', (data) => {
try {
const message = JSON.parse(data.toString());
handleCliMessage(message, socket);
} catch (error) {
console.error('Failed to parse CLI message:', error);
}
});
socket.on('error', (error) => {
console.error('Socket error:', error);
});
});
server.listen(socketPath);
// Clean up on extension deactivation
context.subscriptions.push({
dispose: () => {
server.close();
if (fs.existsSync(socketPath)) {
fs.unlinkSync(socketPath);
}
}
});
return server;
}
function getSocketPath(context: vscode.ExtensionContext): string {
// Use workspace-specific storage to avoid conflicts
const storageUri = context.storageUri || context.globalStorageUri;
const socketDir = storageUri.fsPath;
// Ensure directory exists
if (!fs.existsSync(socketDir)) {
fs.mkdirSync(socketDir, { recursive: true });
}
// Platform-specific socket naming
if (process.platform === 'win32') {
return `\\\\.\\pipe\\my-extension-${Date.now()}`;
} else {
return path.join(socketDir, 'my-extension.sock');
}
}
function handleCliMessage(message: any, socket: net.Socket) {
switch (message.command) {
case 'openFile':
vscode.window.showTextDocument(vscode.Uri.file(message.path));
socket.write(JSON.stringify({ status: 'success' }));
break;
case 'showMessage':
vscode.window.showInformationMessage(message.text);
socket.write(JSON.stringify({ status: 'success' }));
break;
default:
socket.write(JSON.stringify({
status: 'error',
message: 'Unknown command'
}));
}
}
CLI Tool Side
#!/bin/bash
# my-cli-tool.sh
# Check if we're running in VSCode terminal
if [ -z "$MY_EXTENSION_IPC_PATH" ]; then
echo "Error: Not running in VSCode terminal with extension support"
exit 1
fi
# Function to send message to VSCode extension
send_to_vscode() {
local command="$1"
local data="$2"
local message="{\"command\":\"$command\",\"data\":$data}"
if [[ "$OSTYPE" == "msys" || "$OSTYPE" == "win32" ]]; then
# Windows named pipes cannot be opened by nc or socat;
# use the Node.js client below on Windows instead
echo "Error: shell client does not support Windows named pipes" >&2
return 1
else
# Unix socket
echo "$message" | socat - "UNIX-CONNECT:$MY_EXTENSION_IPC_PATH" 2>/dev/null
fi
}
# Example usage
send_to_vscode "showMessage" "{\"text\":\"Hello from CLI!\"}"
send_to_vscode "openFile" "{\"path\":\"/path/to/file.txt\"}"
For Node.js-based CLI tools:
#!/usr/bin/env node
const net = require('net');
function sendToVSCode(command, data) {
return new Promise((resolve, reject) => {
const socketPath = process.env.MY_EXTENSION_IPC_PATH;
if (!socketPath) {
reject(new Error('Not running in VSCode terminal with extension support'));
return;
}
const client = net.createConnection(socketPath, () => {
const message = JSON.stringify({ command, ...data });
client.write(message);
});
client.on('data', (data) => {
try {
const response = JSON.parse(data.toString());
resolve(response);
} catch (error) {
reject(error);
}
client.end();
});
client.on('error', reject);
});
}
// Example usage
async function main() {
try {
await sendToVSCode('showMessage', { text: 'Hello from Node CLI!' });
await sendToVSCode('openFile', { path: '/path/to/file.txt' });
} catch (error) {
console.error('Failed to communicate with VSCode:', error.message);
process.exit(1);
}
}
main();
2. HTTP Server Pattern
For simpler scenarios or when you need web-based communication:
Extension Side
import * as vscode from 'vscode';
import * as http from 'http';
export function activate(context: vscode.ExtensionContext) {
const server = createHttpServer(context);
const port = 0; // Let system assign port
server.listen(port, 'localhost', () => {
const address = server.address() as any;
const actualPort = address.port;
// Pass port to terminals
context.environmentVariableCollection.replace("MY_EXTENSION_HTTP_PORT", actualPort.toString());
});
}
function createHttpServer(context: vscode.ExtensionContext): http.Server {
const server = http.createServer((req, res) => {
// Enable CORS
res.setHeader('Access-Control-Allow-Origin', '*');
res.setHeader('Access-Control-Allow-Methods', 'POST, GET, OPTIONS');
res.setHeader('Access-Control-Allow-Headers', 'Content-Type');
if (req.method === 'OPTIONS') {
res.writeHead(200);
res.end();
return;
}
if (req.method === 'POST') {
let body = '';
req.on('data', chunk => body += chunk);
req.on('end', () => {
try {
const message = JSON.parse(body);
handleHttpMessage(message, res);
} catch (error) {
res.writeHead(400, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ error: 'Invalid JSON' }));
}
});
} else {
res.writeHead(405);
res.end('Method not allowed');
}
});
context.subscriptions.push({
dispose: () => server.close()
});
return server;
}
function handleHttpMessage(message: any, res: http.ServerResponse) {
switch (message.command) {
case 'openFile':
vscode.window.showTextDocument(vscode.Uri.file(message.path));
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ status: 'success' }));
break;
default:
res.writeHead(400, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ error: 'Unknown command' }));
}
}
CLI Tool Side
#!/bin/bash
# Send HTTP request to VSCode extension
if [ -z "$MY_EXTENSION_HTTP_PORT" ]; then
echo "Error: Extension HTTP port not available"
exit 1
fi
curl -X POST "http://localhost:$MY_EXTENSION_HTTP_PORT" \
-H "Content-Type: application/json" \
-d '{"command":"openFile","path":"/path/to/file.txt"}'
3. File-Based Communication Pattern
For scenarios where real-time communication isn't critical:
Extension Side
import * as vscode from 'vscode';
import * as fs from 'fs';
import * as path from 'path';
export function activate(context: vscode.ExtensionContext) {
const communicationDir = getCommunicationDir(context);
// Set up environment variable for CLI tools
context.environmentVariableCollection.replace("MY_EXTENSION_COMM_DIR", communicationDir);
// Watch for incoming messages
const watcher = fs.watch(communicationDir, (eventType, filename) => {
if (filename && filename.endsWith('.json')) {
handleFileMessage(path.join(communicationDir, filename));
}
});
context.subscriptions.push({
dispose: () => watcher.close()
});
}
function getCommunicationDir(context: vscode.ExtensionContext): string {
const storageUri = context.storageUri || context.globalStorageUri;
const commDir = path.join(storageUri.fsPath, 'cli-comm');
if (!fs.existsSync(commDir)) {
fs.mkdirSync(commDir, { recursive: true });
}
return commDir;
}
function handleFileMessage(filePath: string) {
try {
const content = fs.readFileSync(filePath, 'utf8');
const message = JSON.parse(content);
// Process the message
switch (message.command) {
case 'openFile':
vscode.window.showTextDocument(vscode.Uri.file(message.path));
break;
case 'showMessage':
vscode.window.showInformationMessage(message.text);
break;
}
// Clean up the message file
fs.unlinkSync(filePath);
// Write response file if needed
if (message.responseFile) {
fs.writeFileSync(message.responseFile, JSON.stringify({ status: 'success' }));
}
} catch (error) {
console.error('Failed to process file message:', error);
}
}
CLI Tool Side
#!/bin/bash
if [ -z "$MY_EXTENSION_COMM_DIR" ]; then
echo "Error: Communication directory not available"
exit 1
fi
# Create unique message file
MESSAGE_FILE="$MY_EXTENSION_COMM_DIR/msg_$(date +%s%N).json"
# Send message
cat > "$MESSAGE_FILE" << EOF
{
"command": "openFile",
"path": "/path/to/file.txt",
"timestamp": $(date +%s)
}
EOF
echo "Message sent to VSCode extension"
4. Remote Execution Considerations
For remote environments (SSH, containers, WSL), the socket/named pipe pattern still works best:
SSH/Remote
The environmentVariableCollection automatically propagates to remote terminals, so your IPC setup works seamlessly. The socket files are created on the remote filesystem.
WSL
VSCode handles WSL communication transparently. Your extension runs in the WSL context, so Unix sockets work normally.
Containers
In dev containers, the socket path needs to be in a volume that's accessible to both the extension and terminal processes. Use the workspace storage path which is typically mounted.
Best Practices
- Security: Always validate messages from CLI tools. Don't execute arbitrary commands.
- Error Handling: Implement robust error handling for connection failures, especially when VSCode restarts.
- Cleanup: Always clean up sockets/files when the extension deactivates.
- Cross-Platform: Use environmentVariableCollection for reliable environment variable propagation.
- Workspace Isolation: Use workspace-specific storage paths to avoid conflicts between different projects.
The Unix socket/named pipe pattern is recommended for most use cases as it's secure, efficient, and handles VSCode's multi-window scenarios well.
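The security practice above (validate every message, execute nothing arbitrary) can be sketched as a small validator. The command allow-list and the workspace-root containment check below are illustrative assumptions, not part of any particular extension:

```typescript
import * as path from 'path';

// Hypothetical allow-list of commands the extension is willing to handle.
const ALLOWED_COMMANDS = new Set(['openFile', 'showMessage']);

// Returns null when the message is acceptable, or a reason string when it
// must be rejected. Rejects unknown commands and paths that resolve
// outside the workspace root (a basic defense against path traversal).
function validateMessage(message: any, workspaceRoot: string): string | null {
  if (typeof message !== 'object' || message === null) return 'not an object';
  if (!ALLOWED_COMMANDS.has(message.command)) return `unknown command: ${message.command}`;
  if (message.command === 'openFile') {
    if (typeof message.path !== 'string') return 'missing path';
    const resolved = path.resolve(workspaceRoot, message.path);
    if (!resolved.startsWith(workspaceRoot + path.sep)) return 'path escapes workspace';
  }
  return null;
}
```

Calling the validator before dispatching to handlers keeps the attack surface limited to the commands you explicitly chose to support.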
Creating a VSCode Extension Sidebar Panel: Complete Implementation Guide
Your VSCode extension compiles but the sidebar panel doesn't appear - this is one of the most common issues in extension development. Based on extensive research of recent VSCode APIs and working implementations, here's a comprehensive guide to solve your problem and create a fully functional sidebar panel.
The critical missing piece in most implementations
The most frequent cause of invisible panels is missing TreeDataProvider registration in the extension's activate function. Even with perfect package.json configuration, VSCode won't display your panel without proper provider registration.
Complete minimal working example
Here's a fully functional implementation that addresses all your requirements:
1. Package.json configuration
{
"name": "review-sidebar",
"displayName": "Review Sidebar",
"version": "0.0.1",
"engines": {
"vscode": "^1.74.0"
},
"categories": ["Other"],
"activationEvents": [],
"main": "./out/extension.js",
"contributes": {
"views": {
"explorer": [
{
"id": "reviewContent",
"name": "Review Content",
"icon": "$(book)",
"contextualTitle": "Review Panel"
}
]
},
"commands": [
{
"command": "reviewContent.refresh",
"title": "Refresh",
"icon": "$(refresh)"
}
],
"menus": {
"view/title": [
{
"command": "reviewContent.refresh",
"when": "view == reviewContent",
"group": "navigation"
}
]
}
},
"scripts": {
"vscode:prepublish": "npm run compile",
"compile": "tsc -p ./",
"watch": "tsc -watch -p ./"
},
"devDependencies": {
"@types/vscode": "^1.74.0",
"@types/node": "16.x",
"typescript": "^4.9.4"
}
}
Key configuration points:
- For VSCode 1.74.0+, leave activationEvents empty - views automatically trigger activation
- The id in views must exactly match the ID used in registerTreeDataProvider() (or createTreeView(), as in the example below)
- Adding to "explorer" places your panel in the file explorer sidebar
2. TreeDataProvider implementation (src/reviewProvider.ts)
import * as vscode from 'vscode';
export class ReviewItem extends vscode.TreeItem {
constructor(
public readonly label: string,
public readonly collapsibleState: vscode.TreeItemCollapsibleState,
public readonly content?: string,
public readonly children?: ReviewItem[]
) {
super(label, collapsibleState);
// Set tooltip and description
this.tooltip = `${this.label}`;
this.description = this.content ? 'Has content' : '';
// Add click command for items with content
if (this.content) {
this.command = {
command: 'reviewContent.showItem',
title: 'Show Content',
arguments: [this]
};
}
// Use built-in icons
this.iconPath = this.children
? new vscode.ThemeIcon('folder')
: new vscode.ThemeIcon('file-text');
}
}
export class ReviewContentProvider implements vscode.TreeDataProvider<ReviewItem> {
private _onDidChangeTreeData: vscode.EventEmitter<ReviewItem | undefined | null | void> =
new vscode.EventEmitter<ReviewItem | undefined | null | void>();
// This exact property name is required by VSCode
readonly onDidChangeTreeData: vscode.Event<ReviewItem | undefined | null | void> =
this._onDidChangeTreeData.event;
private reviewData: ReviewItem[] = [];
constructor() {
this.loadReviewData();
}
getTreeItem(element: ReviewItem): vscode.TreeItem {
return element;
}
getChildren(element?: ReviewItem): Thenable<ReviewItem[]> {
if (!element) {
// Return root elements
return Promise.resolve(this.reviewData);
}
// Return children of the element
return Promise.resolve(element.children || []);
}
refresh(): void {
this.loadReviewData();
this._onDidChangeTreeData.fire();
}
private loadReviewData(): void {
// Example hierarchical review content
this.reviewData = [
new ReviewItem(
'Chapter 1: Introduction',
vscode.TreeItemCollapsibleState.Expanded,
undefined,
[
new ReviewItem('Overview', vscode.TreeItemCollapsibleState.None, 'Introduction content...'),
new ReviewItem('Key Concepts', vscode.TreeItemCollapsibleState.None, 'Main concepts...'),
]
),
new ReviewItem(
'Chapter 2: Implementation',
vscode.TreeItemCollapsibleState.Collapsed,
undefined,
[
new ReviewItem('Setup', vscode.TreeItemCollapsibleState.None, 'Setup instructions...'),
new ReviewItem('Code Examples', vscode.TreeItemCollapsibleState.None, 'Example code...'),
]
),
new ReviewItem('Summary', vscode.TreeItemCollapsibleState.None, 'Summary content...')
];
}
}
3. Extension activation (src/extension.ts)
import * as vscode from 'vscode';
import { ReviewContentProvider, ReviewItem } from './reviewProvider';
export function activate(context: vscode.ExtensionContext) {
console.log('Review extension is activating...');
// Create provider instance
const reviewProvider = new ReviewContentProvider();
// CRITICAL: Register the tree data provider
const treeView = vscode.window.createTreeView('reviewContent', {
treeDataProvider: reviewProvider,
showCollapseAll: true
});
// Register refresh command
const refreshCommand = vscode.commands.registerCommand('reviewContent.refresh', () => {
reviewProvider.refresh();
vscode.window.showInformationMessage('Review content refreshed');
});
// Register item click handler
const showItemCommand = vscode.commands.registerCommand('reviewContent.showItem', (item: ReviewItem) => {
if (item.content) {
// Option 1: Show in output channel
const outputChannel = vscode.window.createOutputChannel('Review Content'); // consider creating this once outside the handler to avoid duplicate channels
outputChannel.clear();
outputChannel.appendLine(item.content);
outputChannel.show();
// Option 2: Show in new editor (for markdown content)
// const doc = await vscode.workspace.openTextDocument({
// content: item.content,
// language: 'markdown'
// });
// vscode.window.showTextDocument(doc);
}
});
// Add to subscriptions for proper cleanup
context.subscriptions.push(treeView, refreshCommand, showItemCommand);
console.log('Review extension activated successfully');
}
export function deactivate() {
console.log('Review extension deactivated');
}
Debugging your non-appearing panel
Follow these steps to diagnose why your panel isn't showing:
1. Verify extension activation
Press F5 to launch the Extension Development Host, then check the Debug Console:
// Add to activate function
console.log('Extension activating...', context.extensionPath);
2. Check registration success
const treeView = vscode.window.createTreeView('reviewContent', {
treeDataProvider: reviewProvider
});
console.log('TreeView registered:', treeView.visible);
3. Common issues and solutions
Panel not appearing:
- Ensure view ID matches exactly between package.json and registration code
- Check that TreeDataProvider is registered synchronously in activate()
- Verify getChildren() returns data for root elements
Extension not activating:
- For VSCode <1.74.0, add explicit activation:
"activationEvents": ["onView:reviewContent"]
- Check for errors in activate() function using try-catch blocks
- Ensure main points to the correct compiled output file
Build/bundle issues:
- Verify TypeScript compilation succeeds: npm run compile
- Check that out/extension.js exists after compilation
- Ensure @types/vscode version matches engines.vscode
4. Developer tools debugging
- In the Extension Development Host: Help → Toggle Developer Tools
- Check the Console tab for errors
- Use Developer: Show Running Extensions to verify your extension loaded
Alternative approach: Webview for rich content
If you need to display formatted markdown or complex layouts, consider a webview panel instead:
export function createReviewWebview(context: vscode.ExtensionContext) {
const panel = vscode.window.createWebviewPanel(
'reviewWebview',
'Review Content',
vscode.ViewColumn.Beside,
{
enableScripts: true,
retainContextWhenHidden: true
}
);
panel.webview.html = `
<!DOCTYPE html>
<html>
<head>
<style>
body { font-family: var(--vscode-font-family); }
h2 { color: var(--vscode-editor-foreground); }
</style>
</head>
<body>
<h2>Review Content</h2>
<div id="content">
<!-- Your formatted content here -->
</div>
</body>
</html>
`;
return panel;
}
Best practices for sidebar extensions
Performance optimization:
- Bundle your extension using esbuild for faster loading
- Lazy-load data only when tree nodes expand
- Use TreeItemCollapsibleState.Collapsed for large datasets
User experience:
- Provide refresh functionality for dynamic content
- Use VSCode's built-in theme icons for consistency
- Add context menus for item-specific actions
Maintainability:
- Separate data models from UI providers
- Use TypeScript interfaces for type safety
- Implement proper disposal in deactivate()
Quick troubleshooting checklist
When your panel doesn't appear, verify:
- ✓ View ID matches exactly in package.json and code
- ✓ TreeDataProvider is registered in activate()
- ✓ getChildren() returns data for root elements
- ✓ Extension compiles without errors
- ✓ Correct activation events for your VSCode version
- ✓ No exceptions thrown in activate() function
This implementation provides a solid foundation for your review content sidebar panel. The hierarchical structure works well for organized content, and you can extend it with features like search, filtering, or integration with external data sources as needed.
Language Server Protocol (LSP) - Comprehensive Overview
Executive Summary
The Language Server Protocol (LSP) defines the protocol used between an editor or IDE and a language server that provides language features like auto complete, go to definition, and find all references. A related specification, the Language Server Index Format (LSIF, pronounced like "else if"), aims to support rich code navigation in development tools or a web UI without needing a local copy of the source code.
The idea behind the Language Server Protocol (LSP) is to standardize the protocol for how tools and servers communicate, so a single Language Server can be re-used in multiple development tools, and tools can support languages with minimal effort.
Key Benefits:
- Reduces M×N complexity to M+N (one server per language instead of one implementation per editor per language)
- Enables language providers to focus on a single high-quality implementation
- Allows editors to support multiple languages with minimal effort
- Standardized JSON-RPC based communication
Table of Contents
- Architecture & Core Concepts
- Base Protocol
- Message Types
- Capabilities System
- Lifecycle Management
- Document Synchronization
- Language Features
- Workspace Features
- Window Features
- Implementation Considerations
- Version History
Architecture & Core Concepts
Problem Statement
Prior to the design and implementation of the Language Server Protocol for the development of Visual Studio Code, most language services were generally tied to a given IDE or other editor. In the absence of the Language Server Protocol, language services are typically implemented by using a tool-specific extension API.
This created a classic M×N complexity problem where:
- M = Number of editors/IDEs
- N = Number of programming languages
- Total implementations needed = M × N
LSP Solution
The idea behind a Language Server is to provide the language-specific smarts inside a server that can communicate with development tooling over a protocol that enables inter-process communication.
Architecture Components:
- Language Client: The editor/IDE that requests language services
- Language Server: A separate process providing language intelligence
- LSP: The standardized communication protocol between them
Communication Model:
- JSON-RPC 2.0 based messaging
- A language server runs as a separate process and development tools communicate with the server using the language protocol over JSON-RPC.
- Bi-directional communication (client ↔ server)
- Support for synchronous requests and asynchronous notifications
Supported Languages & Environments
LSP is not restricted to programming languages. It can be used for any kind of text-based language, like specifications or domain-specific languages (DSL).
Transport Options:
- stdio (standard input/output)
- Named pipes (Windows) / Unix domain sockets
- TCP sockets
- Node.js IPC
Base Protocol
Message Structure
The base protocol consists of a header and a content part (comparable to HTTP). The header and content part are separated by a '\r\n'.
Header Format
Content-Length: <number>\r\n
Content-Type: application/vscode-jsonrpc; charset=utf-8\r\n
\r\n
Required Headers:
- Content-Length: Length of the content in bytes (mandatory)
- Content-Type: MIME type (optional, defaults to application/vscode-jsonrpc; charset=utf-8)
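As a minimal sketch, producing this framing in TypeScript looks like the following (assuming a Node.js environment; Content-Length counts bytes, not characters, so the content is measured with Buffer.byteLength):

```typescript
// Frame a JSON-RPC payload with the LSP base-protocol header.
// The header and content are separated by '\r\n', as described above.
function frameMessage(payload: object): string {
  const content = JSON.stringify(payload);
  return `Content-Length: ${Buffer.byteLength(content, 'utf8')}\r\n\r\n${content}`;
}
```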
Content Format
Contains the actual content of the message. The content part of a message uses JSON-RPC to describe requests, responses and notifications.
Example Message:
Content-Length: 126\r\n
\r\n
{
"jsonrpc": "2.0",
"id": 1,
"method": "textDocument/completion",
"params": {
"textDocument": { "uri": "file:///path/to/file.js" },
"position": { "line": 5, "character": 10 }
}
}
JSON-RPC Structure
Base Message
interface Message {
jsonrpc: string; // Always "2.0"
}
Request Message
interface RequestMessage extends Message {
id: integer | string;
method: string;
params?: array | object;
}
Response Message
interface ResponseMessage extends Message {
id: integer | string | null;
result?: any;
error?: ResponseError;
}
Notification Message
interface NotificationMessage extends Message {
method: string;
params?: array | object;
}
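The three shapes above differ only in which fields are present: a request carries an id and expects a response, while a notification has no id. A small sketch of builder helpers (the function names are illustrative):

```typescript
// Monotonically increasing request id, as most clients implement it.
let nextId = 0;

function request(method: string, params?: object) {
  return { jsonrpc: '2.0' as const, id: ++nextId, method, params };
}

function notification(method: string, params?: object) {
  // No id: the receiver must not reply to a notification.
  return { jsonrpc: '2.0' as const, method, params };
}

function response(id: number | string, result: unknown) {
  // The response echoes the id of the request it answers.
  return { jsonrpc: '2.0' as const, id, result };
}
```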
Error Handling
Standard Error Codes:
- -32700: Parse error
- -32600: Invalid Request
- -32601: Method not found
- -32602: Invalid params
- -32603: Internal error
LSP-Specific Error Codes:
- -32803: RequestFailed
- -32802: ServerCancelled
- -32801: ContentModified
- -32800: RequestCancelled
Language Features
Language Features provide the actual smarts in the language server protocol. They are usually executed on a [text document, position] tuple. The main language feature categories are code comprehension features like Hover or Goto Definition, and coding features like diagnostics, code completion, or code actions.
Navigation Features
Go to Definition
textDocument/definition: TextDocumentPositionParams → Location | Location[] | LocationLink[] | null
Go to Declaration
textDocument/declaration: TextDocumentPositionParams → Location | Location[] | LocationLink[] | null
Go to Type Definition
textDocument/typeDefinition: TextDocumentPositionParams → Location | Location[] | LocationLink[] | null
Go to Implementation
textDocument/implementation: TextDocumentPositionParams → Location | Location[] | LocationLink[] | null
Find References
textDocument/references: ReferenceParams → Location[] | null
interface ReferenceParams extends TextDocumentPositionParams {
context: { includeDeclaration: boolean; }
}
Information Features
Hover
textDocument/hover: TextDocumentPositionParams → Hover | null
interface Hover {
contents: MarkedString | MarkedString[] | MarkupContent;
range?: Range;
}
Signature Help
textDocument/signatureHelp: SignatureHelpParams → SignatureHelp | null
interface SignatureHelp {
signatures: SignatureInformation[];
activeSignature?: uinteger;
activeParameter?: uinteger;
}
Document Symbols
textDocument/documentSymbol: DocumentSymbolParams → DocumentSymbol[] | SymbolInformation[] | null
Workspace Symbols
workspace/symbol: WorkspaceSymbolParams → SymbolInformation[] | WorkspaceSymbol[] | null
Code Intelligence Features
Code Completion
textDocument/completion: CompletionParams → CompletionItem[] | CompletionList | null
interface CompletionList {
isIncomplete: boolean;
items: CompletionItem[];
}
interface CompletionItem {
label: string;
kind?: CompletionItemKind;
detail?: string;
documentation?: string | MarkupContent;
sortText?: string;
filterText?: string;
insertText?: string;
textEdit?: TextEdit;
additionalTextEdits?: TextEdit[];
}
Completion Triggers:
- User invoked (Ctrl+Space)
- Trigger characters (., ->, etc.)
- Incomplete completion re-trigger
Code Actions
textDocument/codeAction: CodeActionParams → (Command | CodeAction)[] | null
interface CodeAction {
title: string;
kind?: CodeActionKind;
diagnostics?: Diagnostic[];
isPreferred?: boolean;
disabled?: { reason: string; };
edit?: WorkspaceEdit;
command?: Command;
}
Code Action Kinds:
- quickfix - Fix problems
- refactor - Refactoring operations
- source - Source code actions (organize imports, etc.)
Code Lens
textDocument/codeLens: CodeLensParams → CodeLens[] | null
interface CodeLens {
range: Range;
command?: Command;
data?: any; // For resolve support
}
Formatting Features
Document Formatting
textDocument/formatting: DocumentFormattingParams → TextEdit[] | null
Range Formatting
textDocument/rangeFormatting: DocumentRangeFormattingParams → TextEdit[] | null
On-Type Formatting
textDocument/onTypeFormatting: DocumentOnTypeFormattingParams → TextEdit[] | null
Semantic Features
Semantic Tokens
Since version 3.16.0. The request is sent from the client to the server to resolve semantic tokens for a given file. Semantic tokens are used to add additional color information to a file that depends on language specific symbol information.
textDocument/semanticTokens/full: SemanticTokensParams → SemanticTokens | null
textDocument/semanticTokens/range: SemanticTokensRangeParams → SemanticTokens | null
textDocument/semanticTokens/full/delta: SemanticTokensDeltaParams → SemanticTokens | SemanticTokensDelta | null
Token Encoding:
- 5 integers per token: [deltaLine, deltaStart, length, tokenType, tokenModifiers]
- Relative positioning for efficiency
- Bit flags for modifiers
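As a sketch, decoding that 5-integers-per-token stream back into absolute positions looks like this (the Token shape is illustrative; real clients map tokenType and tokenModifiers through the legend negotiated during initialization):

```typescript
interface Token {
  line: number;           // absolute zero-based line
  start: number;          // absolute zero-based start character
  length: number;
  tokenType: number;      // index into the negotiated token-type legend
  tokenModifiers: number; // bit set over the modifier legend
}

// Decode the flat integer stream. deltaStart is relative to the previous
// token's start only when the token stays on the same line; otherwise it
// is relative to the start of the new line.
function decodeSemanticTokens(data: number[]): Token[] {
  const tokens: Token[] = [];
  let line = 0;
  let start = 0;
  for (let i = 0; i + 4 < data.length; i += 5) {
    const [deltaLine, deltaStart, length, tokenType, tokenModifiers] = data.slice(i, i + 5);
    line += deltaLine;
    start = deltaLine === 0 ? start + deltaStart : deltaStart;
    tokens.push({ line, start, length, tokenType, tokenModifiers });
  }
  return tokens;
}
```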
Inlay Hints
textDocument/inlayHint: InlayHintParams → InlayHint[] | null
interface InlayHint {
position: Position;
label: string | InlayHintLabelPart[];
kind?: InlayHintKind; // Type | Parameter
tooltip?: string | MarkupContent;
paddingLeft?: boolean;
paddingRight?: boolean;
}
Diagnostics
Push Model (Traditional)
textDocument/publishDiagnostics: PublishDiagnosticsParams
interface PublishDiagnosticsParams {
uri: DocumentUri;
version?: integer;
diagnostics: Diagnostic[];
}
Pull Model (Since 3.17)
textDocument/diagnostic: DocumentDiagnosticParams → DocumentDiagnosticReport
workspace/diagnostic: WorkspaceDiagnosticParams → WorkspaceDiagnosticReport
Diagnostic Structure:
interface Diagnostic {
range: Range;
severity?: DiagnosticSeverity; // Error | Warning | Information | Hint
code?: integer | string;
source?: string; // e.g., "typescript"
message: string;
tags?: DiagnosticTag[]; // Unnecessary | Deprecated
relatedInformation?: DiagnosticRelatedInformation[];
}
Implementation Guide
Performance Guidelines
Message Ordering: Responses to requests should be sent in roughly the same order as the requests appear on the server or client side.
State Management:
- Servers should handle partial/incomplete requests gracefully
- Use the ContentModified error for outdated results
- Implement proper cancellation support
Resource Management:
- Language servers run in separate processes
- Avoid memory leaks in long-running servers
- Implement proper cleanup on shutdown
Error Handling
Client Responsibilities:
- Restart crashed servers (with exponential backoff)
- Handle ContentModified errors gracefully
- Validate server responses
Server Responsibilities:
- Return appropriate error codes
- Handle malformed/outdated requests
- Monitor client process health
Transport Considerations
Command Line Arguments:
language-server --stdio # Use stdio
language-server --pipe=<name> # Use named pipe/socket
language-server --socket --port=<port> # Use TCP socket
language-server --node-ipc # Use Node.js IPC
language-server --clientProcessId=<pid> # Monitor client process
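A hypothetical sketch of how a server's entry point might map those flags onto a transport choice (the Transport type and the stdio default are assumptions for illustration, not mandated by the spec):

```typescript
type Transport =
  | { kind: 'stdio' }
  | { kind: 'pipe'; name: string }
  | { kind: 'socket'; port: number }
  | { kind: 'node-ipc' };

// Inspect argv and pick a transport; falls back to stdio when nothing matches.
function parseTransport(argv: string[]): Transport {
  for (const arg of argv) {
    if (arg === '--stdio') return { kind: 'stdio' };
    if (arg.startsWith('--pipe=')) return { kind: 'pipe', name: arg.slice('--pipe='.length) };
    if (arg === '--node-ipc') return { kind: 'node-ipc' };
  }
  if (argv.includes('--socket')) {
    const portArg = argv.find(a => a.startsWith('--port='));
    return { kind: 'socket', port: portArg ? parseInt(portArg.slice('--port='.length), 10) : 0 };
  }
  return { kind: 'stdio' }; // assumed default
}
```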
Testing Strategies
Unit Testing:
- Mock LSP message exchange
- Test individual feature implementations
- Validate message serialization/deserialization
Integration Testing:
- End-to-end editor integration
- Multi-document scenarios
- Error condition handling
Performance Testing:
- Large file handling
- Memory usage patterns
- Response time benchmarks
Advanced Topics
Custom Extensions
Experimental Capabilities:
interface ClientCapabilities {
experimental?: {
customFeature?: boolean;
vendorSpecificExtension?: any;
};
}
Custom Methods:
- Use vendor prefixes: mycompany/customFeature
- Document custom protocol extensions
- Ensure graceful degradation
Security Considerations
Process Isolation:
- Language servers run in separate processes
- Limit file system access appropriately
- Validate all input from untrusted sources
Content Validation:
- Sanitize file paths and URIs
- Validate document versions
- Implement proper input validation
Multi-Language Support
Language Identification:
interface TextDocumentItem {
uri: DocumentUri;
languageId: string; // "typescript", "python", etc.
version: integer;
text: string;
}
Document Selectors:
type DocumentSelector = DocumentFilter[];
interface DocumentFilter {
language?: string; // "typescript"
scheme?: string; // "file", "untitled"
pattern?: string; // "**/*.{ts,js}"
}
Message Reference
Message Types
Request/Response Pattern
Client-to-Server Requests:
- initialize - Server initialization
- textDocument/hover - Get hover information
- textDocument/completion - Get code completions
- textDocument/definition - Go to definition
Server-to-Client Requests:
- client/registerCapability - Register new capabilities
- workspace/configuration - Get configuration settings
- window/showMessageRequest - Show message with actions
Notification Pattern
Client-to-Server Notifications:
- initialized - Initialization complete
- textDocument/didOpen - Document opened
- textDocument/didChange - Document changed
- textDocument/didSave - Document saved
- textDocument/didClose - Document closed
Server-to-Client Notifications:
- textDocument/publishDiagnostics - Send diagnostics
- window/showMessage - Display message
- telemetry/event - Send telemetry data
Special Messages
Dollar Prefixed Messages: Notifications and requests whose methods start with '$/' are messages which are protocol implementation dependent and might not be implementable in all clients or servers.
Examples:
- $/cancelRequest - Cancel an ongoing request
- $/progress - Progress reporting
- $/setTrace - Set trace level
Capabilities System
Not every language server can support all features defined by the protocol. LSP therefore provides 'capabilities'. A capability groups a set of language features.
Capability Exchange
During Initialization:
- Client announces capabilities in the initialize request
- Server announces capabilities in the initialize response
- Both sides adapt behavior based on announced capabilities
Client Capabilities Structure
interface ClientCapabilities {
workspace?: WorkspaceClientCapabilities;
textDocument?: TextDocumentClientCapabilities;
window?: WindowClientCapabilities;
general?: GeneralClientCapabilities;
experimental?: any;
}
Key Client Capabilities:
- textDocument.hover.dynamicRegistration - Support dynamic hover registration
- textDocument.completion.contextSupport - Support completion context
- workspace.workspaceFolders - Multi-root workspace support
- window.workDoneProgress - Progress reporting support
Server Capabilities Structure
interface ServerCapabilities {
textDocumentSync?: TextDocumentSyncKind | TextDocumentSyncOptions;
completionProvider?: CompletionOptions;
hoverProvider?: boolean | HoverOptions;
definitionProvider?: boolean | DefinitionOptions;
referencesProvider?: boolean | ReferenceOptions;
documentSymbolProvider?: boolean | DocumentSymbolOptions;
workspaceSymbolProvider?: boolean | WorkspaceSymbolOptions;
codeActionProvider?: boolean | CodeActionOptions;
// ... many more
}
Dynamic Registration
Servers can register/unregister capabilities after initialization:
// Register new capability
client/registerCapability: {
registrations: [{
id: "uuid",
method: "textDocument/willSaveWaitUntil",
registerOptions: { documentSelector: [{ language: "javascript" }] }
}]
}
// Unregister capability
client/unregisterCapability: {
unregisterations: [{ id: "uuid", method: "textDocument/willSaveWaitUntil" }]
}
Lifecycle Management
Initialization Sequence
1. Client → Server: initialize request

interface InitializeParams {
  processId: integer | null;
  clientInfo?: { name: string; version?: string; };
  rootUri: DocumentUri | null;
  initializationOptions?: any;
  capabilities: ClientCapabilities;
  workspaceFolders?: WorkspaceFolder[] | null;
}

2. Server → Client: initialize response

interface InitializeResult {
  capabilities: ServerCapabilities;
  serverInfo?: { name: string; version?: string; };
}

3. Client → Server: initialized notification
- Signals completion of initialization
- Server can now send requests to the client
Shutdown Sequence
1. Client → Server: shutdown request
- Server must not accept new requests (except exit)
- Server should finish processing ongoing requests
2. Client → Server: exit notification
- Server should exit immediately
- Exit code: 0 if shutdown was called, 1 otherwise
Process Monitoring
Client Process Monitoring:
- Server can monitor the client process via the processId from initialize
- Server should exit if the client process dies
Server Crash Handling:
- Client should restart crashed servers
- Implement exponential backoff to prevent restart loops
Document Synchronization
Client support for the textDocument/didOpen, textDocument/didChange and textDocument/didClose notifications is mandatory in the protocol, and clients cannot opt out of supporting them.
Text Document Sync Modes
enum TextDocumentSyncKind {
None = 0, // No synchronization
Full = 1, // Full document sync on every change
Incremental = 2 // Incremental sync (deltas only)
}
Document Lifecycle
Document Open
textDocument/didOpen: {
textDocument: {
uri: "file:///path/to/file.js",
languageId: "javascript",
version: 1,
text: "console.log('hello');"
}
}
Document Change
textDocument/didChange: {
textDocument: { uri: "file:///path/to/file.js", version: 2 },
contentChanges: [{
range: { start: { line: 0, character: 12 }, end: { line: 0, character: 17 } },
text: "world"
}]
}
Change Event Types:
- Full text: Replace entire document
- Incremental: Specify range and replacement text
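A sketch of applying an incremental change: convert the line/character positions (measured in UTF-16 code units by default) into absolute string offsets, then splice in the replacement text. The helper names are illustrative:

```typescript
interface Position { line: number; character: number; }
interface Range { start: Position; end: Position; }

// Walk past pos.line newline characters, then add the character offset.
// Assumes pos.line actually exists in the text.
function offsetAt(text: string, pos: Position): number {
  let offset = 0;
  for (let line = 0; line < pos.line; line++) {
    offset = text.indexOf('\n', offset) + 1;
  }
  return offset + pos.character;
}

// Apply one incremental contentChange by replacing the covered range.
function applyChange(text: string, range: Range, newText: string): string {
  const start = offsetAt(text, range.start);
  const end = offsetAt(text, range.end);
  return text.slice(0, start) + newText + text.slice(end);
}
```

A full-text change is the degenerate case: the client simply sends the entire new document with no range.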
Document Save
// Optional: Before save
textDocument/willSave: {
textDocument: { uri: "file:///path/to/file.js" },
reason: TextDocumentSaveReason.Manual
}
// Optional: Before save with text edits
textDocument/willSaveWaitUntil → TextEdit[]
// After save
textDocument/didSave: {
textDocument: { uri: "file:///path/to/file.js" },
text?: "optional full text"
}
Document Close
textDocument/didClose: {
textDocument: { uri: "file:///path/to/file.js" }
}
Position Encoding
Prior to 3.17 the offsets were always based on a UTF-16 string representation. Since 3.17 clients and servers can agree on a different string encoding representation (e.g. UTF-8).
Supported Encodings:
- utf-16 (default, mandatory)
- utf-8
- utf-32
Position Structure:
interface Position {
line: uinteger; // Zero-based line number
character: uinteger; // Zero-based character offset
}
interface Range {
start: Position;
end: Position;
}
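The encoding choice only changes offsets for characters outside the Basic Multilingual Plane, which occupy two UTF-16 code units. A sketch of converting a UTF-16 character offset to a code-point offset (what a utf-32 client would use):

```typescript
// Count code points preceding a UTF-16 offset. Characters above U+FFFF
// (e.g. most emoji) take two UTF-16 code units but one code point.
function utf16ToCodePointOffset(text: string, utf16Offset: number): number {
  let codePoints = 0;
  for (let i = 0; i < utf16Offset; ) {
    const cp = text.codePointAt(i)!;
    i += cp > 0xffff ? 2 : 1;
    codePoints++;
  }
  return codePoints;
}
```

This is why a server and client that disagree on encoding will place diagnostics and edits at the wrong columns in files containing emoji or rare CJK characters.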
Workspace Features
Multi-Root Workspaces
workspace/workspaceFolders → WorkspaceFolder[] | null
interface WorkspaceFolder {
uri: URI;
name: string;
}
// Notification when folders change
workspace/didChangeWorkspaceFolders: DidChangeWorkspaceFoldersParams
Configuration Management
// Server requests configuration from client
workspace/configuration: ConfigurationParams → any[]
interface ConfigurationItem {
scopeUri?: URI; // Scope (file/folder) for the setting
section?: string; // Setting name (e.g., "typescript.preferences")
}
// Client notifies server of configuration changes
workspace/didChangeConfiguration: DidChangeConfigurationParams
File Operations
File Watching
workspace/didChangeWatchedFiles: DidChangeWatchedFilesParams
interface FileEvent {
uri: DocumentUri;
type: FileChangeType; // Created | Changed | Deleted
}
File System Operations
// Before operations (can return WorkspaceEdit)
workspace/willCreateFiles: CreateFilesParams → WorkspaceEdit | null
workspace/willRenameFiles: RenameFilesParams → WorkspaceEdit | null
workspace/willDeleteFiles: DeleteFilesParams → WorkspaceEdit | null
// After operations (notifications)
workspace/didCreateFiles: CreateFilesParams
workspace/didRenameFiles: RenameFilesParams
workspace/didDeleteFiles: DeleteFilesParams
Command Execution
workspace/executeCommand: ExecuteCommandParams → any
interface ExecuteCommandParams {
command: string; // Command identifier
arguments?: any[]; // Command arguments
}
// Server applies edits to workspace
workspace/applyEdit: ApplyWorkspaceEditParams → ApplyWorkspaceEditResult
Window Features
Message Display
Show Message (Notification)
window/showMessage: ShowMessageParams
interface ShowMessageParams {
type: MessageType; // Error | Warning | Info | Log | Debug
message: string;
}
Show Message Request
window/showMessageRequest: ShowMessageRequestParams → MessageActionItem | null
interface ShowMessageRequestParams {
type: MessageType;
message: string;
actions?: MessageActionItem[]; // Buttons to show
}
Show Document
window/showDocument: ShowDocumentParams → ShowDocumentResult
interface ShowDocumentParams {
uri: URI;
external?: boolean; // Open in external program
takeFocus?: boolean; // Focus the document
selection?: Range; // Select range in document
}
Progress Reporting
Work Done Progress
// Server creates progress token
window/workDoneProgress/create: WorkDoneProgressCreateParams → void
// Report progress using $/progress
$/progress: ProgressParams<WorkDoneProgressBegin | WorkDoneProgressReport | WorkDoneProgressEnd>
// Client can cancel progress
window/workDoneProgress/cancel: WorkDoneProgressCancelParams
Progress Reporting Pattern
// Begin
{ kind: "begin", title: "Indexing", cancellable: true, percentage: 0 }
// Report
{ kind: "report", message: "Processing file.ts", percentage: 25 }
// End
{ kind: "end", message: "Indexing complete" }
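The three payloads above share a single progress token, carried in each $/progress notification. A self-contained sketch modeling the begin/report/end sequence a server would emit while indexing a known list of files (in a real server each message is sent as work happens, after window/workDoneProgress/create succeeds for the token):

```typescript
// The three progress payload shapes, as a discriminated union.
type WorkDoneProgress =
  | { kind: 'begin'; title: string; cancellable?: boolean; percentage?: number }
  | { kind: 'report'; message?: string; percentage?: number }
  | { kind: 'end'; message?: string };

interface ProgressParams { token: string; value: WorkDoneProgress; }

// Produce the full $/progress message sequence for a fixed file list.
function indexingProgress(token: string, files: string[]): ProgressParams[] {
  const messages: ProgressParams[] = [
    { token, value: { kind: 'begin', title: 'Indexing', cancellable: true, percentage: 0 } },
  ];
  files.forEach((file, i) => {
    messages.push({
      token,
      value: {
        kind: 'report',
        message: `Processing ${file}`,
        percentage: Math.round(((i + 1) / files.length) * 100),
      },
    });
  });
  messages.push({ token, value: { kind: 'end', message: 'Indexing complete' } });
  return messages;
}
```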
Logging & Telemetry
window/logMessage: LogMessageParams // Development logs
telemetry/event: any // Usage analytics
Version History
LSP 3.17 (Current)
Major new features: type hierarchy, inline values, inlay hints, notebook document support, and a meta model that describes the 3.17 LSP version.
Key Features:
- Type hierarchy support
- Inline value provider
- Inlay hints
- Notebook document synchronization
- Diagnostic pull model
- Position encoding negotiation
LSP 3.16
Key Features:
- Semantic tokens
- Call hierarchy
- Moniker support
- File operation events
- Linked editing ranges
- Code action resolve
LSP 3.15
Key Features:
- Progress reporting
- Selection ranges
- Signature help context
LSP 3.0
Breaking Changes:
- Client capabilities system
- Dynamic registration
- Workspace folders
- Document link support
VSCode extension development patterns with separate server components
VSCode extensions that integrate separate server components represent a sophisticated architecture pattern enabling powerful developer tooling. Based on extensive research across Language Server Protocol implementations, Debug Adapter Protocol systems, and modern Model Context Protocol servers, clear patterns emerge for building developer-friendly, maintainable extensions.
Standard patterns for local VSCode extension development
The Extension Development Host serves as the cornerstone of VSCode extension development. Pressing F5 launches a separate VSCode instance with your extension loaded, enabling immediate testing and debugging. The standard workflow utilizes the Yeoman generator (npx yo code) to scaffold projects with proper TypeScript compilation, debug configurations, and watch modes already configured.
For local installation beyond the Extension Development Host, developers primarily use vsce packaging combined with manual installation. The vsce package command creates a .vsix file that can be installed via code --install-extension path/to/extension.vsix or through the VSCode UI. This approach proves invaluable for testing production-like behavior before marketplace publishing.
Symlink methods offer an alternative for rapid iteration. Creating a symbolic link from your development directory into ~/.vscode/extensions/ enables VSCode to load the extension directly from source, though this requires manual reloads and careful package.json configuration.
Node.js CLI tools and local development patterns
The Node.js ecosystem provides multiple approaches for "install from local checkout" scenarios. While npm link remains the traditional approach, creating global symlinks that can be referenced in other projects, it suffers from well-documented issues with duplicate dependencies, platform inconsistencies, and peer dependency conflicts.
yalc emerges as the superior alternative, avoiding symlink problems by copying files to a .yalc directory while maintaining proper dependency resolution. The workflow involves yalc publish in the package directory followed by yalc add package-name in consuming projects, with yalc push enabling instant updates across all consumers.
For monorepo architectures, workspace configurations in npm, yarn, or pnpm provide native support for local dependencies. The "workspace:*" protocol in pnpm offers particularly elegant handling of internal dependencies while maintaining compatibility with standard npm workflows.
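As a sketch of the pnpm variant, the workspace members are declared once at the repository root (package and directory names here are illustrative):

```yaml
# pnpm-workspace.yaml, at the repository root
packages:
  - "client"
  - "server"
  - "shared"
```

A member such as client/package.json can then depend on a sibling with "@example/shared": "workspace:*"; pnpm links the local copy during development and rewrites the specifier to a concrete version on publish.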
Architecture patterns for VSCode extensions with server components
The Language Server Protocol exemplifies the most mature pattern for extension-server communication. Extensions act as thin clients managing the server lifecycle while the server handles computationally intensive language analysis. Communication occurs via JSON-RPC over stdio, with the vscode-languageclient and vscode-languageserver npm packages providing robust abstractions.
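On the client side, the wiring amounts to pointing a LanguageClient at the server's entry module. A sketch of an extension entry point in the style of Microsoft's lsp-sample; it assumes a compiled server at server/out/server.js and the vscode-languageclient dependency, and it only runs inside the Extension Development Host:

```typescript
import * as path from 'path';
import { ExtensionContext } from 'vscode';
import {
  LanguageClient, LanguageClientOptions, ServerOptions, TransportKind,
} from 'vscode-languageclient/node';

let client: LanguageClient | undefined;

export function activate(context: ExtensionContext) {
  // Assumed location of the compiled server bundle.
  const serverModule = context.asAbsolutePath(path.join('server', 'out', 'server.js'));

  // Run the server over Node IPC; the debug variant opens an inspector port
  // so a debugger can attach to the server process.
  const serverOptions: ServerOptions = {
    run: { module: serverModule, transport: TransportKind.ipc },
    debug: {
      module: serverModule,
      transport: TransportKind.ipc,
      options: { execArgv: ['--nolazy', '--inspect=6009'] },
    },
  };

  const clientOptions: LanguageClientOptions = {
    documentSelector: [{ scheme: 'file', language: 'plaintext' }],
  };

  client = new LanguageClient('exampleLsp', 'Example LSP', serverOptions, clientOptions);
  client.start(); // also spawns the server process
}

export function deactivate(): Thenable<void> | undefined {
  return client?.stop();
}
```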
Monorepo structures dominate successful implementations, organizing code into client/, server/, and shared/ directories. This approach simplifies dependency management, enables code sharing for protocol definitions, and supports unified build processes. Projects like rust-analyzer and the TypeScript language server demonstrate this pattern effectively.
IPC mechanisms vary by use case. While stdio remains standard for language servers, some extensions utilize TCP sockets for network communication, Node.js IPC for tighter integration, or HTTP APIs for web services. The new Model Context Protocol introduces Server-Sent Events as an alternative for real-time streaming scenarios.
Concrete examples from the ecosystem
Language servers represent the most numerous examples. The TypeScript language server wraps the official TypeScript compiler services, rust-analyzer provides incremental compilation for Rust, gopls serves as the official Go language server, and python-lsp-server offers plugin-based Python support. Each demonstrates slightly different architectural choices while following core LSP patterns.
Debug adapters follow similar architectural patterns through the Debug Adapter Protocol. Extensions launch debug adapter processes that translate between VSCode's generic debugging UI and language-specific debuggers, enabling consistent debugging experiences across languages.
The Model Context Protocol ecosystem, with over 200 server implementations, showcases modern patterns for AI-powered extensions. MCP servers handle everything from filesystem access to database queries, demonstrating how extensions can safely delegate complex operations to separate processes.
Best practices for setup and documentation
Successful projects prioritize one-command setup experiences. Scripts like npm run setup or dedicated setup.sh files handle dependency installation, compilation, and environment validation. Cross-platform compatibility requires careful attention - using Node.js-based setup scripts often provides better portability than shell scripts.
Documentation structure follows consistent patterns across successful projects. README files begin with quick start instructions, followed by architecture overviews using ASCII diagrams, detailed development workflows, and troubleshooting guides. The most effective documentation includes both conceptual explanations and concrete command examples.
Developer experience optimizations include automatic environment validation, clear error messages with suggested fixes, and IDE configuration files. Providing .vscode/launch.json configurations for debugging both extension and server components simultaneously significantly improves the development experience.
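A sketch of such a launch.json, assuming the client/server monorepo layout and a server that opens Node inspector port 6009 in its debug configuration (paths and the port are assumptions to adapt):

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Launch Extension",
      "type": "extensionHost",
      "request": "launch",
      "args": ["--extensionDevelopmentPath=${workspaceFolder}"],
      "outFiles": ["${workspaceFolder}/client/out/**/*.js"]
    },
    {
      "name": "Attach to Server",
      "type": "node",
      "request": "attach",
      "port": 6009,
      "restart": true,
      "outFiles": ["${workspaceFolder}/server/out/**/*.js"]
    }
  ],
  "compounds": [
    {
      "name": "Extension + Server",
      "configurations": ["Launch Extension", "Attach to Server"]
    }
  ]
}
```

Selecting the compound starts the Extension Development Host and attaches to the server in one step, so breakpoints work on both sides of the protocol.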
Technical implementation approaches
Extension activation should utilize the most specific activation events possible. Language-specific activation (onLanguage:typescript) provides better performance than universal activation. Lazy loading strategies defer heavy imports until actually needed, reducing startup time.
Development vs production configurations require environment-aware connection logic. Development typically uses localhost connections with relaxed security, while production might involve authenticated HTTPS connections. Configuration schemas in package.json enable users to customize these settings.
Hot reload mechanisms vary by component. While VSCode extensions require manual reloads (Ctrl+R in the Extension Development Host), server components can utilize nodemon, ts-node-dev, or webpack watch modes for automatic recompilation. State preservation between reloads improves the development experience.
Debugging configurations benefit from compound launch configurations that start both extension and server with attached debuggers. Comprehensive logging systems with configurable verbosity levels prove essential for diagnosing issues across process boundaries.
Developer workflow recommendations
For rapid local development, combine the Extension Development Host for extension code with yalc for any separate npm packages. This avoids npm link issues while maintaining a fast feedback loop. Use watch modes for automatic compilation but expect to manually reload the Extension Development Host.
Project structure should follow monorepo patterns even for simple extensions. This provides a clear upgrade path as complexity grows and establishes patterns that scale. Use workspaces (npm, yarn, or pnpm) to manage dependencies across components.
Communication patterns should start with the simplest approach that meets requirements. Stdio suffices for most scenarios, with upgrades to sockets or HTTP only when specific features demand it. Following established protocols like LSP or DAP provides battle-tested patterns and existing tooling.
Testing strategies must cover both components. Extension tests run in the Extension Development Host environment, while server tests can use standard Node.js testing frameworks. Integration tests that verify communication between components prevent subtle protocol mismatches.
Key takeaways for implementation
Building VSCode extensions with separate server components requires balancing sophistication with developer experience. The Extension Development Host provides an excellent inner loop for extension development, while tools like yalc solve local dependency management. Monorepo structures with clear client/server separation enable scalable architectures.
Following established patterns from successful language servers and debug adapters provides a roadmap for implementation. Focus on developer experience through comprehensive setup scripts, clear documentation, and robust debugging configurations. Most importantly, start simple with stdio communication and monorepo structure, adding complexity only as requirements demand.
The ecosystem demonstrates that this architecture enables powerful developer tools while maintaining reasonable complexity. By following these patterns and learning from successful implementations, developers can create extensions that provide rich functionality while remaining maintainable and approachable for contributors.