Introduction

Here we go again with a new post! This time, I’m sharing something that has been bugging me for a while.

I’ve been using WSL2 as my daily driver for months — coding, running containers, building projects. But I always had this feeling that something was off. My Windows machine would become sluggish after a few hours of WSL2 usage, I was only using a fraction of my CPU cores, and every time I wanted to test a web application visually, I had to leave the terminal and switch to the browser manually.

Then I discovered two things that changed my setup completely:

  1. WSL2 has two configuration files — and the defaults are terrible for development workloads
  2. Chrome DevTools Protocol can be exposed to WSL2, so agentic CLI tools like Claude Code, Gemini CLI, and GitHub Copilot can actually see and interact with your browser

Let me show you what I did.


Part 1: Optimizing WSL2 Configuration

WSL2 has two configuration files that control its behavior:

| File | Location | Scope |
|------|----------|-------|
| .wslconfig | C:\Users\<you>\.wslconfig | Global — applies to all WSL2 distros |
| wsl.conf | /etc/wsl.conf inside the distro | Per-distro settings |

Both require a wsl --shutdown from PowerShell to take effect. Don’t be scared by the number of settings — most of them are set-and-forget.

The .wslconfig file (Windows-side)

This file controls how much hardware WSL2 can use. The defaults are surprisingly conservative — or in some cases, surprisingly greedy.

Before writing your config, let’s check what you’re working with:

# In PowerShell — get your total logical processors and RAM
wmic cpu get NumberOfLogicalProcessors
systeminfo | findstr /C:"Total Physical Memory"
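If wmic is unavailable on your build (it is deprecated and removed from recent Windows releases), the CIM cmdlets below should report the same numbers. A sketch; adjust as needed:

```powershell
# PowerShell (CIM) equivalents of the wmic/systeminfo queries above
(Get-CimInstance Win32_Processor | Measure-Object -Property NumberOfLogicalProcessors -Sum).Sum
[math]::Round((Get-CimInstance Win32_ComputerSystem).TotalPhysicalMemory / 1GB)
```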

Now create or edit C:\Users\<you>\.wslconfig:

[wsl2]
# Allocate ~75% of your total RAM to WSL2
# Example: 24GB for a 32GB system, 48GB for 64GB, etc.
memory=<75% of your RAM>

# Give WSL2 roughly 60-70% of your CPU threads, leaving the rest for Windows
# Example: 6 out of 8, 10 out of 16, 14 out of 22, etc.
processors=<see table below>

# Generous swap prevents OOM-kills during heavy builds (Docker, Webpack, etc.)
swap=16GB

# Share host network directly — localhost is the same in Windows and WSL2
# Great for VPN/corporate networks. Note: can conflict with Docker Desktop
networkingMode=mirrored

# Inherit Windows proxy settings automatically
autoProxy=true

[experimental]
# WSL2 historically never releases RAM back to Windows. This fixes it.
autoMemoryReclaim=gradual

# The WSL2 virtual disk grows as you add files but never shrinks. This enables auto-compaction.
sparseVhd=true

Sizing guide

Use this as a starting point — adjust based on how much multitasking you do on the Windows side:

| Your total RAM | memory= | Your logical CPUs | processors= |
|----------------|---------|-------------------|-------------|
| 16 GB | 12GB | 8 | 6 |
| 32 GB | 24GB | 12 | 8 |
| 64 GB | 48GB | 16 | 10 |
| 128 GB | 96GB | 22+ | 14-16 |

The goal is to give WSL2 enough resources for builds, containers, and dev servers — while keeping Windows responsive for your browser, editor, and meetings.
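If you’d rather compute these numbers than eyeball the table, here is a small POSIX shell sketch of the same rule of thumb (roughly 75% of RAM, two thirds of threads rounded up; the exact ratios are a judgment call, not gospel):

```shell
#!/bin/sh
# Rule-of-thumb .wslconfig sizing, approximately matching the guide above.
suggest() {
    ram_gb=$1   # total RAM in GB
    cpus=$2     # logical processors
    mem=$(( ram_gb * 3 / 4 ))        # ~75% of total RAM
    procs=$(( (cpus * 2 + 2) / 3 ))  # ~2/3 of threads, rounded up
    echo "memory=${mem}GB processors=${procs}"
}

suggest 32 12   # -> memory=24GB processors=8
```

For a 16-thread machine this rounds to 11 where the table says 10; either is a fine starting point, then adjust based on how Windows feels under load.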

What’s going on here?

processors: A common mistake is leaving this unset (by default, WSL2 can use all of your logical processors, competing with Windows under heavy load) or setting it too low. Run the PowerShell commands above to see your actual thread count, and allocate accordingly.

networkingMode=mirrored: This is the key setting that makes the Chrome DevTools setup work! With mirrored networking, localhost is shared between Windows and WSL2. A Chrome instance listening on localhost:9222 on Windows is directly accessible from WSL2 without any port forwarding. Cool!

⚠️ Warning: mirrored networking can break Docker Desktop. This is actually a good reason to ditch Docker Desktop entirely and run Docker Engine natively inside WSL2 instead.

Docker Desktop adds a separate WSL2 backend distro, its own networking layer, and a GUI process that consumes resources — all unnecessary overhead when you already have a full Linux environment in WSL2.

Installing Docker Engine directly inside your WSL2 distro gives you:

  • No networking conflicts with mirrored mode
  • Lower memory usage — no extra Docker Desktop VM or backend distro
  • Full control over Docker daemon configuration
  • Faster I/O — containers access the native ext4 filesystem directly

Follow the official Docker Engine install guide for Ubuntu (or your distro) inside WSL2. Once installed, just sudo systemctl enable --now docker (enable registers it at boot, --now starts it immediately) and you’re set — no Desktop app needed.

autoMemoryReclaim=gradual: This one is a game changer. Without it, WSL2 acts like a memory black hole — it allocates RAM as needed but never releases it, even after processes exit. The gradual setting lets the Linux kernel slowly return unused pages to Windows. If you’ve ever wondered why your Windows machine feels sluggish after running WSL2 for a while — this is why.

sparseVhd=true: WSL2 stores your Linux filesystem in a .vhdx virtual disk. By default, this file only grows. Delete 10GB of files? The .vhdx stays the same size. This setting enables automatic compaction. Trust me, you want this.
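One hedge worth adding: as far as I know, sparseVhd=true only affects VHDs created after the setting is in place. On recent WSL releases you can flag an existing distro’s disk as sparse from PowerShell (the distro name below is an example; list yours with wsl -l):

```powershell
# From PowerShell: mark an existing distro's VHD as sparse (recent WSL builds only)
wsl --shutdown
wsl --manage Ubuntu --set-sparse true   # "Ubuntu" is an example name; see: wsl -l
```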

The wsl.conf file (Linux-side)

Now let’s configure the Linux side. This file lives inside your WSL2 distro at /etc/wsl.conf:

[boot]
systemd=true
command="sysctl -w vm.max_map_count=262144"

[automount]
enabled = true
root = /mnt/
options = "metadata,rw,umask=22,fmask=11,case=off"
mountFsTab = false

[network]
generateHosts = true
generateResolvConf = true

[interop]
enabled=true
appendWindowsPath=true

What each section does

systemd=true: Enables systemd inside WSL2, which is required for running services like Docker, PostgreSQL, or any daemon that expects a proper init system.

command="sysctl -w vm.max_map_count=262144": This kernel parameter is required by Elasticsearch and OpenSearch. Without it, these services crash on startup with a cryptic error. Setting it in the [boot] section ensures it’s applied every time WSL starts.
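A quick way to confirm the boot command actually applied (a sketch; 262144 is the minimum Elasticsearch documents):

```shell
#!/bin/sh
# Check vm.max_map_count against the Elasticsearch/OpenSearch minimum.
# /proc/sys/vm/max_map_count holds the same value `sysctl vm.max_map_count` prints.
required=262144
current=$(cat /proc/sys/vm/max_map_count)
if [ "$current" -ge "$required" ]; then
    echo "vm.max_map_count=$current (OK)"
else
    echo "vm.max_map_count=$current (too low: check /etc/wsl.conf, then wsl --shutdown)"
fi
```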

metadata mount option: This one is critical. Without it, all files on Windows-mounted drives (/mnt/c/, etc.) appear with chmod 777 permissions. This breaks:

  • SSH keys (which require 600 permissions)
  • Git operations (which detect permission changes as modifications)
  • Any tool that checks file permissions
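You can check whether a mount honors permissions with a throwaway file. This sketch uses /tmp by default; point TARGET_DIR at a drvfs mount such as /mnt/c/tmp (just an example path) to test the metadata option itself:

```shell
#!/bin/sh
# Create a temp file, chmod it, and read back the effective mode.
# On a drvfs mount WITHOUT "metadata", the mode ignores the chmod
# instead of becoming the 600 we asked for.
TARGET_DIR=${TARGET_DIR:-/tmp}
f=$(mktemp -p "$TARGET_DIR")
chmod 600 "$f"
echo "mode after chmod 600: $(stat -c '%a' "$f")"
rm -f "$f"
```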

case=off: Prevents case-sensitivity issues when Windows tools (VS Code, file explorer) interact with files on mounted drives. Without this, you can accidentally create File.txt and file.txt in the same directory — which Windows doesn’t support.

appendWindowsPath=true: Lets you call Windows executables directly from WSL2. This means code . opens VS Code, explorer.exe . opens File Explorer, and — importantly for our next section — you can launch Chrome from the WSL2 terminal.

Let’s verify it works!

After editing both files, shut down WSL2:

# From PowerShell on Windows
wsl --shutdown

Then reopen your WSL2 terminal and verify:

# Should show your new processor count
nproc

# Should show the updated vm.max_map_count
sysctl vm.max_map_count

If everything looks good, we’re ready for the fun part!


Part 2: Chrome DevTools MCP from WSL2

What is MCP and why should I care?

The Model Context Protocol (MCP) is an open standard that lets AI tools connect to external systems through a unified interface. Think of it as a plugin system for AI assistants.

The Chrome DevTools MCP server exposes Chrome’s DevTools Protocol through MCP, allowing agentic CLI tools to:

  • Navigate to URLs and take screenshots
  • Click elements, fill forms, and interact with pages
  • Inspect network requests and console logs
  • Run Lighthouse audits for performance, accessibility, and SEO
  • Execute JavaScript in the page context

This works with any agentic CLI that supports MCP servers — Claude Code, Gemini CLI, GitHub Copilot CLI, and more.

Architecture

The setup is actually simple thanks to networkingMode=mirrored:

Chrome on Windows connected to agentic CLI tools in WSL2 via chrome-devtools-mcp

Step 1: Create a Chrome launcher script

Chrome needs to be started with --remote-debugging-port to expose the DevTools Protocol. Let’s create a helper script:

mkdir -p ~/.local/bin

cat > ~/.local/bin/chrome-debug << 'SCRIPT'
#!/bin/bash
# Launch Windows Chrome with remote debugging for CDP/MCP integration

PORT=${CHROME_DEBUG_PORT:-9222}
URL="${1:-about:blank}"
CHROME="/mnt/c/Program Files/Google/Chrome/Application/chrome.exe"

# Check if Chrome is already running with debugging
if curl -s "http://localhost:$PORT/json/version" > /dev/null 2>&1; then
    echo "Chrome DevTools already listening on port $PORT"
    curl -s "http://localhost:$PORT/json/version" | python3 -m json.tool 2>/dev/null
    exit 0
fi

echo "Launching Chrome with remote debugging on port $PORT..."
"$CHROME" \
    --remote-debugging-port=$PORT \
    --user-data-dir="C:\\Temp\\chrome-debug-profile" \
    --no-first-run \
    --no-default-browser-check \
    "$URL" &

# Wait for Chrome to start
for i in $(seq 1 10); do
    if curl -s "http://localhost:$PORT/json/version" > /dev/null 2>&1; then
        echo "Chrome DevTools ready on port $PORT"
        curl -s "http://localhost:$PORT/json/version" | python3 -m json.tool 2>/dev/null
        exit 0
    fi
    sleep 1
done

echo "ERROR: Chrome did not start within 10 seconds"
exit 1
SCRIPT

chmod +x ~/.local/bin/chrome-debug

ℹ️ Note: The --user-data-dir flag creates a separate Chrome profile in C:\Temp\chrome-debug-profile. This means the debug Chrome instance won’t interfere with your regular browsing session — different tabs, different extensions, different cookies.

Make sure ~/.local/bin is in your PATH. If it’s not, add this to your ~/.zshrc or ~/.bashrc:

export PATH="$HOME/.local/bin:$PATH"

Step 2: Configure the MCP server

Now, the configuration depends on which agentic CLI tool you use. Keep calm, I’ve got you covered for the most popular ones. Here we go!

Claude Code

Add to ~/.claude.json under mcpServers:

{
  "mcpServers": {
    "chrome-devtools": {
      "type": "stdio",
      "command": "npx",
      "args": [
        "-y",
        "chrome-devtools-mcp@latest",
        "-u",
        "http://localhost:9222"
      ]
    }
  }
}

Gemini CLI

Add to ~/.gemini/settings.json:

{
  "mcpServers": {
    "chrome-devtools": {
      "command": "npx",
      "args": [
        "-y",
        "chrome-devtools-mcp@latest",
        "-u",
        "http://localhost:9222"
      ]
    }
  }
}

GitHub Copilot (VS Code)

Add to your VS Code settings.json:

{
  "mcp": {
    "servers": {
      "chrome-devtools": {
        "command": "npx",
        "args": [
          "-y",
          "chrome-devtools-mcp@latest",
          "-u",
          "http://localhost:9222"
        ]
      }
    }
  }
}

GitHub Copilot CLI

Add to ~/.copilot/mcp-config.json:

{
  "mcpServers": {
    "chrome-devtools": {
      "type": "local",
      "command": "npx",
      "args": [
        "-y",
        "chrome-devtools-mcp@latest",
        "-u",
        "http://localhost:9222"
      ],
      "tools": ["*"]
    }
  }
}

ℹ️ Note: The Copilot CLI uses "type": "local" and requires a "tools" field to specify which tools to enable. ["*"] enables all available tools.

Step 3: Let’s test it!

  1. Launch Chrome with debugging:

chrome-debug https://example.com

  2. Verify the connection:

curl -s http://localhost:9222/json/version | python3 -m json.tool

You should see something like:

{
    "Browser": "Chrome/134.0.6998.89",
    "Protocol-Version": "1.3",
    "User-Agent": "Mozilla/5.0 ...",
    "V8-Version": "13.4.114.11",
    "WebKit-Version": "537.36",
    "webSocketDebuggerUrl": "ws://localhost:9222/devtools/browser/..."
}

If you see this — it works!
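When you start scripting against CDP directly, the field you will usually need from that payload is webSocketDebuggerUrl. A small extraction sketch (sample JSON inlined; in practice, pipe the live curl output into the python3 one-liner instead):

```shell
#!/bin/sh
# Pull webSocketDebuggerUrl out of a /json/version payload.
# Sample payload inlined for illustration; normally you would run:
#   curl -s http://localhost:9222/json/version | python3 -c '...'
json='{"Browser":"Chrome/134.0.0.0","webSocketDebuggerUrl":"ws://localhost:9222/devtools/browser/abc123"}'
ws=$(printf '%s' "$json" | python3 -c 'import sys, json; print(json.load(sys.stdin)["webSocketDebuggerUrl"])')
echo "$ws"   # -> ws://localhost:9222/devtools/browser/abc123
```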

  3. Now from your agentic CLI, try interacting with the browser. For example, in Claude Code:

“Navigate to https://example.com and take a screenshot”

The tool should navigate Chrome, capture the page, and return the screenshot directly in the conversation. Pretty cool, right?

What can you do with this?

Once connected, your agentic CLI can:

| Capability | Example use case |
|------------|------------------|
| Navigate & screenshot | Visual regression testing, design review |
| Click & fill forms | End-to-end testing, form automation |
| Network inspection | Debug API calls, check request/response payloads |
| Console logs | Catch JavaScript errors during development |
| Lighthouse audits | Performance, accessibility, and SEO checks |
| Execute JavaScript | Extract data, test DOM manipulations |
| Performance tracing | Profile page load, find bottlenecks |

This turns your terminal-based AI assistant into something that can actually see and interact with your web applications.


Conclusion

That’s all folks! In this post, we went through two things that significantly improved my daily development workflow on Windows with WSL2:

  1. Tuning WSL2 — two small configuration files (.wslconfig and wsl.conf) that can dramatically change how WSL2 behaves, especially autoMemoryReclaim and networkingMode=mirrored
  2. Chrome DevTools MCP — connecting Chrome running on Windows to agentic CLI tools inside WSL2, so they can actually see and interact with web pages

The Chrome DevTools MCP server is open source and actively maintained at github.com/ChromeDevTools/chrome-devtools-mcp. It works with any MCP-compatible tool, making it a vendor-neutral solution for browser automation from the terminal.

I really encourage you to try this setup — especially the mirrored networking + Docker Engine combo. Once you get rid of Docker Desktop, you won’t miss it!

Cheers!