Connecting to Chrome DevTools MCP

Hi,

Will StudioAssist in Agent Mode support connecting to Chrome DevTools MCP-style servers in the next 1-2 releases to enable deeper UI inspection and automatic locator generation?

Is there a feature-request ticket or public roadmap item to enable browser-side instrumentation (Chrome DevTools MCP) to feed object capture into StudioAssist?

https://github.com/ChromeDevTools/chrome-devtools-mcp

Thanks

3 Likes

Hi friend,

Thank you for the great question. This is an important topic, as many teams are exploring deeper UI inspection and more advanced object-capture workflows.

Looping in @Shin from our product team to help provide clarity on:

  • Whether StudioAssist Agent Mode will support connecting to Chrome DevTools MCP-style servers in the upcoming releases
  • Any existing feature request, internal ticket, or public roadmap item related to browser-side instrumentation for object capture
  • How this aligns with future enhancements for automatic locator generation

Shin, your insights here would be super helpful. :folded_hands:

Thanks again for raising this!

1 Like

Also, will Katalon allow running an LLM/SLM locally if the user doesn't want to use the remote options?

1 Like

Dinesh - Is your question about whether you will be able to use a self-hosted LLM/SLM? If so, you should be able to use the "Use OpenAI-compatible provider" setting for StudioAssist. For example, here is my StudioAssist configured to use the GPT OSS model running in LM Studio on my MacBook Pro.
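If it helps to see what "OpenAI-compatible" means concretely: any local server that accepts the standard chat-completions request shape will work. A minimal sketch, where the endpoint URL and model name are assumptions based on LM Studio's defaults, not anything StudioAssist-specific (StudioAssist sends these requests for you once the provider is configured):

```shell
# Illustrative only: the request body an OpenAI-compatible endpoint accepts.
# The model name below is an assumption; use whatever model ID your local
# server reports.
cat > request.json <<'EOF'
{
  "model": "openai/gpt-oss-20b",
  "messages": [
    {"role": "user", "content": "Suggest a CSS locator for a login button"}
  ]
}
EOF

# With LM Studio running locally, you could sanity-check the endpoint with:
#   curl http://localhost:1234/v1/chat/completions \
#        -H "Content-Type: application/json" -d @request.json
cat request.json
```

If a local server answers that request with a chat-completions response, StudioAssist's OpenAI-compatible provider setting should be able to point at it.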

2 Likes

You can actually use Chrome DevTools MCP with StudioAssist today, but it requires a little extra work to configure it. You will configure it as an SSE MCP server running via mcp-proxy. Once you have mcp-proxy installed, you can start Chrome DevTools MCP using the command:

mcp-proxy --host=0.0.0.0 --port=8000 -- npx -y chrome-devtools-mcp@latest

(You may need to pick another port if 8000 is already in use.)

Once mcp-proxy is running, you can configure Chrome DevTools MCP in StudioAssist:
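The exact dialog fields will depend on your StudioAssist version, but conceptually the entry is just a name plus the SSE URL that mcp-proxy exposes. A rough sketch (the field names here are illustrative, not the exact StudioAssist schema; match them to what the MCP configuration dialog shows):

```json
{
  "chrome-devtools": {
    "transport": "sse",
    "url": "http://localhost:8000/sse"
  }
}
```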

Here you can see the Chrome DevTools MCP tools:

You can then use those tools in StudioAssist, e.g.:

However, in walking through this, I ran into an error when StudioAssist tried to use the take_snapshot tool, so I wasn’t able to complete your locator generation use case. I was able to do other things like get console messages:

I will look into the issue with snapshots and report back.

5 Likes

Thank you so much for the detailed response! Yes, I mean a self-hosted LLM/SLM on a local developer machine. MCP isn't widely enabled across enterprises yet, since the standard is still evolving and security, compliance, and governance are still being worked out in larger organisations. Other than the OpenAI provider, can we use any other SLM/LLM in StudioAssist as of today?

2 Likes

You should be able to use any model that can present an OpenAI-compatible API. I just showed GPT OSS because it was what I had loaded. Here is my configuration with DeepSeek-R1-0528 selected. In both cases, I'm running the model in LM Studio, which provides an OpenAI-compatible endpoint.

Note that we have not evaluated our default prompts on OSS models, and these models tend to be more constrained than commercial models. One constraint you may run into is context length, particularly if you have lots of MCP tools loaded. You might want to prune your tools in Agent mode or use Ask mode.

4 Likes

I wanted to update this to note that SSE connections to mcp-proxy are not working in 10.4.0 and later.

However, HTTP does work. You can enable that by using this command:

mcp-proxy --transport streamablehttp --port 8080 -- npx -y chrome-devtools-mcp@latest

Note, however, that you will want to use a different server URL. Rather than /sse you will want /mcp, e.g.:
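As with the SSE setup, the server entry just needs the new URL. A rough sketch (field names illustrative, not the exact StudioAssist schema), assuming mcp-proxy is on port 8080 as in the command above:

```json
{
  "chrome-devtools": {
    "transport": "http",
    "url": "http://localhost:8080/mcp"
  }
}
```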

2 Likes

I thought the industry wanted to move away from SSE?