Will StudioAssist in Agent Mode support connecting to Chrome DevTools MCP-style servers in the next 1-2 releases to enable deeper UI inspection and automatic locator generation?
Is there a feature-request ticket or public roadmap item to enable browser-side instrumentation (Chrome DevTools MCP) to feed object capture into StudioAssist?
Thank you for the great question; this is an important topic, as many teams are exploring deeper UI inspection and more advanced object-capture workflows.
Looping in @Shin from our product team to help provide clarity on:
Whether StudioAssist Agent Mode will support connecting to Chrome DevTools MCP-style servers in the upcoming releases
Any existing feature request, internal ticket, or public roadmap item related to browser-side instrumentation for object capture
How this aligns with future enhancements for automatic locator generation
Dinesh - Is your question about whether you will be able to use a self-hosted LLM/SLM? If so, you should be able to use the Use OpenAI-compatible provider setting for StudioAssist. For example, here is my StudioAssist configured to use the GPT OSS model running in LM Studio on my MacBook Pro.
You can actually use Chrome DevTools MCP with StudioAssist today, but it requires a little extra work to configure it. You will configure it as an SSE MCP server running via mcp-proxy. Once you have mcp-proxy installed, you can start Chrome DevTools MCP through it.
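A sketch of what that invocation might look like. This assumes mcp-proxy is on your PATH and Chrome DevTools MCP is launched via npx; the exact flag names vary by mcp-proxy version, so check `mcp-proxy --help` for your install:

```
# Expose the stdio-based Chrome DevTools MCP server over SSE on a local port
# (port number is illustrative)
mcp-proxy --port 3000 -- npx chrome-devtools-mcp@latest
```

You would then point StudioAssist's MCP configuration at the resulting SSE endpoint (e.g. `http://localhost:3000/sse`, depending on the path mcp-proxy serves).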
However, in walking through this, I ran into an error when StudioAssist tried to use the take_snapshot tool, so I wasn’t able to complete your locator generation use case. I was able to do other things like get console messages:
Thank you so much for the detailed response!! Yes, I mean a self-hosted LLM/SLM on a local developer machine. MCP isn't widely enabled across enterprises yet, since the ecosystem is still evolving and security/compliance/governance are still being worked out at bigger organisations. Other than the OpenAI provider, can we use any other SLM/LLM in StudioAssist as of today?
You should be able to use any model that can present an OpenAI-compatible API. I just showed GPT OSS because it was what I had loaded. Here is my configuration with DeepSeek-R1-0528 selected. In both cases, I'm running the model in LM Studio, which provides an OpenAI-compatible endpoint.
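To make "OpenAI-compatible" concrete, here is a minimal sketch of the request any such endpoint accepts. The port reflects LM Studio's default (`1234`, configurable in its server settings), and the model id is illustrative; it must match whatever model you have loaded:

```python
import json

# LM Studio's local server defaults to this base URL (assumption: default port).
BASE_URL = "http://localhost:1234/v1"

# Standard OpenAI chat-completions payload; any OpenAI-compatible client
# library can produce this for you.
payload = {
    "model": "deepseek-r1-0528",  # illustrative id; use the name LM Studio shows
    "messages": [
        {"role": "user", "content": "Generate an XPath locator for the login button"}
    ],
    "temperature": 0.2,
}

request_body = json.dumps(payload)
endpoint = f"{BASE_URL}/chat/completions"
print(endpoint)
```

Because the wire format is the same, StudioAssist (or any OpenAI-compatible client) only needs the base URL and model name changed to switch providers.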
Note that we have not evaluated our default prompts on OSS models, and these models tend to be more constrained than commercial ones. One constraint you may run into is context length, particularly if you have lots of MCP tools loaded. You might want to prune your tools in Agent mode or use Ask mode.
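A rough back-of-the-envelope illustration of why tool pruning matters on small models. The 4-characters-per-token heuristic, the 8k context window, and the tool-schema sizes are all assumptions, not measured values for any specific model:

```python
# Every MCP tool's JSON schema is injected into the prompt, so verbose tool
# definitions eat into the context window before the conversation even starts.
def estimate_tokens(text: str) -> int:
    # Common rough heuristic: ~4 characters per token for English/JSON.
    return len(text) // 4

CONTEXT_WINDOW = 8192  # e.g. a small OSS model (assumption)

# Simulate 30 loaded MCP tools, each with a ~1000-character schema.
tool_schemas = ["{...}" * 200 for _ in range(30)]

tool_budget = sum(estimate_tokens(s) for s in tool_schemas)
remaining = CONTEXT_WINDOW - tool_budget
print(tool_budget, remaining)
```

With numbers like these, the tool definitions alone consume most of the window, leaving little room for the actual task and the model's response, which is why pruning unused tools (or switching to Ask mode, which loads none) helps.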