Your MCP Tools Deserve Better
Jason Sich
When adding an MCP server to an existing mature application, developers face a choice: whether or not to wrap the existing API. A quick Google search will present you with the dichotomy of "how to re-use your existing API" or "why you don't want to wrap an API." Wrapping an existing API certainly has its advantages, but there are some important considerations to be aware of before continuing down that path.
Wrapping as a way to start
An MCP Server may seem like a complicated new way to expose your API to an LLM. Why not just add an MCP wrapper over an existing API? In practice this works fine as long as the LLM can make sense of what the tools do, what the inputs and outputs are, and how to chain multiple tools together.
It can actually be a great starting point. An existing API can provide documentation, implementation, authentication, and authorization for a new MCP Server. That's a pretty big win. By leveraging this foundation, you can quickly get a feel for what it's like to interface with your application via a conversation with an LLM (which is amazing, by the way). You can now give this new interface to your users, so that they can experience the feeling of talking to their very own Jarvis.
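To make the wrapping approach concrete, here is a minimal sketch of what a thin tool wrapper over an existing API might look like. The `ToolDefinition` shape, the `list_blog_posts` tool, and the `listPostsFromApi` helper are all illustrative assumptions, not a real SDK or endpoint:

```typescript
// Hypothetical sketch: exposing an existing REST operation as an MCP-style tool.
// The ToolDefinition shape and listPostsFromApi helper are illustrative.
interface ToolDefinition {
  name: string;
  description: string;
  inputSchema: { type: "object"; properties: Record<string, unknown> };
  handler: (args: Record<string, unknown>) => Promise<string>;
}

// Stand-in for your existing API layer (HTTP client or service object).
async function listPostsFromApi(limit: number): Promise<string[]> {
  return ["Post 1", "Post 2"].slice(0, limit);
}

const listPostsTool: ToolDefinition = {
  name: "list_blog_posts",
  description: "Lists the most recent blog posts.",
  inputSchema: {
    type: "object",
    properties: { limit: { type: "number", description: "Max posts to return" } },
  },
  handler: async (args) => {
    // The tool simply delegates to the existing API and serializes the result.
    const posts = await listPostsFromApi((args.limit as number) ?? 10);
    return JSON.stringify(posts);
  },
};
```

The appeal is clear: the tool body is one line of delegation, and all the real logic, auth included, stays in the existing API layer.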
What I think you'll find with this approach is that the LLM can't consistently execute on specific use cases. It gets confused about what to do, or how to do it, with the tools you've given it. This is the point where you realize that your API and your MCP Server will diverge.
The number of tools matters
Giving an LLM too many choices can result in poor outcomes. LLMs are non-deterministic, and when given too many choices they will often make the wrong one.
Recently at Rails World 2025, I attended a talk by Shopify engineers Andrew McNamara and Charlie Lee covering Shopify's Sidekick in-app agent. One of my biggest takeaways was the relationship between quality and the number of tools given to the agent. Their findings looked like this:
- 0-20 tools: clear boundaries and easy to debug
- 20-50 tools: tool boundaries become unclear, and combinations of tools start leading to undesired outcomes
- 50+ tools: multiple ways to do the same thing, making the system difficult to reason about
Another consideration is the model that you're using. Each model will handle tool calling in its own way and results may vary between models.
Tool specs are not API specs
Generating your tool specs based on OpenAPI specs can work for an initial rough draft, but it won't be sufficient for the LLM to make sense of your tools. You'll find yourself adding more information and detail to help the LLM understand how to use each tool. The tool descriptions and input schemas end up looking like mini-prompts to the LLM. Some developers even use XML for added structure, along with concrete examples of how to use the tool, directly in the descriptions.
One example of this is giving the LLM a hint about other tools it can use to accomplish its task. For example, in a create_blog_post tool, you might guide the LLM to use other tools for gathering information:

```typescript
// ...
authorId: z
  .string()
  .describe(
    "The id of the author for the blog post. Use the get_authors tool for a listing of valid authors with their ids."
  ),
// ...
```

Outputs may vary
Guiding the LLM doesn't stop at inputs. Sometimes you will want to include additional information in the output that you would not include in a typical API response.
Like inputs, outputs can include mini-prompts to help the LLM understand what to do next. It can also be wise to include information that would otherwise require additional tool calls to obtain. This may not follow REST best practices, but it optimizes LLM performance by eliminating additional tool calls.
Different Use Cases
The majority of APIs today are RESTful. They provide discrete operations (POST, PUT, etc.) on discrete resources. The vast number of combinations of operations and resources in REST APIs gives developers the flexibility they need to create a wide variety of applications.
For an LLM, these combinations represent choice and complexity, which work against it. Creating a limited tool set tailored to your users' specific use cases yields the best results.
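One way to tailor a tool to a use case is to collapse several REST operations into a single workflow tool. The sketch below is illustrative: the `createDraft`, `setTags`, and `publish` functions stand in for existing API operations, and `publishBlogPostTool` is a hypothetical name:

```typescript
// Hypothetical sketch: one use-case tool replacing three REST operations.
// Instead of exposing create_draft, set_tags, and publish as separate tools
// (and hoping the LLM chains them correctly), a single publish_blog_post
// tool performs the whole workflow internally.

// Stubs standing in for existing API operations.
async function createDraft(_title: string, _body: string): Promise<string> {
  return "post-123";
}
async function setTags(_postId: string, _tags: string[]): Promise<void> {}
async function publish(_postId: string): Promise<void> {}

async function publishBlogPostTool(args: {
  title: string;
  body: string;
  tags: string[];
}): Promise<string> {
  // The ordering and error handling live in code, not in the LLM's plan.
  const postId = await createDraft(args.title, args.body);
  await setTags(postId, args.tags);
  await publish(postId);
  return `Published post ${postId}`;
}
```

The LLM now makes one choice instead of three, and the sequencing can no longer go wrong.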
Technical Considerations
The official MCP specification is constantly evolving. New features are being developed to help with usability, but may be incompatible with your API implementation. Currently the following concepts are not yet mainstream, but they are worth keeping an eye on:
- Sampling - Allows a tool call to outsource work back to the LLM. This might look like a tool asking the LLM to generate ideas for tag names.
- Progress reporting - Allows the MCP Server to send progress notifications back to the calling LLM so that it can notify the user of progress for long-running tool calls.
- Cancellation - Allows for the LLM to request the cancellation of a tool call.
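As a taste of what progress reporting involves, here is a sketch of the JSON-RPC notification shape the MCP specification describes for it (field names reflect the spec at the time of writing and may evolve; the `makeProgress` helper is illustrative):

```typescript
// Sketch of the MCP progress notification shape (per the current spec).
interface ProgressNotification {
  jsonrpc: "2.0";
  method: "notifications/progress";
  params: {
    progressToken: string | number; // echoed from the request's _meta
    progress: number; // amount of work completed so far
    total?: number; // optional total, enabling a percentage display
    message?: string; // optional human-readable status
  };
}

// Illustrative helper a server might use while a long-running tool executes.
function makeProgress(token: string, done: number, total: number): ProgressNotification {
  return {
    jsonrpc: "2.0",
    method: "notifications/progress",
    params: { progressToken: token, progress: done, total, message: `${done}/${total}` },
  };
}
```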
Summary
Wrapping an existing API is a quick way to make progress when developing an MCP Server. It allows developers to quickly get an idea of what value an MCP Server could offer their users. However, your existing API will not offer an optimized experience when dealing with an LLM. You'll need to figure out how users will interact with it, and then build the right MCP tools for the job.