Read the latest in our blog: Our Approach to Leveraging and Augmenting Model Context Protocol Learn more
Brandon Stein - Software Engineer at Thread AI
William Long - Software Engineer at Thread AI
September 30, 2025
Our core mission has always been to enable AI to seamlessly connect and collaborate with the world around it. This is why our Lemma platform was built from the ground up with the infrastructure to transform any third-party call into an AI-ready tool, providing a powerful foundation for building advanced, agentic workflows.
We've been closely following the evolution of Model Context Protocol (MCP) and its potential as a unifying standard for integrating external tools with AI models. We are also keenly aware of the ongoing security discussions and vulnerabilities associated with its current state. Recognizing both its promise and its challenges, we've made a deliberate decision to supplement our existing capabilities by integrating MCP into our platform. This integration isn't about relying solely on MCP to build agentic workflows; it's about giving developers another powerful tool in their arsenal, one that enhances the connective tissue and intelligent operational backbone of your systems, all while maintaining our commitment to security and robust infrastructure.
MCP's Value Proposition in an AI-Powered Ecosystem
LLMs and AI applications do not inherently connect with external data sources like APIs, databases, or cloud storage, and are limited in their ability to interact with the real world. To bridge this gap, tools were introduced, allowing models to call external functions to perform actions or fetch information. However, this approach often necessitates writing unique, bespoke integration code for each specific tool call, creating complexity at scale.
Model Context Protocol (MCP) is an elegant solution to this shortcoming of tools. It acts as a universal communication layer, a standardized "language" that allows any language model to interact with any external tool. This open-source standard provides a clear format for describing tools and their capabilities, as well as a defined process for how an AI can call and interact with them. By creating a unified interface, MCP dramatically simplifies development, enhances interoperability, and enables AI to effectively interact with the real world.
The protocol works by providing the AI agent with a set of tools, each with a tool schema, a structured description of a tool's capabilities and required parameters. When the AI determines it needs to use a tool to complete a task, it generates a tool call, a structured request that specifies the tool and the arguments it wants to use. This request is then passed to an external system, which is responsible for executing the tool call and returning the result to the AI. This clear separation of concerns allows the AI to focus on reasoning and planning, while a dedicated system handles the execution of actions.
Secure and reliable connections to MCP Servers are a powerful addition to our Function Registry, giving our library of APIs and functions a significant boost in capability. This unique access is another powerful accelerator to moving past simple, one-off applications and enter into a new era of tool-calling and agentic AI. By leveraging our Function Registry alongside integrations with MCP servers, we can seamlessly facilitate complex actions within our workflows, enabling real-world scenarios such as:
Intelligent Root Cause Analysis & Security Triage
Automate the critical work of security analysts by having an iterative AI agent triage complex security alerts. The agent uses tools and MCP connections to query and correlate security logs across systems like CrowdStrike and Okta, building a full picture of an incident. It generates a comprehensive report, suggests a fix, and then waits for Human-in-the-Loop review and sign-off before executing any final compensating actions, such as isolating a compromised machine.
Context-Aware Service Operations
Supercharge your service desk or operations team by enabling an AI agent to perform complex, multi-step actions. For example, the agent can use information fetched via our Function Registry to retrieve a customer's usage metrics from a private API, update their support ticket in Jira or Asana, and then immediately post a summary of the situation and the planned resolution into a dedicated Slack channel.
Dynamic Document and Content Workflow Automation
Automate the creation and management of complex business documents. An AI application can pull up-to-the-minute project status from Notion or Asana, retrieve confidential data from a secure cloud storage, and then generate a tailored, executive-ready report that's automatically distributed or filed.
While Model Context Protocol offers a powerful, standardized connectivity solution, its design introduces new security challenges that are fundamentally different from a regular API call. A standard API call is a direct, isolated request. In contrast, an AI agent interacting via MCP enables autonomous, multi-step workflows that require the agent to access a selection of sensitive, internal data sources to build context and execute a request. This means the agent's broad access to internal data and the autonomous nature of its decision-making creates a risk of data leakage and misuse. Furthermore, this autonomous execution often leads to a lack of observability into the agent's decision-making process, increasing the risk that the AI could send incorrect or malicious information during tool calls without clear human review. This combination of broad data access and autonomous execution creates a significant attack surface for potential data leakage or misuse.
One possible attack vector for a malicious actor through MCP is tool poisoning. This is a sophisticated form of prompt injection where an MCP server returns malicious instructions in a tool description or a malicious response from a tool call which seems innocent based on its name and tool schema. This can cause the AI to take unintended actions that could expose sensitive data, delete important documents, or even manipulate external systems. This is particularly dangerous in autonomous systems, as a compromised tool could operate undetected, without any human oversight. An article from Docker identified a study that found prompt injection flaws in 43% of MCP servers, and that another 2025 study found that 22% of servers (of the 1899 tested) exhibit file leakage vulnerabilities.
Another significant challenge when using MCP is the lack of observability. In complex workflows with multiple tool calls, it can be difficult to trace the sequence of events and understand why the AI chose a specific tool or what data was used in the process. This "black box" problem can make debugging difficult and create a significant hurdle for auditing and compliance. When a tool call fails, for instance, it can be a challenge to determine if the issue was with the AI's reasoning, the tool's implementation, or a problem with the data it was provided.
Finally, insufficient control over the AI's context can lead to unpredictable outcomes. While the protocol standardizes the communication, it does not guarantee that the data provided to the tools is accurate or complete. An AI could misinterpret a user's request and provide a tool with incorrect parameters, leading to unintended and potentially harmful actions. This is especially true for tools that interact with real-world systems, where a simple error could have serious consequences. Robust safeguards are therefore critical for ensuring data integrity and reliable, predictable performance in any real-world deployment.
Prior to the genesis of MCP, we saw the necessity of secure, controlled automation as the critical direction for AI-powered workflows. Inside our Function Registry, we established the foundational capability by creating the ability to configure OpenAPI and HTTP functions as Tools. This process allows developers to take complex APIs and define simplified Tool Configurations, creating a simplified schema that an LLM can understand and call reliably without needing to see the full, raw API specification or credentials.
Thread AI's integration with MCP Servers is a natural evolution of this strategy, building directly upon our existing foundation of extensibility and control. The Lemma platform acts as the middleware layer, providing you with complete control and visibility over the entire tool-calling process. This design allows you to build custom, multi-step workflows that go beyond a simple tool call. For example, you can insert custom logic to ensure accuracy and security for critical tasks before they are executed. Lemma's comprehensive observability also provides you with a clear, traceable record of every tool call or interaction made within a workflow, making it easy to debug and audit.
MCP complements our proprietary Function Registry by extending its reach and enhancing its utility. While the Function Registry natively incorporates an organization's unique business logic and connections to its systems, our new MCP integration allows users to securely connect to a remote MCP server and import any available tools directly for use in our composable workflow builder.
This process is faster than manually defining each tool and allows you to rapidly integrate a large library of external functions. However, for those who need more control, we still support fine-grained tool definition for OpenAPI and HTTP functions, enabling intricate customization of every parameter and input. Once imported or defined, these tools become a native part of your workflow, ready to be used and combined with other steps. This process gives you the ultimate control over which tools are integrated, providing a high degree of usability and security.
Ultimately, our goal is to make the powerful capabilities of external tools accessible to everyone without sacrificing ease of use, security, or observability. Lemma provides a simple interface that allows you to effortlessly browse, test, select, and integrate MCP tools into your workflows with just a few clicks. This powerful integration ensures that developers can focus on building innovative solutions rather than spending time on complex API management.
We recognize that connecting AI to external services comes with inherent risks. Our commitment to security is paramount, which is why we've implemented a multi-layered strategy to protect our users from potential threats.
Our primary security measure is the controlled, user-approved integration of tools. Unlike some platforms that allow for dynamic, on-the-fly tool execution, our system requires that a tool be explicitly vetted and accepted by a user before it can be used in a workflow. This approach ensures that tool names and descriptions are immutable outside of a controlled, user-approved process, providing an extra layer of security and predictability. The tool call itself is made by Lemma, not the LLM, which means your sensitive data (credentials, PII, etc.) is completely obfuscated from the LLM. This leaves you in full control of your data within the Lemma Console.
To further enhance security and accuracy, you can integrate handoff states with your tool calls, pausing the worker run and waiting for human intervention. This allows a team member to vet what tool calls are being made and control what data is sent to external APIs, acting as a critical human-in-the-loop safeguard for sensitive operations. The platform also offers secure, native credential management, where secrets are encrypted and stored outside of your workflow logic. This is complemented by fine-grained access controls, allowing you to restrict which users or groups can access specific workflows or authorize connections to an MCP server. This level of control allows security teams to create policies around the use of tools within workflows, ensuring strict guardrails for critical workflows and flexible rules for others.
Furthermore, we support OAuth 2.0 Dynamic Client Registration, which is rapidly becoming the industry standard for MCP servers. This protocol enables a BYOI (bring your own identity) approach, automatically redirecting users to the MCP server's authorization server to log in and frictionlessly grant access. Coupled with our secure credential management, this ensures secure and controlled authorization to remote services, a stark contrast to less secure methods like plain-text API keys. By managing this complex authentication process, Lemma provides a robust and secure connection to external tools, giving you peace of mind that your data and systems remain protected.
This robust foundation unlocks sophisticated agentic patterns, allowing you to embed advanced reasoning and context directly into your AI workflows.
Lemma is now positioned to support powerful architectures like:
Planning (Orchestrator-Worker)
A central planner LLM can break down a complex goal into smaller subtasks and delegate them to specialized worker agents. These workers then use MCP tools to execute each specific task, from drafting a social media post to creating a project plan.
Example:
Plan a marketing campaign for the new feature.
Routing
An initial LLM can classify a user request and route it to the most appropriate downstream agent or workflow. For instance, a customer query about an invoice can be routed to a dedicated billing agent, while a technical question is sent to a support agent.
The facilitation of these agentic patterns, enabled by our proprietary integration of Model Context Protocol, coupled with our Function Registry and support for Tool Configurations, represents a significant strategic advancement for the Lemma platform. By combining the universal interoperability of MCP with a robust, security-first implementation, we are laying the critical infrastructure to fulfill our vision of enabling enterprise-grade agentic AI. This move empowers developers to build more impactful intelligent workflows, paving the way for true enterprise AI transformation.
Interested in learning more? Let's connect.
Compliance
CJIS
GDPR
HIPAA
SOC 2 Type 2