
Orchestrating the Unpredictable

A Bounded Approach to
Non-deterministic Workflows at Scale

Martin McRoy – Head of Engineering at Thread AI

June 2, 2025


Today we are excited to share our first engineering blog post. We are offering a look at how we laid the foundational building blocks that enabled us to go from zero to one with our flagship product, Lemma – serving enterprises in regulated environments amid the frenzy of interest in agentic, AI-powered workflows.

At Thread AI, we value intentional, deep research and learning from prior art as key tenets of our engineering philosophy. These tenets became particularly important given the perceived urgency to rush out new interfaces and protocols for AI. Our approach emphasizes re-use, re-imagination, and building on battle-tested primitives, with a focus on durability, extensible execution modalities, a deep understanding of low-trust vs. high-trust code, and, above all, simplicity and intuitive design. This philosophy guided the key technical decisions behind Lemma.

This post explores three of those foundational decisions: choosing a Domain-Specific Language (DSL) approach based on an extended Serverless Workflow specification, implementing a robust Function Registry to securely manage business logic, and integrating AI capabilities in a traceable and controlled manner. Over the upcoming months, we will be sharing technical deep dives into each of these components.



Choosing the DSL Route:

Extending Serverless Workflow for Collaboration and Observability
Expressiveness for Traditional and AI Needs

We fundamentally believe that at its core, building reliable workflows – agentic or otherwise – is an orchestration problem. When building Lemma, we sought a framework that allowed us to natively combine services, compose compounding workflows, and focus on fundamentals while adapting to evolving paradigms.

Our research led us to the CNCF Serverless Workflow (SWF) specification – a vendor-neutral, declarative language for describing stateful, event-driven workflows. It provides a fantastic starting point for codifying intricate processes. However, for an AI-first, agentic world, we needed more. We made the strategic decision to extend SWF, adding critical AI building blocks natively into the orchestration layer. This includes primitives for human-in-the-loop (HITL) actions, compensating actions (to potentially undo autonomous decisions), richer authentication integration, and AI-native tool-calling methodologies. This extended DSL allows us to express not only traditional control flow but also the nuanced interactions required by modern AI systems, without forfeiting reliability guarantees.
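To make this concrete, here is a sketch of what such an extended workflow could look like. It is written in the spirit of the CNCF Serverless Workflow YAML format, but the AI-specific state types and fields (aiAgent, humanReview, compensateBy) are hypothetical, illustrative names – not Lemma's actual DSL syntax:

```yaml
# Illustrative sketch only: aiAgent/humanReview/compensateBy are
# hypothetical extensions, not Lemma's actual DSL.
id: invoice-triage
specVersion: "0.8"
start: ExtractFields
states:
  - name: ExtractFields
    type: operation                 # standard SWF operation state
    actions:
      - functionRef:
          refName: parseInvoice
    transition: ClassifyWithAI
  - name: ClassifyWithAI
    type: aiAgent                   # hypothetical AI-native state
    model: classifier-v2
    tools: [lookupVendor, flagAnomaly]  # tools resolved via the Function Registry
    transition: HumanApproval
  - name: HumanApproval
    type: humanReview               # hypothetical HITL primitive
    onReject:
      compensateBy: UndoClassification  # compensating action to revert the AI step
    end: true
```

The point of the sketch is the shape, not the field names: deterministic states, AI states, human gates, and compensation all live in one declarative document with the same reliability guarantees.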

Enabling Process Visualization, Understanding, and Collaboration

The structured nature of our extended SWF DSL naturally supports process visualization. Explicitly defined states, transitions, and even AI-specific steps allow us to generate graphical representations of the workflow. This clear visualization is crucial for debugging, auditing, and communicating complex agentic behaviors. Lemma's platform acts as the runtime engine bringing the spec to life, while our SDK and UI enable diverse users to leverage this visual understanding to test, observe, scale, and manage these powerful workflows.

Lemma Worker

By choosing a specification-based DSL, we explicitly prioritized observability and collaboration. The choice is often between a DSL and a code-first approach. Code-first often caters primarily to developers. However, we believe AI workflows are fundamentally cross-functional. They involve diverse personas: data scientists selecting models, systems integrators curating tools, subject matter experts reviewing AI decisions, and business users consuming the output. A DSL provides a common, relatively readable format that allows these varied stakeholders to construct, observe, and maintain workflows together. Components best suited for pure code can still be codified as such and invoked by the workflow, but the overarching process remains clear.

| Customer Role | How the DSL Addresses Their Needs |
| --- | --- |
| Data Scientist / Engineer | Standardized definition of complex data pipelines |
| Security Engineer | Embed & enforce mandatory security/compliance steps |
| Product Manager | Formalize business logic & rules into executable spec |
| Business Analyst | Clear process modeling, analysis, and documentation |
| Operations Engineer | Structured view for monitoring & troubleshooting execution |
| Developer | Implement business logic orchestrated by the DSL |


The Function Registry:

Securely Executing Federated Business Logic

At the core of all Agents and AI Workflows are AI models and business logic, often distributed across various services. Lemma needed a way to easily and safely embed these logical building blocks. Our answer is a powerful Function Registry designed to securely embed business logic across first and third party APIs natively into processes.

Protocol Agnostic, Secure, and Stateless by Design

Our Registry allows builders to operationalize powerful existing services by importing their existing API schemas (gRPC, OpenAPI, and soon GraphQL). This avoids creating intermediate proxy servers (like those potentially used with MCP – Model Context Protocol) which can diverge from the source, add maintenance overhead, and introduce security risks. Functions in our registry are stateless by design; state (or Context/Memory) is explicitly managed as a separate Lemma primitive, bound to a single workflow run. This clear separation enhances observability and enables strict data governance, preventing issues like context bleeding between runs (e.g., an agent acting on Patient A accidentally using data from Patient B). Every function registration includes its own resource credential configuration (supporting API Keys, OAuth, Service Accounts, AWS Signatures, IdP integrations like Okta), ensuring that workflows operate within the exact security posture defined for each underlying tool or service.
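Conceptually, a registry entry pairs an imported schema with its own credential configuration and an explicit statelessness guarantee. The sketch below is illustrative only – the field names are hypothetical, not Lemma's actual registration API:

```yaml
# Illustrative only: field names are hypothetical, not Lemma's API.
function:
  name: lookupVendor
  source:
    type: openapi
    schemaUrl: https://api.example.com/openapi.json  # imported as-is, no proxy server
    operationId: getVendor
  credentials:
    type: oauth2              # per-function security posture
    provider: okta
    scopes: [vendors.read]
  stateless: true             # no memory here; Context is a separate, run-scoped primitive
```

Because the credential configuration travels with the function, any workflow that invokes it automatically inherits the exact security posture defined at registration time.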

Individual Functions – and by extension, "AI Tools" – adhere to strict version control within the Registry. Publishers can update functions and create new versions without breaking existing workflows that depend on older versions. This provides granular control and stability, allowing for phased rollouts and reliable operations, potentially offering more robustness than simpler versioning schemes. While users can bring their own functions, Lemma populates the registry with pre-built, secured functions. Access to all functions is governed by Lemma's authentication and authorization system, ensuring workflows only execute logic they are permitted to, leveraging the granular credentials defined during registration.
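In a workflow, a caller can then pin an exact function version, so publishers can ship new versions without breaking existing dependents (again, hypothetical syntax in the spirit of the SWF action format):

```yaml
# Hypothetical version-pinned invocation.
actions:
  - functionRef:
      refName: lookupVendor
      version: "1.4.0"    # this workflow keeps using 1.4.0 even after 2.x is published
```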

Function Registry
Enabling AI Tool Calling Across Protocols

Beyond Determinism:

Controlled AI

Lemma is designed to handle both deterministic and non-deterministic workflows, as most real-world use cases involve a mix. Our approach focuses on enabling powerful AI capabilities while ensuring they remain bounded, constrained, and observable.

AI Agents: Secure Tool Use and Planning

True agentic behavior relies heavily on planning and securely executing a number of different “tools”. Our Function Registry provides the foundation for secure "tool use," ensuring agents invoke external services or functions using the correct, securely managed credentials and versions.

The extended DSL allows defining states where an agent might perform dynamic routing or tool selection based on its inputs and goals. We rely on the use of common mapper patterns when defining AI configurations which can be applied to any function in our Function Registry. Mappers allow us to define function defaults as well as use simplified request bodies optimized for agentic use. This pattern is effective for both rapid iterations using existing business logic as well as reducing the lift of maintaining entirely separate servers as is done in the MCP pattern. Given that this pattern is applied at the function level, this allows us to mix and match functions that are of different source protocols (e.g. gRPC and REST) across different models for AI “tool use.”
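Conceptually, a mapper sits between the simplified, agent-facing request shape and the full underlying API call, filling in defaults. The sketch below is illustrative – the field names are hypothetical, not Lemma's actual configuration schema:

```yaml
# Illustrative mapper: an agent supplies only vendorName; the mapper
# expands it into the full request with defaults applied.
aiConfiguration:
  function: lookupVendor        # source protocol (gRPC, REST, ...) is irrelevant here
  mapper:
    defaults:
      region: us-east-1
      includeInactive: false
    agentSchema:                # simplified body exposed to the model
      vendorName: string
    toRequest:                  # expansion into the real request shape
      query: "${ .agentInput.vendorName }"
      region: "${ .defaults.region }"
```

Because the mapper is attached at the function level, the same pattern applies uniformly whether the underlying function was imported from a gRPC schema or an OpenAPI one.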

AI Configuration on a Function in the Registry
Dynamic Workflows and Bounded Non-Determinism with HITL Controls

In Lemma, workflow non-determinism is typically bound to specific language-model-based States, ensuring clear demarcation of control flows. Traceability is paramount – we explicitly build for a world where all actions, especially AI-driven ones, can be traced and audited. Our extensions to SWF allow for ergonomic and observable ways to handle common AI patterns like dynamic tool selection based on context or routing flows based on unstructured inputs (common in triage, support, etc.), replacing brittle RPA or complex rules engines.
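A routing flow over unstructured input might look like the following sketch, where the non-determinism is bounded to a single language-model state whose output feeds an ordinary deterministic switch (state types and fields are hypothetical, illustrative names):

```yaml
# Illustrative only: the llmClassify state type is hypothetical.
states:
  - name: TriageTicket
    type: llmClassify           # the only non-deterministic state
    model: triage-model
    output: category            # traced and audited like any other state output
    transition: RouteTicket
  - name: RouteTicket
    type: switch                # deterministic SWF-style routing from here on
    dataConditions:
      - condition: "${ .category == \"billing\" }"
        transition: BillingFlow
      - condition: "${ .category == \"outage\" }"
        transition: IncidentFlow
    defaultCondition:
      transition: HumanTriage
```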

AI Necessitates Control

We've built human-in-the-loop support directly into our extended SWF DSL as a core primitive. Workflows can explicitly pause for human review, approval, or intervention. Compensating actions provide mechanisms to revert autonomous steps if needed. Our emphasis on traceability, the stateless nature of functions, the explicitly scoped Context primitive, and per-function credential management within the Registry all act as critical guardrails, ensuring compliance and security, particularly in regulated industries where data governance and auditability are non-negotiable.

Handoff Inbox

The Bounded Advantage:

Concluding Thoughts

Building Lemma required intentional choices rooted in our engineering philosophy of leveraging prior art while innovating for the future of AI in the enterprise. By extending the Serverless Workflow DSL, we created an expressive, observable, and collaborative foundation. Our secure, stateless Function Registry allows enterprises to safely federate and operationalize distributed logic and AI tools. Finally, our approach to AI integration emphasizes control, traceability, and human oversight through features like bounded non-determinism and native HITL support.

These foundational decisions allow Lemma to operate effectively across complex deployment patterns (multi-cloud and on-prem) and meet the stringent requirements of regulated industries, providing a robust platform for building the next generation of intelligent, reliable, and secure workflows. We look forward to sharing more deep dives into specific areas, including our deployment methodologies, in future posts.


If any of these problems excite you, we'd love to hear from you. We are just getting started.

Come build with us.
