Jen Hilibrand - Chief of Staff at Thread AI
September 16, 2025
Enterprises now face a fragmented landscape populated by everything from foundational models-as-a-service, such as OpenAI's GPT-5, to highly specialized vertical applications that address singular business needs or specific workflows. While this presents considerable opportunity, it also introduces a strategic challenge: traditional procurement models can be ill-equipped to evaluate and integrate these disparate, modular components effectively.
In response to this complexity, a distinct architectural approach is gaining prominence: AI orchestration platforms. This category of software provides a unified infrastructure layer to connect and manage workflows across different models, internal systems, and third-party APIs for complex business process automation. More importantly, these platforms are reshaping the long-standing "build versus buy" dilemma. Enterprises are no longer faced with a simple binary choice between a costly, resource-intensive internal build and a generic, off-the-shelf purchase. Instead, an alternative model is emerging—one that allows an enterprise to retain ownership over its strategic assets, such as proprietary business logic and data, while leveraging a standardized infrastructure for execution, governance, and security.
This framework is designed to provide clarity in this evolving landscape, offering a structured approach to the evaluation and procurement of modern AI systems.
Categories of AI Software and Tools
Disrupting Procurement Cycles
The Build versus Buy Dilemma
Best Practices for Procuring AI-Native Software
Specific AI models and models-as-a-service that are useful in singular contexts and perform discrete tasks. Models can be created from scratch, “fine-tuned”, or used off the shelf. Off-the-shelf models can be open-source, meaning anyone can download and use them, or closed-source, meaning you interact with them as a black box and do not have access to the internals.
Investments for companies at this layer are complicated and not made in isolation. Foundational model providers offer functionality for individual use, but require engineering teams to integrate and maintain them over the longer term for business process automation.
OpenAI’s GPT-5
Large Language Model as a Service
Google’s Speech-to-Text Service
Transcription Model as a Service
YOLOv11
Open Source Object Detection Model
Infrastructure supporting individual components required for all AI workflows, including AI data storage and management. Infrastructure can include specific hardware and data infrastructure for training and developing models, technologies for packaging and executing models, model evaluation and monitoring frameworks, and model orchestration technologies.
Investments at this layer require dedicated engineering organizations or contractors to maintain and integrate with existing systems.
PGVector
Vector Database Extension to Postgres
Weights & Biases
ML Ops Developer Platform
Patronus AI
LLM Evaluation Metrics Platform
Dedicated user interfaces and applications that do targeted AI tasks and specific workflows. They usually do one or a few things and do them very well. There often is limited room for expansion into other use cases within these applications.
Investments at this layer can be one of the best options for companies without a deep engineering organization. However, a strategy of acquiring a new application from a different provider for every desired AI workflow can easily become expensive and organizationally inefficient.
Otter AI
Meeting Note Transcription and Summarization Software
Synthesia
Text-to-Video Platform
Cursor
Code Completion Tool for Developers
Extensible AI frameworks that enable a wide variety of AI workflows and applications. Platforms can encompass combinations of models and services, infrastructure, and applications into one cohesive system. The level of technicality required of builders can vary, but implementation often requires a degree of engineering literacy.
While some platforms offer white-glove services, their underlying architecture is what truly matters. Making the right choice here is critical, as a purpose-built platform provides the inherent governance, observability, and enterprise-grade security that homegrown solutions often struggle to achieve.
Lemma by Thread AI
AI Orchestration Platform for Intelligent Process Automation
Superblocks
Low-Code Platform to Build Apps for Customer Operations
LangGraph by LangChain
Framework for Building and Monitoring on Top of LLMs
Artificial intelligence is rapidly reshaping the landscape of software procurement. Historically, enterprises in critical industries such as financial services have engaged in procurement cycles spanning 18 to 24 months, a lengthy process that involves multiple stages such as defining requirements, vendor selection, evaluation, and contract negotiation. This cycle is now being disrupted by the rise of AI-native software solutions. These tools introduce new variables that enterprises must consider at every step of the procurement process, and evaluate in a timely way to keep up with the accelerated pace of innovation.
The first step in this procurement process involves understanding the current internal technical landscape and engaging key stakeholders across business units to define the problem space. This can mean identifying technical gaps and assessing if a problem can be solved internally (built) or requires an external vendor (bought). This process culminates in the creation of a comprehensive set of requirements for what a solution would entail.
Once a comprehensive set of requirements has been established, an enterprise begins the research and identification stage for solutions. This often involves RFI, RFP, or RFQ processes; feature analysis and comparison across vendors; and an analysis of a vendor’s ecosystem: are they a vertical vendor with specific domain expertise, or a horizontal vendor that can offer comprehensive best practices?
Diligence on vendors can vary greatly depending on a vendor’s stage and sector. This can include assessing vendor financial stability (via financial statements, credit checks, or funding announcements), cybersecurity posture (via security certifications, audits, security controls and policies), and references (via current customers, investors, or other connections).
Evaluating solutions can include validating functionality against defined requirements (often done via a Pilot or Proof-of-Concept structure), assessing integration capabilities with existing infrastructure, and performance benchmarking in a controlled environment.
Negotiated terms between the enterprise and the vendor are not merely a commercial discussion, but a critical risk management exercise, where the terms of the agreement must reflect the key considerations with respect to data security, regulatory compliance, system reliability, and more. This means gaining alignment on key SLAs with respect to uptime, performance, and resolution.
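To make the SLA discussion concrete, a small back-of-the-envelope calculation shows how an uptime percentage translates into a monthly downtime budget. The figures assume a 30-day month and are purely illustrative, not tied to any specific vendor contract:

```python
# Rough downtime budgets implied by common uptime SLAs,
# assuming a 30-day (43,200-minute) month.

MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200

def downtime_budget_minutes(uptime_pct: float) -> float:
    """Maximum minutes of downtime per month allowed by an uptime SLA."""
    return MINUTES_PER_MONTH * (1 - uptime_pct / 100)

for sla in (99.0, 99.9, 99.99):
    print(f"{sla}% uptime -> {downtime_budget_minutes(sla):.1f} min/month")
```

The jump from "three nines" to "four nines" shrinks the allowed downtime from roughly 43 minutes to roughly 4 minutes per month, which is why each additional nine tends to carry a disproportionate price tag in negotiations.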
This is often a complex and challenging phase, which requires integrating new software with an existing IT environment and rolling it out to end-users. This requires significant planning with respect to sensitive data migration and security best practices. Implementation is commonly rolled out in three patterns: phased rollout, upfront rollout, or a parallel run (where a new system and old system run concurrently).
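A parallel run, for example, can be sketched as serving traffic from the legacy system while recording where the new system would have answered differently. Both "systems" below are hypothetical stand-in functions, not real integrations:

```python
# Sketch of a parallel run: the legacy system stays authoritative while
# disagreements with the new system are logged for review before cutover.

def legacy_system(ticket: str) -> str:
    # Stand-in for the existing system's routing decision.
    return "routed:billing" if "invoice" in ticket else "routed:general"

def new_ai_system(ticket: str) -> str:
    # Stand-in for the new system under evaluation.
    return "routed:billing" if ("invoice" in ticket or "charge" in ticket) else "routed:general"

def parallel_run(tickets):
    """Return (ticket, legacy answer, new answer) for every disagreement."""
    disagreements = []
    for t in tickets:
        old, new = legacy_system(t), new_ai_system(t)
        if old != new:
            disagreements.append((t, old, new))
    return disagreements

print(parallel_run(["invoice overdue", "password reset", "duplicate charge"]))
```

Reviewing the disagreement log, rather than raw output volume, gives the implementation team a concrete cutover criterion: switch over once disagreements fall below an agreed threshold, or once each remaining disagreement is judged an improvement.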
The emergence of powerful AI-native tools is disrupting the decades-old procurement paradigm of build versus buy. As more modular AI platforms have emerged, companies can reduce their reliance on off-the-shelf software and deploy solutions without heavy investments in technology, infrastructure, and talent.
The AI vendor landscape is highly fragmented, with numerous companies offering overlapping solutions across various industries and use cases. This creates challenges for enterprises trying to research and select the best solution.
With new functionality comes new risk factors, and enterprises now need to add data leakage, model poisoning, model bias, model explainability and interpretability, model IP, NHI security, and other data security concerns to their diligence checklist.
Clear PoC design can improve the fidelity of the procurement process. This means starting with clear objectives, ensuring data quality and availability, the involvement of key stakeholders, the choice of the right use cases, and the incorporation of risk and performance assessments.
Value-based or usage-based pricing has shifted the focus from fixed, upfront costs to pricing models that align more closely with the value derived from the software. This shift allows enterprises to pay for what they use, and makes it easier to scale costs based on adoption and performance. As a result, contracts are becoming more flexible and dynamic, leading to more tailored agreements.
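As a rough illustration of how usage-based pricing changes the cost calculus, the sketch below compares a hypothetical flat license fee against a hypothetical per-task price; every number here is invented for illustration:

```python
# Comparing a flat subscription against usage-based pricing.
# Both prices are assumptions made up for this example.

FLAT_MONTHLY = 5000.0   # assumed fixed subscription fee per month
PER_TASK = 0.05         # assumed price per automated task

def usage_cost(tasks_per_month: int) -> float:
    return tasks_per_month * PER_TASK

def cheaper_model(tasks_per_month: int) -> str:
    """Which pricing model costs less at a given monthly volume."""
    return "usage-based" if usage_cost(tasks_per_month) < FLAT_MONTHLY else "flat"

break_even = round(FLAT_MONTHLY / PER_TASK)  # tasks/month where the two meet
print(break_even, cheaper_model(20_000), cheaper_model(500_000))
```

The break-even volume is the number worth modeling before negotiation: below it, usage-based pricing lets an enterprise pilot cheaply; above it, a flat or committed-use agreement usually becomes the better deal.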
The incorporation of AI-native software in enterprises often requires new methods of measuring ROI, with value manifesting across improvements in automations, decision-making efficiency, predictive insights, and more. This means enterprises need to track not only tangible outcomes such as cost savings or revenue increases, but also intangible benefits like enhanced customer satisfaction or risk mitigation.
Choosing to build requires large scale investments in talent, technology, and infrastructure in order to connect and orchestrate AI across different products, platforms, and cloud providers, as well as to handle different error codes, authentication policies, and API protocols. Without the right in-house expertise, it can be difficult to ensure long-term sustainability and scalability for built solutions.
Expensive
Involves purchasing and maintaining multiple platforms and products, including both open-source and managed services.
Time Consuming
Building and maintaining these systems can be an extensive undertaking, requiring large time investments from engineering both upfront and on an ongoing basis.
Talent Intensive
Often requires specific ML and distributed-systems engineering expertise to implement these systems at scale. These hires are expensive and in high demand.

Buying forces enterprises to proxy specific logic into generic application-layer solutions that don’t compound. This means purchasing multiple service subscriptions and being forced to pay for incremental upgrades and expansions, without owning the final workflow and automation. This decision often comes with long-term consequences in terms of cost, maintenance, and flexibility.
Generic
Involves purchasing a generic application for each use case, which means proxying proprietary business logic into unspecific solutions.
Unscalable
Each automation requires its own application, which creates a considerable amount of software bloat. It also fails to compound learnings or orchestrations across applications.
Vulnerabilities
Using many generic applications can expose sensitive data in a variety of ways without the proper controls and observability, making it challenging for an enterprise to get a unified view of usage.

AI-native orchestration platforms such as Thread AI’s Lemma are breaking this decades-old paradigm of build versus buy. With platforms like Lemma, enterprises can own the proprietary business logic that powers their automations without investing heavily in talent and infrastructure. Enterprises can build custom, secure, and scalable automations without the overhead.
Enterprises often look to “Buy” outside of their core differentiators, but are left without solutions customized to their business, leading to further inefficiencies. This is where flexible and composable solutions like Lemma enable enterprises to mix and match which elements of their technology stack make the most sense to own in the long run (proprietary models, proprietary data, security controls, etc.), while still deploying cutting-edge AI-powered automations in their business.
Composability
High flexibility allows users to mix and match which elements of their stack they want to own and manage themselves, versus what they want to abstract away. This means highly customized stacks that are tailored to a business’s needs and priorities. This is the essence of our composable architecture: providing the fundamental building blocks for innovation, rather than a rigid, one-size-fits-all solution.
Compounding
Horizontal platforms that connect disparate data and systems unlock the ability for business logic to compound, meaning workflows from different functional areas can build upon each other. With Lemma, every asset that travels through the platform—a piece of data, a model, a piece of human feedback—can become a secure, re-usable component in a shared enterprise registry, making the next workflow you build faster and more powerful.
Security and Ownership
Enterprises can own their most valuable assets, their proprietary business logic and their data, while leveraging cutting edge systems to implement organizational efficiency. This is underpinned by Lemma's robust security framework, which provides the granular access controls, immutable audit trails, and human-in-the-loop oversight necessary to operate with confidence in even the most demanding regulatory environments.
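The composability principle described above can be sketched in miniature: a workflow assembled from interchangeable steps, where the enterprise keeps its own proprietary step while abstracting another away. Every step name and rule below is a hypothetical illustration, not a Lemma API:

```python
from typing import Callable, List

# Toy sketch of composability: a pipeline built from swappable steps, so
# proprietary logic stays enterprise-owned while other steps are abstracted.

Step = Callable[[dict], dict]

def vendor_extract(doc: dict) -> dict:
    # A step the enterprise chooses to delegate to a platform.
    doc["text"] = doc["raw"].strip()
    return doc

def in_house_classifier(doc: dict) -> dict:
    # Proprietary business logic the enterprise retains ownership of.
    doc["label"] = "priority" if "urgent" in doc["text"].lower() else "standard"
    return doc

def build_workflow(steps: List[Step]) -> Step:
    """Compose steps into a single callable pipeline."""
    def run(doc: dict) -> dict:
        for step in steps:
            doc = step(doc)
        return doc
    return run

workflow = build_workflow([vendor_extract, in_house_classifier])
print(workflow({"raw": "  URGENT: wire transfer hold  "})["label"])  # -> priority
```

Because each step shares the same interface, swapping `in_house_classifier` for a vendor model, or vice versa, changes one element of the list rather than the whole workflow, which is the ownership trade-off the composable model is meant to preserve.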

The changing software landscape requires new approaches and principles for procurement and evaluation. These approaches, much like the software they aim to procure and evaluate, are rapidly evolving as well. With traditional software procurement being characterized by lengthy evaluation cycles and approval processes, the rapid pace of AI innovation has created a tension: firms need to be agile enough to leverage new AI capabilities, but must also conduct thorough due diligence as to the implications of any new vendor on security, compliance, and more. Enterprises cannot afford to wait years to vet tools that may be outdated by the time of deployment, but they also cannot bypass risk and compliance checks. This is what we have seen in the field as best-in-class practices for procuring AI-native software today.
Choose Iteration Partners
Much like AI, the right solutions for an enterprise are often iterative. Waiting for the perfect solution prevents enterprises from keeping up with innovation cycles, but compliance and security should never be sacrificed. This means enterprises should consider adopting more agile and iterative procurement methodologies that are flexible and involve continuous collaboration and incremental delivery of value.
Procurement methodologies that involve lightweight PoCs, Pilot programs, or iterative collaboration allow organizations to test AI solutions on a smaller scale, assess their value, and identify potential risks before committing to full-scale deployment. In tandem, these processes often allow new technologies to find more ways to deliver value to a business. Finding a vendor that can co-build alongside your organization often leads to solutions that are more aligned with the needs of the business, and allows enterprises to deploy incremental improvements in the process.
Identify Use Cases First
Identifying your enterprise’s potential use cases in advance can benefit your process in multiple dimensions. Most notably, it prevents AI for AI’s sake and ensures procurement is driven by a business need. This can mean defining the problem and scope, identifying the key personnel who interact with or consume different parts of a process, and determining what resources and data are needed to train, fine-tune, or power a system.
Clarifying use cases, stakeholders, and outcomes in advance not only aligns the procurement process with the business, but it also prepares the enterprise for a more successful PoC or Pilot. Well defined inputs and expected outputs creates a clear measuring stick for vendors to understand how to deliver tangible value to a business.
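The "clear measuring stick" idea can be illustrated with a minimal scoring harness: a golden set of predefined inputs and expected outputs, scored against a stand-in candidate system. All data and logic below are hypothetical:

```python
# Minimal PoC scoring harness: predefined inputs with expected outputs
# act as the measuring stick for a vendor solution under evaluation.

golden_set = [
    ("refund request for order 123", "refunds"),
    ("cannot log in to portal", "support"),
    ("update billing address", "account"),
    ("invoice copy request", "billing"),
]

def candidate_solution(text: str) -> str:
    # Stand-in for the system under evaluation.
    if "refund" in text:
        return "refunds"
    if "log in" in text:
        return "support"
    return "account"

def accuracy(cases) -> float:
    """Fraction of golden-set cases the candidate gets right."""
    correct = sum(candidate_solution(text) == expected for text, expected in cases)
    return correct / len(cases)

print(f"PoC accuracy: {accuracy(golden_set):.0%}")  # -> PoC accuracy: 75%
```

Agreeing on the golden set and the target score before the pilot starts keeps the evaluation honest: the vendor knows exactly what "tangible value" means, and the enterprise avoids moving the goalposts mid-PoC.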
Clarify Your Strengths and Weaknesses
Enterprises should seek to find vendors that can “meet them where they are.” This means finding vendors that can complement the organization’s technical sophistication, while also being equipped to address domain-specific concerns such as security, compliance, and regulations. This can mean searching for vendors that offer horizontal insights across industries, deep expertise that complements an organization’s engineering team, and hands-on experience.
Cross-functional collaboration is often needed in this process to identify where there are gaps and where a new vendor can add the most value. The best vendors go beyond customized solutions: they offer best-in-class learnings across industries, helping the enterprise improve its operations and optimize workflows. Effective vendors offer thought leadership in their areas of expertise, but listen carefully to an enterprise’s specific needs.
Create Space for Sandboxes
Once a vendor is being seriously considered or evaluated, creating the right structured sandbox for assessment can be crucial. These sandboxes can be instrumental in offering a faster way to evaluate potential vendors, as they ideally closely mirror the enterprise’s production environment, with representative data and with clear monitoring capabilities.
These controlled environments provide an important space to rigorously assess potential solutions in relevant scenarios, moving beyond theoretical claims to practical and tangible applications to the business. By creating realistic yet isolated environments, companies can define and measure success against clear, pre-determined metrics, while accelerating time-to-value by allowing vendors to show relevant results. This ultimately mitigates risk, accelerates evaluation, and maximizes the potential impact that an AI-native solution can have.
Moving from theory to practice is the most critical step in any new strategic initiative. We hope the concepts in this framework can be used to spark a conversation with your key stakeholders—from engineering and product to security and business operations.
Contact us to schedule a strategic workshop on putting this framework into practice. We’ll explore how the Lemma platform provides the architectural foundation to build reliable, secure, and scalable AI workflows that don’t just get you started, but deliver compounding value for your AI initiatives.