From runtime architecture to process design
In a recent InfoWorld article, I introduced the concept of the Agent Tier — a runtime architecture that separates deterministic enterprise execution from contextual reasoning. The core idea was straightforward: as enterprise workflows incorporate more signals and adaptive models, embedding contextual judgment directly inside branching logic leads to increasingly fragile systems. A dedicated runtime layer can interpret context and determine the next appropriate action, while deterministic systems continue to enforce authoritative state transitions.
That architectural separation raises a deeper question. If contextual reasoning is handled by a dedicated runtime layer rather than embedded directly in workflow branches, how should enterprise workflows themselves be designed?
For decades, enterprise workflows have been structured as decision trees. Business rules define eligibility conditions, workflows encode branching logic and systems progress through predefined sequences of steps. The model works well when variation is limited and scenarios can be anticipated.
Modern operational workflows incorporate far more signals: behavioral indicators from digital channels, fraud detection scores, identity verification services, machine learning predictions and regulatory policy checks. These signals must often be interpreted together to determine how a case should progress.
Modern enterprise processes rarely rely on a single signal; decisions depend on how signals interact. An identity verification result that is acceptable in isolation may require scrutiny when combined with unusual device characteristics or inconsistent geolocation. As signals multiply, representing these interactions through explicit branches becomes difficult to maintain.
A different process model addresses this complexity: evidence-driven workflows. Rather than enumerating every possible path through a process in advance, evidence-driven workflows accumulate signals about a case and determine the next appropriate action dynamically. In this model, progression is governed not by static branching logic, but by the evolving evidence associated with the case.
To explore how such workflows behave operationally, I built a small prototype of an evidence-driven onboarding process described later in this article.
The limits of branch-driven workflows
Traditional enterprise workflow engines operate through decision trees. Each step evaluates a set of conditions and directs the process toward the appropriate branch. When inputs are limited and scenarios are predictable, this structure remains manageable.
A contemporary customer onboarding process may evaluate several categories of information:
- Identity confidence signals. Document verification, biometric validation and third-party identity services.
- Behavioral indicators. Interaction patterns within digital channels, typing cadence, session activity and navigation behavior.
- Risk signals. Fraud detection scores, device reputation, geolocation anomalies and network intelligence.
- Regulatory checks. Sanctions screening, anti-money laundering obligations and customer due diligence requirements.
Each category may contain multiple signals, and decisions depend on their combination. When designers attempt to represent these combinations through branching logic, several operational problems emerge:
- Branch explosion. The number of possible workflow paths grows exponentially.
- Logic fragmentation. Decision logic spreads across workflow engines, rule systems and application services.
- Excessive information collection. Workflows request information required for every branch rather than for the specific case.
- Operational escalation. Ambiguous cases default to manual review because branching logic cannot resolve them.
Over time, this fragmentation produces processes that are difficult to understand, difficult to modify and increasingly difficult to govern. The issue is not simply complexity. It is the assumption that designers can anticipate the important scenarios in advance.
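Branch explosion is easy to quantify. Assuming, for illustration, n independent signals each discretized into k outcomes, a fully enumerated decision tree needs k^n leaf paths:

```python
# With n independent signals, each discretized into k outcomes,
# a fully enumerated decision tree has k**n distinct leaf paths.
# (Illustrative model: real signals are rarely fully independent.)
def enumerated_paths(n_signals: int, outcomes_per_signal: int = 3) -> int:
    return outcomes_per_signal ** n_signals
```

Four three-valued signals already imply 81 paths; twelve imply more than half a million, which is why explicit branching stops scaling long before modern signal volumes are reached.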
These limitations point to a different way of structuring workflow progression. Instead of encoding all possible paths in advance, workflows can be designed to accumulate signals about a case and evaluate them as evidence. Progression is then determined dynamically based on the evolving state of that evidence rather than predefined branches.
In this model, the workflow does not attempt to anticipate every scenario. Instead, it assembles context, evaluates the strength of available evidence and determines the next appropriate action at each stage. As additional signals are gathered, the understanding of the case evolves, and the workflow adapts accordingly. Figure 1 illustrates the difference between static branching and evidence-driven progression.
Figure 1: Static branching versus evidence-driven progression. (Image: Nitesh Varma)
How complex operational systems handle uncertainty
In operational environments where uncertainty and signal complexity are high, decision trees have long been replaced by more adaptive control structures.
Military command systems provide a well-known example through the OODA loop (Observe → Orient → Decide → Act) developed by strategist John Boyd. Rather than predicting every possible scenario, commanders observe signals from the environment, interpret the situation, determine an appropriate action and execute it. Each outcome generates new signals that initiate another cycle of evaluation.
A similar pattern appears in domains such as air traffic control, where controllers continuously assess aircraft positions, trajectories, weather and runway availability to determine the next instruction. Across these environments, progression follows a recurring pattern: signals are observed, context is interpreted, an action is selected and the outcome informs the next cycle. The key distinction is that progression is governed not by predetermined branches but by the continuous interpretation of evidence. As enterprise systems incorporate more signals and adaptive models, operational workflows increasingly resemble these reasoning loops.
Evidence-driven workflows in enterprise systems
Evidence-driven workflows apply this reasoning-loop structure to enterprise operations. A typical evidence-driven workflow cycle includes several stages:
- Context assembly. Signals from operational systems are aggregated into a contextual representation of the case.
- Evidence evaluation. The system evaluates the signals to determine confidence levels associated with potential outcomes.
- Action selection. Based on that evaluation, the workflow determines the next appropriate step.
- Execution. Deterministic enterprise systems carry out the selected action.
- Context update. The outcome updates the contextual state for the next evaluation cycle.
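The stages above can be sketched as a small loop. This is a minimal illustration under assumed signal names, weights and thresholds, not a reference implementation; context assembly is represented by the dictionary passed in, and `execute` stands in for the deterministic systems.

```python
# A minimal sketch of one evidence-driven cycle. Function names,
# signal keys, weights and thresholds are illustrative assumptions.

def evaluate_evidence(context):
    """Combine the signals gathered so far into one confidence score."""
    weights = {"identity_score": 0.6, "fraud_risk": -0.4}
    return sum(w * context.get(key, 0.0) for key, w in weights.items())

def select_action(context, confidence):
    """Pick the next step from the current evidence, not a fixed branch."""
    if "fraud_risk" not in context:
        return "run_fraud_check"            # gather missing evidence first
    if confidence >= 0.5:
        return "approve_account"            # evidence supports approval
    return "escalate_to_manual_review"      # ambiguous: hand off to a human

def run_cycle(context, execute):
    """One pass: evaluate evidence, select an action, execute, update context."""
    confidence = evaluate_evidence(context)      # evidence evaluation
    action = select_action(context, confidence)  # action selection
    outcome = execute(action, context)           # deterministic execution
    context.update(outcome)                      # context update for next cycle
    return action, context
```

Starting from an assembled context such as `{"identity_score": 0.9}`, the first cycle recommends a fraud check because that evidence is missing; once the check returns a low risk score, the combined confidence clears the threshold and the next cycle approves.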
A useful metaphor comes from intelligence analysis. Analysts rarely rely on a single signal. Instead, they assemble information from multiple sources and gradually build a picture of the situation as evidence accumulates. Each new signal may confirm, contradict, or refine the emerging hypothesis.
Seen this way, workflows resemble evidence-gathering processes rather than fixed sequences of steps. The objective is not to follow a predefined path but to reach sufficient confidence to perform an authoritative action. Each stage contributes signals that clarify the situation. Straightforward cases progress quickly because the evidence already supports a decision, while ambiguous cases trigger additional verification or human review. The workflow adapts to the information available rather than forcing every case through the same path.
This approach offers practical advantages. Workflows request additional information only when confidence is insufficient, and process designers can focus on evidence evaluation logic rather than anticipating every possible scenario.
In this prototype, the reasoning layer drives this evaluation cycle but does not execute actions directly. Instead, it evaluates the accumulated evidence associated with the case and recommends the next step in the workflow. Deterministic systems still perform the execution, with policy validation ensuring that only permitted actions occur. In effect, the reasoning layer functions as an agentic orchestrator — interpreting evidence and guiding workflow progression while leaving operational authority with the underlying systems.
A small prototype of an evidence-driven workflow
The prototype simulated a simplified customer onboarding process. Instead of encoding the workflow as a fixed decision tree, the system maintained a contextual representation of the case that evolved after each step.
At any moment, the case contained a set of signals such as:
- Identity confidence from verification services
- Biometric match strength
- Fraud detection scores
- Device and geolocation risk indicators
- Sanctions screening results
- Documentation submitted by the applicant
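One way to represent such a case is a simple record whose empty fields tell the workflow what evidence is still missing. The field names below are assumptions chosen to mirror the list above, not the prototype's actual schema.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class CaseContext:
    """Evolving contextual representation of an onboarding case (illustrative)."""
    identity_confidence: Optional[float] = None   # verification services
    biometric_match: Optional[float] = None       # biometric match strength
    fraud_score: Optional[float] = None           # fraud detection
    device_risk: Optional[float] = None           # device/geolocation risk
    sanctions_clear: Optional[bool] = None        # sanctions screening
    documents: List[str] = field(default_factory=list)  # submitted documentation

    def missing_evidence(self) -> List[str]:
        """Signals not yet gathered: these drive what the workflow requests next."""
        return [name for name, value in vars(self).items() if value is None]
```

Because the record is explicit about what is unknown, "request more information" becomes a query over the context rather than a branch in the process definition.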
Rather than evaluating these signals through static workflow branches, the system followed a reasoning cycle like the one described earlier:
- Context assembly. Signals from identity verification, fraud scoring and device intelligence were aggregated into a case context.
- Evidence evaluation. The system evaluated the strength of identity and fraud evidence based on the signals available.
- Action recommendation. A reasoning layer determined the next appropriate step (for example, run a fraud check, request documents, or approve the account).
- Deterministic execution. Enterprise services executed the recommended action.
- Context evolution. The outcome of that action updated the case context, which then informed the next reasoning cycle.
Each action, therefore, strengthened or clarified the evidence associated with the case.
In one simulated onboarding case, the workflow progressed through the following sequence:
Run identity verification
→ Run fraud check
→ Approve account
Identity verification increased confidence in the applicant’s identity, while the fraud check strengthened the fraud-risk assessment. Once sufficient evidence had accumulated, approval could be executed with high confidence.
In another case, conflicting signals produced a different path:
Run fraud check
→ Request additional documentation
→ Escalate to manual review
Here, strong device-risk signals and geolocation inconsistencies prevented a confident automated decision, triggering escalation to human review.
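A hypothetical recommendation rule reproducing this second path might look like the following; the field names and thresholds are invented for illustration.

```python
def recommend(ctx: dict) -> str:
    """Map conflicting risk evidence onto the escalation path (assumed thresholds)."""
    risky_device = ctx.get("device_risk", 0.0) > 0.7
    geo_conflict = ctx.get("geo_inconsistent", False)
    if risky_device or geo_conflict:
        if "fraud_score" not in ctx:
            return "run_fraud_check"                   # gather more evidence
        if not ctx.get("extra_documents"):
            return "request_additional_documentation"  # try to resolve the conflict
        return "escalate_to_manual_review"             # evidence still conflicting
    return "approve_account"
```

Tracing a context with high device risk and inconsistent geolocation through successive calls yields exactly the fraud check, documentation request and manual-review escalation seen in the trace above.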
The prototype revealed a key insight: each workflow action contributes evidence that clarifies the case context. Straightforward cases converge quickly because sufficient evidence emerges early. More ambiguous cases trigger additional evidence gathering until the system either reaches confidence or escalates for manual review.
This pattern illustrates how reasoning loops can guide workflow progression while deterministic systems continue to enforce authoritative outcomes.
From agent tier architecture to evidence-driven processes
Seen in this light, evidence-driven workflows represent the process-design counterpart to the runtime architecture introduced by the Agent Tier.
In many enterprise architectures today, contextual reasoning is distributed across several layers of the system. Channel applications embed validation logic, orchestration services implement routing rules, rule engines enforce policy conditions and operations teams intervene when ambiguity arises. Over time, this distribution fragments the decision logic that governs workflow progression. The Agent Tier architecture addresses this fragmentation by concentrating contextual reasoning within a dedicated runtime layer, allowing signals to be interpreted in a unified context while deterministic systems continue to enforce authoritative state transitions.
In this model:
- Deterministic enterprise systems remain responsible for authoritative actions such as creating accounts, approving transactions, enforcing regulatory requirements and updating systems of record.
- Contextual reasoning layers evaluate signals, interpret evolving evidence and determine how workflows should progress.
The Agent Tier becomes the component that evaluates context and recommends the next step in the workflow. Deterministic systems then execute that step and update the operational state.
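This handoff can be enforced with a plain policy check at the execution boundary. The action names and handler mechanism below are illustrative assumptions, not part of any specific product.

```python
# The reasoning layer only recommends; the deterministic layer validates
# every recommendation against policy before anything is executed.
ALLOWED_ACTIONS = {
    "run_identity_verification",
    "run_fraud_check",
    "request_documents",
    "approve_account",
    "escalate_to_manual_review",
}

def execute_recommendation(action: str, handlers: dict):
    """Run a recommended action only if policy permits it."""
    if action not in ALLOWED_ACTIONS:
        raise PermissionError(f"action {action!r} is not permitted by policy")
    return handlers[action]()  # a deterministic system performs the work
```

Because the whitelist lives on the deterministic side of the boundary, a faulty or compromised recommendation can never widen the set of actions the enterprise will actually perform.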
This separation clarifies responsibility within the architecture. Deterministic systems maintain governance and authoritative records, while reasoning systems interpret signals and guide workflow progression. Figure 2 shows how the Agent Tier acts as the reasoning layer within an evidence-driven workflow.
Figure 2: The Agent Tier as the reasoning layer within an evidence-driven workflow. (Image: Nitesh Varma)
Designing workflows for adaptive systems
Traditional enterprise workflows assumed that designers could enumerate the important scenarios in advance, an assumption increasingly challenged in practice as decision-tree–based approaches become difficult to maintain and evolve at scale.
Today’s enterprise systems operate in a very different environment. Digital channels produce continuous behavioral telemetry. External verification services introduce additional sources of information. Machine learning models generate probabilistic risk assessments that must be interpreted alongside deterministic policy rules.
In such environments, rigid decision trees become increasingly difficult to sustain. Evidence-driven workflows offer a more adaptable model. By allowing context to evolve through evidence accumulation, workflows can respond dynamically to real-world variability while maintaining the governance and reliability required for enterprise systems.
For CIOs and architects, the challenge is no longer simply automating process steps. It is designing systems in which evidence determines progression, contextual reasoning interprets signals and deterministic systems continue to enforce authoritative outcomes.
Evidence-driven workflows represent a shift in how enterprise processes can be designed. As signals proliferate and conditions evolve dynamically, the central challenge becomes interpreting context rather than enumerating scenarios. In this model, workflows are guided by continuous evaluation of evidence rather than predefined paths.
Seen in this broader context, the Agent Tier is not simply a runtime architecture pattern. It is part of a broader evolution in enterprise systems — one in which workflows increasingly resemble reasoning loops rather than decision trees, and operational decisions emerge from the interpretation of evidence rather than predefined branching logic.
This article is published as part of the Foundry Expert Contributor Network.