Workflows, Graphs, and Nodes¶
A Tryll agent is not a prompt. It is a graph of small, single-purpose steps that you wire together. Each user turn re-walks the graph; each step decides where to go next. This page explains why Tryll is built this way and how the pieces fit.
The building blocks¶
A graph has three things:
- A set of nodes, each with a name and a node type: `Generate`, `Retrieve`, `ToolCall`, `CannedResponse`, or `HumanMessageGuardrail`.
- A set of routes, triples of `(from_node, exit_name, to_node)`, that tell the agent which node to run next based on the previous node's exit.
- A start id, the first node to run on every user turn.
Graphs are authored once, sent to the server with `CreateAgentRequest`, frozen at agent creation, and then re-walked for every turn the agent lives.
Here is the simplest useful graph — a guardrail in front of a generator, with a scripted refusal on the rejected path:
```mermaid
flowchart LR
    guard["HumanMessageGuardrail<br>guard"]
    refuse["CannedResponse<br>refuse"]
    gen["Generate<br>answer"]
    guard -- "triggered" --> refuse
    guard -- "not_triggered" --> gen
    refuse -- "default" --> END
    gen -- "default" --> END
```
The same graph, in each client:
```python
from tryll_client import GraphDescription, NodeType

graph = (
    GraphDescription()
    .add_node("guard", NodeType.HumanMessageGuardrail,
              {"string_storage": "jailbreak_patterns"})
    .add_node("refuse", NodeType.CannedResponse,
              {"string_storage": "refusal_lines"})
    .add_node("answer", NodeType.Generate)
    .wire("guard", "triggered", "refuse")
    .wire("guard", "not_triggered", "answer")
    .wire("refuse", "default", "END")
    .wire("answer", "default", "END")
    .set_start_node("guard")
    .set_default_model_name("My Local Model")
)
```
```cpp
namespace TC = Tryll::Client;

TC::GraphDescription graph;
graph.AddNode("guard", TC::NodeType::HumanMessageGuardrail,
              {{"string_storage", "jailbreak_patterns"}})
     .AddNode("refuse", TC::NodeType::CannedResponse,
              {{"string_storage", "refusal_lines"}})
     .AddNode("answer", TC::NodeType::Generate)
     .Wire("guard", "triggered", "refuse")
     .Wire("guard", "not_triggered", "answer")
     .Wire("refuse", "default", "END")
     .Wire("answer", "default", "END")
     .SetStartNode("guard")
     .SetDefaultModelName("My Local Model");
```
```cpp
FTryllGraphDescription Graph = FTryllGraphBuilder()
    .AddNode(TEXT("guard"), ETryllNodeType::HumanMessageGuardrail,
             {{TEXT("string_storage"), TEXT("jailbreak_patterns")}})
    .AddNode(TEXT("refuse"), ETryllNodeType::CannedResponse,
             {{TEXT("string_storage"), TEXT("refusal_lines")}})
    .AddNode(TEXT("answer"), ETryllNodeType::Generate)
    .Wire(TEXT("guard"), TEXT("triggered"), TEXT("refuse"))
    .Wire(TEXT("guard"), TEXT("not_triggered"), TEXT("answer"))
    .Wire(TEXT("refuse"), TEXT("default"), TEXT("END"))
    .Wire(TEXT("answer"), TEXT("default"), TEXT("END"))
    .SetStartNode(TEXT("guard"))
    .SetDefaultModelName(TEXT("My Local Model"))
    .Build();
```
Exit routes: how nodes "talk" to each other¶
Nodes do not call each other. They return a named exit, and the agent follows the matching route. Every node type publishes its possible exits up-front — see the table in Workflow Nodes.
For example:
| Node type | Exits |
|---|---|
| `Generate` | `default` |
| `Retrieve` | `found`, `not_found` |
| `ToolCall` | `tool_called`, `no_tool_called` |
| `CannedResponse` | `default` |
| `HumanMessageGuardrail` | `triggered`, `not_triggered` |
Every exit a node can produce must be wired to a target, or the graph fails validation at `CreateAgent` time. The target is either another node or the sentinel string `"END"`.
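The check amounts to set coverage over the exits table above. A minimal sketch in Python (`NODE_EXITS` and `validate` are illustrative stand-ins, not the server's actual validator or part of the client API):

```python
# Illustrative sketch of the exit-wiring check done at CreateAgent time.
NODE_EXITS = {
    "Generate": {"default"},
    "Retrieve": {"found", "not_found"},
    "ToolCall": {"tool_called", "no_tool_called"},
    "CannedResponse": {"default"},
    "HumanMessageGuardrail": {"triggered", "not_triggered"},
}

def validate(nodes, routes):
    """nodes: {name: node_type}; routes: {(from_node, exit_name): to_node}."""
    errors = []
    for name, node_type in nodes.items():
        for exit_name in NODE_EXITS[node_type]:
            if (name, exit_name) not in routes:
                errors.append(f"{name}: exit '{exit_name}' is not wired")
    for (src, _), dst in routes.items():
        if dst != "END" and dst not in nodes:
            errors.append(f"route from {src} targets unknown node {dst}")
    return errors

nodes = {"guard": "HumanMessageGuardrail", "answer": "Generate"}
routes = {("guard", "not_triggered"): "answer", ("answer", "default"): "END"}
print(validate(nodes, routes))   # the unwired 'triggered' exit is reported
```

A graph with every exit routed produces an empty error list and is accepted.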
The per-turn loop¶
When a `SendMessage` arrives, the agent:

1. Appends a new interaction to its dialog, starting with the user's message.
2. Walks the graph from `start_at`, one node at a time. Each node:
    - reads from the current interaction (and anything earlier nodes attached to it),
    - does its work (retrieve from an index, run the model, match a pattern, pick a canned line, …),
    - optionally attaches new components to the interaction, and
    - returns its chosen exit.
3. Follows the route for that exit to the next node.
4. Stops when the route target is `"END"`, when the step budget is exceeded, or when the turn is cancelled.
What each node attaches along the way is what makes the dialog itself useful state. A `Retrieve` node attaches retrieved knowledge to the current turn, and a downstream `Generate` node's projection picks that knowledge up and folds it into the prompt.
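The walk itself is a small interpreter loop. The sketch below is illustrative only — node callables, the dialog shape, and `run_turn` are invented stand-ins, not the server implementation:

```python
# Hypothetical per-turn walk: append an interaction, then follow exits
# until a route targets "END" or the step budget runs out.
def run_turn(nodes, routes, start_at, dialog, user_message, max_steps=64):
    dialog.append({"user": user_message, "components": []})  # new interaction
    interaction = dialog[-1]
    current, steps = start_at, 0
    while current != "END":
        if steps >= max_steps:
            raise RuntimeError("step budget exceeded")
        exit_name = nodes[current](interaction)   # node does its work
        current = routes[(current, exit_name)]    # follow the matching route
        steps += 1
    return interaction

# Stub nodes: a guard that never triggers, and a canned "generator" that
# attaches a component to the current interaction.
nodes = {
    "guard": lambda i: "not_triggered",
    "answer": lambda i: (i["components"].append("generated reply"), "default")[1],
}
routes = {
    ("guard", "triggered"): "END",
    ("guard", "not_triggered"): "answer",
    ("answer", "default"): "END",
}
dialog = []
run_turn(nodes, routes, "guard", dialog, "hello")
```

Note that the graph structures (`nodes`, `routes`) are read-only here; only `dialog` changes, which mirrors how state is kept off the graph.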
Why a graph instead of a prompt template¶
Three reasons:
1. Composition. "Retrieve, then generate" is different from "check for jailbreak first, then retrieve, then generate". Prompt templates push this branching into strings; a graph makes it executable and inspectable. Every node is typed and diagnostics-friendly.
2. Cost control. The graph lets you short-circuit expensive work. A guardrail that routes to a canned response never touches the model. A tool-call node with `generate_on_no_tool=false` (experimental) does zero extra generation on a miss.
3. Reuse. Graphs are data. You can build them at runtime from C++, Python, or Unreal, ship them in a `UTryllWorkflowAsset`, or construct them dynamically from user configuration. The server does not know or care where the graph came from.
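Because graphs are data, building one from user configuration is just a loop over plain records. A sketch under assumed shapes (the config layout and `build_graph` are invented for illustration; in practice you would feed the same records into the client's `GraphDescription` builder):

```python
# Hypothetical config-driven graph construction. The dict shapes here are
# illustrative, not a Tryll wire format.
config = {
    "start": "guard",
    "nodes": [
        {"name": "guard", "type": "HumanMessageGuardrail",
         "params": {"string_storage": "jailbreak_patterns"}},
        {"name": "refuse", "type": "CannedResponse",
         "params": {"string_storage": "refusal_lines"}},
        {"name": "answer", "type": "Generate", "params": {}},
    ],
    "routes": [
        ["guard", "triggered", "refuse"],
        ["guard", "not_triggered", "answer"],
        ["refuse", "default", "END"],
        ["answer", "default", "END"],
    ],
}

def build_graph(cfg):
    # Index nodes by name and routes by (from_node, exit_name).
    nodes = {n["name"]: (n["type"], n["params"]) for n in cfg["nodes"]}
    routes = {(src, exit_name): dst for src, exit_name, dst in cfg["routes"]}
    return {"start": cfg["start"], "nodes": nodes, "routes": routes}

graph = build_graph(config)
```

The same records could equally come from a JSON file, a database row, or in-game user settings; the server sees only the resulting graph description.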
Turns are serial; graphs are stateless between turns¶
An agent runs one turn at a time. If a second `SendMessage` arrives while the previous turn is still running, it is rejected with error 3001; the in-flight turn keeps running. Cancellation is explicit (`DestroyAgent`, socket close), not pre-emption.
The graph itself carries no per-turn state. All state lives on the agent's dialog (the accumulated interactions) and in the per-node KV caches. This is why the same graph description can be replayed cleanly every turn — the work of "what has been said" is separate from the work of "what to do next".
Edges and pitfalls¶
- Routing loops. Nothing stops you from routing node A → node A; the agent has a configurable `max_steps_per_turn` budget (default 64) that breaks infinite loops by aborting the turn with an error.
- Unused exits still need wiring. Even an exit you "know" will never fire must be routed somewhere, typically straight to `"END"`. The validator is strict.
- Model sharing is cross-graph, not cross-node. Two `Generate` nodes in one graph that use the same model share the underlying weights (good) but have independent KV caches (necessary: they see different prompts). Budget token-count accordingly; see Projection and Token Budgets.
- The start node runs every turn. If your start node is `Generate`, it will generate every turn, even if a later node would have short-circuited. Put guardrails and retrieval before the model, not after, if you want them to actually gate work.
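The step-budget behavior behind the first pitfall can be sketched with a toy walker (hypothetical code; only the default of 64 comes from `max_steps_per_turn` above):

```python
# Illustrative: a routing loop (a -> a) is broken by the step budget
# rather than running forever. Not the real server code.
def walk(routes, start, max_steps_per_turn=64):
    current, steps = start, 0
    while current != "END":
        if steps >= max_steps_per_turn:
            raise RuntimeError(f"turn aborted after {steps} steps")
        current = routes[(current, "default")]  # every node exits 'default' here
        steps += 1
    return steps

routes = {("a", "default"): "a"}   # node wired back to itself
try:
    walk(routes, "a")
except RuntimeError as e:
    print(e)   # turn aborted after 64 steps
```

The budget turns an infinite loop into a bounded, diagnosable failure; a well-formed route to `"END"` terminates normally.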