# Generate
The Generate node runs language-model inference. It is the workhorse of every Tryll workflow: it takes the current dialog, passes it through the agent's projection to build a prompt, and emits the model's response — either as a single chunk or streamed token-by-token.
NodeType: Generate.
## Parameters

All parameters are optional. Each is a string key/value pair on `NodeDescription.params`.
### Generation behaviour

The Mutable column indicates whether the parameter can be changed at runtime via `ChangeParam` (see the sketch after the table).
| Key | Default | Mutable | Description |
|---|---|---|---|
| `model_name` | (agent default) | No | Catalog name of the language model to run. Falls back to the graph's `default_model_name`; at least one of the two must be set. |
| `system_prompt` | `""` | Yes | Text injected at the head of the prompt by the default projection strategy. The KV cache is rewound lazily on the next `SendMessage` call. |
| `stream` | `"false"` | Yes | `"true"` to emit tokens one-by-one as `AnswerText` chunks; `"false"` to emit a single `AnswerText` with the full response and `is_final = true`. |
### Sampling overrides
Each field overrides the corresponding value from the model's `default_sampling` (see model management). All fields are optional strings, parsed as the type shown; a usage sketch follows the table. All sampling parameters are mutable at runtime.
| Key | Type | Description |
|---|---|---|
| `temperature` | float | Sampling temperature. |
| `top_p` | float | Nucleus sampling threshold. |
| `top_k` | int | Top-K cut-off. |
| `min_p` | float | Min-P filter. |
| `max_tokens` | int | Generated-token cap for this node. |
| `seed` | uint32 | RNG seed (0 = random per turn). |
| `repeat_penalty` | float | Penalty for repeating tokens. |
| `presence_penalty` | float | OpenAI-style presence penalty. |
| `frequency_penalty` | float | OpenAI-style frequency penalty. |
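Sampling overrides travel in the same string params map as the generation parameters, so they can be set when the node is declared. A sketch using the builder API from the example at the bottom of this page; the values are illustrative, not recommended defaults:

```python
from tryll_client import GraphDescription, NodeType

# Sketch: per-node sampling overrides, all passed as strings and parsed
# as the types listed in the table above.
graph = (
    GraphDescription()
    .add_node("answer", NodeType.Generate, {
        "temperature": "0.2",   # lower temperature, more deterministic replies
        "top_p": "0.9",
        "max_tokens": "256",    # cap generated tokens for this node only
        "seed": "0",            # 0 = new random seed each turn
    })
    .wire("answer", "default", "END")
    .set_start_node("answer")
    .set_default_model_name("My Local Model")
)
```

Because every sampling parameter is mutable, the same keys can later be adjusted through `ChangeParam` without rebuilding the graph.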
## Exit routes
| Route | Fires when |
|---|---|
| `default` | Always; Generate never branches. Wire this to the next node (see the sketch below), or to `"END"` to terminate the turn. |
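A sketch of the two wiring options, reusing the Python builder from the example at the bottom of this page; `"postprocess"` is a hypothetical follow-up node name:

```python
# Route the single "default" exit to a hypothetical follow-up node:
graph.wire("answer", "default", "postprocess")

# ...or, to finish the turn straight after generation, wire it to "END" instead:
# graph.wire("answer", "default", "END")
```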
## Side effects
- Appends the generated answer to the current turn as the assistant's reply.
- Emits one or more `AnswerText` frames to the client (consumed as sketched below).
- Updates the per-model KV cache.
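How these frames surface depends on the client binding. A minimal Python sketch, assuming `send_message` yields frames and that `AnswerText` frames expose `text` and `is_final` fields (assumed spellings; only the frame names come from this page):

```python
# Hypothetical sketch: accumulate streamed AnswerText chunks into one reply.
# `agent.send_message(...)`, `frame.text`, and `frame.is_final` are assumed names.
reply = ""
for frame in agent.send_message("Summarise the last turn."):
    reply += frame.text       # one chunk per token when stream = "true"
    if frame.is_final:        # full response arrives in a single frame when "false"
        break
print(reply)
```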
## Diagnostics
When the agent has `enable_diagnostics = true`, the node's contribution to `TurnComplete.debug_info` includes the projected prompt and the model name that was actually run (after the `model_name` / `default_model_name` fallback).
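A sketch of reading that information in Python, assuming diagnostics are enabled when the agent is created and that the `TurnComplete` frame arrives at the end of the frame stream with a `debug_info` attribute (the field name comes from this page; the access pattern is an assumption):

```python
# Hypothetical sketch: inspect the Generate node's diagnostics after a turn.
# `enable_diagnostics=True` on create_agent and the frame iteration below are
# assumed spellings for the behaviour described above.
agent = client.create_agent(graph, enable_diagnostics=True)

for frame in agent.send_message("Hello"):
    if hasattr(frame, "debug_info"):   # the TurnComplete frame ends the turn
        print(frame.debug_info)        # projected prompt + model actually run
```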
## Minimum working example
**Python**

```python
from tryll_client import GraphDescription, NodeType

graph = (
    GraphDescription()
    .add_node("answer", NodeType.Generate, {
        "stream": "true",
        "system_prompt": "You are a terse assistant.",
    })
    .wire("answer", "default", "END")
    .set_start_node("answer")
    .set_default_model_name("My Local Model")
)
agent = client.create_agent(graph)
```
**C++**

```cpp
namespace TC = Tryll::Client;

TC::GraphDescription graph;
graph.AddNode("answer", TC::NodeType::Generate, {
         {"stream", "true"},
         {"system_prompt", "You are a terse assistant."},
     })
    .Wire("answer", "default", "END")
    .SetStartNode("answer")
    .SetDefaultModelName("My Local Model");

auto agent = client.CreateAgent(graph);
```
**Unreal**

```cpp
FTryllGraphDescription Graph = FTryllGraphBuilder()
    .AddNode(TEXT("answer"), ETryllNodeType::Generate, {
        {TEXT("stream"), TEXT("true")},
        {TEXT("system_prompt"), TEXT("You are a terse assistant.")},
    })
    .Wire(TEXT("answer"), TEXT("default"), TEXT("END"))
    .SetStartNode(TEXT("answer"))
    .SetDefaultModelName(TEXT("My Local Model"))
    .Build();
```
Or author the same nodes inside a `UTryllWorkflowAsset` in the Content Browser and assign it to a `UTryllAgentComponent`.
## Client bindings
- C++: `Tryll::Client::GraphDescription::AddNode(name, NodeType::Generate, params)` (GraphDescription.h)
- Python: `tryll_client.GraphDescription.add_node(name, NodeType.Generate, params)` (graph.py)
- Unreal: add an `FTryllNodeDesc` with `Type = Generate` to `FTryllGraphDescription.Nodes` (TryllGraphDescription.h)