Define and Handle Tool Calls¶
Declare a tool the model can "call", attach it to a
ToolCall node, receive
ToolCallNotification frames on the client, and route the graph on
whether a call happened.
Prerequisites
- A session connected and configured.
- A language model whose chat template family you know (Llama-3, ChatML, Mistral, …). See Tool Calling concept for the four format families Tryll supports.
The pattern¶
flowchart LR
tc["ToolCall<br>detect"]
gen["Generate<br>answer"]
tc -- "tool_called" --> END
tc -- "no_tool_called" --> gen
gen -- "default" --> END
If the model emitted a tool call, we fire the notification and stop
(the client will run the tool and send the result back in a later
turn). If no tool call was detected, a Generate node answers
normally.
Step 1 — declare the tools¶
Tools are declared per-node with ToolDef + ToolParamDef:
from tryll_client import ToolDef, ToolParamDef

set_light = ToolDef(
    name="set_light",
    description="Turn a named light on or off.",
    parameters=[
        ToolParamDef("name", "string",
                     "Human-readable name of the light, e.g. 'porch'."),
        ToolParamDef("on", "boolean",
                     "true to turn on, false to turn off."),
    ],
)
In Unreal, author FTryllToolDefinition entries inside your
UTryllWorkflowAsset (a Content Browser data asset) alongside the
graph, or build them at runtime in C++ and assign them to the
UTryllAgentComponent's graph.
Step 2 — build the graph with a ToolCall node¶
from tryll_client import GraphDescription, NodeType

graph = (
    GraphDescription()
    .add_tool_call_node("detect", [set_light], {
        "tool_call_format": "llama3",  # match the model family
        "notify_client": "true",
        "generate_on_no_tool": "false",
        "system_prompt": "You are a smart-home controller.",
    })
    .add_node("answer", NodeType.Generate)
    .wire("detect", "tool_called", "END")
    .wire("detect", "no_tool_called", "answer")
    .wire("answer", "default", "END")
    .set_start_node("detect")
    .set_default_model_name("Llama-3.2-3B-Instruct")
)
agent = client.create_agent(graph)
TC::GraphDescription graph;
graph.AddToolCallNode("detect", {setLight}, {
         {"tool_call_format", "llama3"},
         {"notify_client", "true"},
         {"generate_on_no_tool", "false"},
         {"system_prompt", "You are a smart-home controller."},
     })
     .AddNode("answer", TC::NodeType::Generate)
     .Wire("detect", "tool_called", "END")
     .Wire("detect", "no_tool_called", "answer")
     .Wire("answer", "default", "END")
     .SetStartNode("detect")
     .SetDefaultModelName("Llama-3.2-3B-Instruct");

auto agent = client.CreateAgent(graph);
See the full param list in ToolCall node reference.
Step 3 — receive the notification client-side¶
Register a callback on the AgentProxy before the first send_message call.
The callback fires on the reader / background thread for every
ToolCallNotification frame the server sends — keep it short and
non-blocking.
import json

def on_tool_call(tool_name: str, arguments_json: str) -> None:
    args = json.loads(arguments_json)
    if tool_name == "set_light":
        print(f"[tool] set_light name={args['name']} on={args['on']}")
        # execute the real action here

agent.set_on_tool_call(on_tool_call)
The callback receives (tool_name: str, arguments_json: str).
Call agent.set_on_tool_call(None) to unregister.
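Because the callback fires on the reader thread, a common pattern is to enqueue the call there and execute the real tool from your main loop. A minimal sketch, assuming this hand-off scheme (the queue and helper names below are illustrative, not part of the Tryll API):

```python
import json
import queue

# Thread-safe hand-off: the reader-thread callback only enqueues;
# the main loop drains the queue and runs the real tool.
pending_tool_calls: queue.Queue = queue.Queue()

def on_tool_call(tool_name: str, arguments_json: str) -> None:
    # Fires on the reader thread -- parse and enqueue, nothing else.
    pending_tool_calls.put((tool_name, json.loads(arguments_json)))

def drain_tool_calls() -> list:
    # Call this from your main loop each tick; never blocks.
    drained = []
    while True:
        try:
            drained.append(pending_tool_calls.get_nowait())
        except queue.Empty:
            return drained
```

Register it as before with `agent.set_on_tool_call(on_tool_call)` and call `drain_tool_calls()` from your main loop, keeping the callback itself short and non-blocking.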
agent.SetOnToolCall(
    [](std::string_view toolName, std::string_view argsJson)
    {
        // Fires on the reader thread — keep this non-blocking.
        if (toolName == "set_light")
        {
            // parse argsJson (e.g. with nlohmann::json or similar)
            std::cout << "[tool] set_light args=" << argsJson << "\n";
        }
    });
Pass an empty (default-constructed) AgentProxy::ToolCallCallback
to unregister: agent.SetOnToolCall({});
Bind On Tool Call on UTryllSubsystem to a Blueprint or C++
handler. The delegate signature is
(int64 AgentId, const FString& ToolName, const FString& ArgumentsJson);
parse the JSON with FJsonSerializer or your preferred library.
auto* subsystem = GetGameInstance()->GetSubsystem<UTryllSubsystem>();
subsystem->OnToolCall.AddDynamic(this, &ThisClass::HandleToolCall);

void AThisClass::HandleToolCall(int64 AgentId,
                                const FString& ToolName,
                                const FString& ArgumentsJson)
{
    if (ToolName == TEXT("set_light")) { /* parse JSON, act */ }
}
Note: the Unreal delegate is session-level (one binding on the
subsystem receives calls for all agents, with AgentId to
distinguish them). The C++ and Python callbacks are
per-agent (registered on each AgentProxy).
Step 4 — feed the result back (optional)¶
Tryll does not keep a dedicated "tool result" channel. The cleanest
way to give the model the result is to push it into a downstream
node's system_prompt via
ChangeParam, then continue the
conversation normally. The tool result lives as system context
(authoritative metadata) instead of polluting the dialog history
with fake user turns.
The flow:
- Your tool-call handler runs the real tool and captures the result.
- Before the next `send_message`, call `change_param` on the `answer`
node (or whichever downstream node should see the context) to update
its `system_prompt`.
- Send the next user message — the `Generate` node now has the tool
result available in its prompt.
# Inside your on_tool_call handler, after running the real tool:
agent.change_param(
    "answer",
    "system_prompt",
    "Recent tool executions:\n"
    "- set_light(name='porch', on=true) → OK, porch light is now on.",
)

# Continue the conversation. The Generate node's new system_prompt
# takes effect on the next send_message.
reply = agent.send_message("Done — anything else?")
// Inside your SetOnToolCall callback, after running the real tool:
agent.ChangeParam("answer", "system_prompt",
    "Recent tool executions:\n"
    "- set_light(name='porch', on=true) → OK, porch light is now on.");

agent.SendText("Done — anything else?",
    [](std::string_view text, bool, bool)
    { std::cout << text << std::flush; });
From your OnToolCall handler, call
UTryllAgentComponent::ChangeParam with NodeName="answer",
ParamKey="system_prompt", and a string summarising the tool
result. Then call SendMessage to continue the turn.
A few things to know:
- `change_param` replaces the stored value, it does not append. If you
want multiple tool results accumulated over several turns, keep the
full "Recent tool executions:" string on your client and re-send it
each time.
- Updating `system_prompt` does not flush the KV cache immediately — it
is re-decoded during the next `SendMessage`, so a back-to-back
`change_param` + `send_message` pair only costs one re-decode. See
Change Agent Parameters → `system_prompt` and KV-cache rewind.
- `change_param` fails with error `3004 AgentBusy` if a turn is in
flight. Call it between turns, not from inside a streaming callback
without first awaiting `TurnComplete`.
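Since the stored value is replaced rather than appended, a small client-side log that rebuilds the full string each turn keeps accumulation manageable. A sketch — the `ToolResultLog` class is illustrative, not part of the SDK:

```python
class ToolResultLog:
    """Client-side accumulator for tool results, re-sent in full each turn."""

    def __init__(self, max_entries: int = 10):
        self.entries: list = []
        self.max_entries = max_entries

    def record(self, summary: str) -> None:
        self.entries.append(summary)
        # Drop the oldest entries so the system prompt stays bounded.
        self.entries = self.entries[-self.max_entries:]

    def as_system_prompt(self) -> str:
        lines = "\n".join(f"- {e}" for e in self.entries)
        return "Recent tool executions:\n" + lines

# After each tool run, between turns:
#   log.record("set_light(name='porch', on=true) -> OK")
#   agent.change_param("answer", "system_prompt", log.as_system_prompt())
```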
Alternative: a second ToolCall node in sequence
If you need the server to re-detect a tool call with the previous
result already in the prompt — e.g. for a "call tool, see result,
call another tool" chain in a single turn — put a second
ToolCall node downstream of the first. Most apps find the
one-shot client-side handler + change_param feedback simpler.
Verify it worked¶
Send a message that should trigger the tool (e.g. asking to turn the
porch light on). The server records the detected call in its log at
info level, and your tool-call callback (Python/C++) or OnToolCall
delegate (Unreal) fires with the tool name and its JSON arguments.
Then send a message that should not trigger a tool: it
goes through no_tool_called → Generate and produces a normal
answer.
Common pitfalls¶
- Format mismatch. The biggest reliability lever. Set
`tool_call_format` to match the model family or the per-model default
in `models.json`.
- Argument values are strings. `arguments_json` stores all values as
JSON strings, even booleans and numbers. Coerce in your client.
- Model invents tools. Always check `tool_name` against your
allow-list before acting. A hallucinated tool name must be a no-op.
- `notify_client=false` means no `ToolCallNotification` frame is sent,
so `set_on_tool_call` / `SetOnToolCall` / `OnToolCall` will never
fire. Useful if you only need the call recorded on the turn for
diagnostics without a client event. Set `"true"` for all three clients
when you need to act on the call.
- `generate_on_no_tool=true` (experimental) emits the model's text as
a normal answer when no tool was detected. Pick `true` if your
`ToolCall` node stands in for a `Generate` node; pick `false` if a
separate `Generate` node runs on `no_tool_called`.
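The string-coercion and allow-list pitfalls can be handled together with a small guard layer in front of your real handlers. A sketch, with an illustrative coercion scheme of my own (the helper names are not SDK behaviour):

```python
import json

# Allow-list of tools and their expected argument types. A tool name
# not present here is treated as hallucinated and ignored.
ALLOWED_TOOLS = {
    "set_light": {"name": str, "on": bool},
}

def coerce(value, target_type):
    # arguments_json carries every value as a string; coerce explicitly.
    if target_type is bool:
        return str(value).lower() in ("true", "1")
    return target_type(value)

def safe_parse_tool_call(tool_name: str, arguments_json: str):
    schema = ALLOWED_TOOLS.get(tool_name)
    if schema is None:
        return None  # hallucinated tool name: no-op
    raw = json.loads(arguments_json)
    return {key: coerce(raw[key], typ)
            for key, typ in schema.items() if key in raw}
```

Call it first in your tool-call handler and act only on a non-`None` result.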