Define and Handle Tool Calls

Declare a tool the model can "call", ship it with a ToolCall node, receive ToolCallNotification frames on the client, and route the graph on whether a call happened.

Prerequisites

  • A session connected and configured.
  • A language model whose chat template family you know (Llama-3, ChatML, Mistral, …). See Tool Calling concept for the four format families Tryll supports.

The pattern

flowchart LR
    tc["ToolCall<br>detect"]
    gen["Generate<br>answer"]
    tc -- "tool_called" --> END
    tc -- "no_tool_called" --> gen
    gen -- "default" --> END

If the model emitted a tool call, we fire the notification and stop (the client will run the tool and send the result back in a later turn). If no tool call was detected, a Generate node answers normally.

Step 1 — declare the tools

Tools are declared per-node with ToolDef + ToolParamDef:

Python:

from tryll_client import ToolDef, ToolParamDef

set_light = ToolDef(
    name="set_light",
    description="Turn a named light on or off.",
    parameters=[
        ToolParamDef("name", "string",
            "Human-readable name of the light, e.g. 'porch'."),
        ToolParamDef("on",   "boolean",
            "true to turn on, false to turn off."),
    ],
)

C++:

namespace TC = Tryll::Client;

TC::ToolDef setLight{
    "set_light",
    "Turn a named light on or off.",
    {
        {"name", "string",  "Human-readable name of the light."},
        {"on",   "boolean", "true to turn on, false to turn off."},
    },
};

Unreal: author FTryllToolDefinition entries inside your UTryllWorkflowAsset (Content Browser data asset) alongside the graph, or build them at runtime in C++ and assign them to the UTryllAgentComponent's graph.

Step 2 — build the graph with a ToolCall node

Python:

from tryll_client import GraphDescription, NodeType

graph = (
    GraphDescription()
    .add_tool_call_node("detect", [set_light], {
        "tool_call_format":   "llama3",        # match the model family
        "notify_client":      "true",
        "generate_on_no_tool": "false",
        "system_prompt":      "You are a smart-home controller.",
    })
    .add_node("answer", NodeType.Generate)
    .wire("detect", "tool_called",    "END")
    .wire("detect", "no_tool_called", "answer")
    .wire("answer", "default",        "END")
    .set_start_node("detect")
    .set_default_model_name("Llama-3.2-3B-Instruct")
)

agent = client.create_agent(graph)

C++:

TC::GraphDescription graph;
graph.AddToolCallNode("detect", {setLight}, {
        {"tool_call_format",    "llama3"},
        {"notify_client",       "true"},
        {"generate_on_no_tool", "false"},
        {"system_prompt",       "You are a smart-home controller."},
     })
     .AddNode("answer", TC::NodeType::Generate)
     .Wire("detect", "tool_called",    "END")
     .Wire("detect", "no_tool_called", "answer")
     .Wire("answer", "default",        "END")
     .SetStartNode("detect")
     .SetDefaultModelName("Llama-3.2-3B-Instruct");

auto agent = client.CreateAgent(graph);

See the full param list in ToolCall node reference.

Step 3 — receive the notification client-side

Register a callback on the AgentProxy before the first send_message call. The callback fires on the reader / background thread for every ToolCallNotification frame the server sends — keep it short and non-blocking.

Python:

import json

def on_tool_call(tool_name: str, arguments_json: str) -> None:
    args = json.loads(arguments_json)
    if tool_name == "set_light":
        print(f"[tool] set_light name={args['name']} on={args['on']}")
        # execute the real action here

agent.set_on_tool_call(on_tool_call)

The callback receives (tool_name: str, arguments_json: str). Call agent.set_on_tool_call(None) to unregister.
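
If the real tool does slow work (network or device I/O), hand the call off to your own loop rather than running it inside the callback. A minimal non-blocking sketch, assuming a standard queue handoff in your application (the queue is not part of the Tryll API):

import json
import queue

tool_queue: "queue.Queue[tuple[str, dict]]" = queue.Queue()

def on_tool_call(tool_name: str, arguments_json: str) -> None:
    # Runs on the reader thread: parse and enqueue only, nothing blocking.
    tool_queue.put((tool_name, json.loads(arguments_json)))

agent.set_on_tool_call(on_tool_call)

# On your main loop / worker thread:
#   tool_name, args = tool_queue.get()
#   ...run the real tool, then feed the result back (Step 4)...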

C++:

agent.SetOnToolCall(
    [](std::string_view toolName, std::string_view argsJson)
    {
        // Fires on the reader thread — keep this non-blocking.
        if (toolName == "set_light")
        {
            // parse argsJson (e.g. with nlohmann::json or similar)
            std::cout << "[tool] set_light args=" << argsJson << "\n";
        }
    });

Pass an empty (default-constructed) AgentProxy::ToolCallCallback to unregister: agent.SetOnToolCall({});

Unreal: bind On Tool Call on UTryllSubsystem to a Blueprint or C++ handler. The delegate signature is (int64 AgentId, const FString& ToolName, const FString& ArgumentsJson); parse the JSON with FJsonSerializer or your preferred library.

auto* subsystem = GetGameInstance()->GetSubsystem<UTryllSubsystem>();
subsystem->OnToolCall.AddDynamic(this, &ThisClass::HandleToolCall);

void AThisClass::HandleToolCall(int64 AgentId,
                                const FString& ToolName,
                                const FString& ArgumentsJson)
{
    if (ToolName == TEXT("set_light")) { /* parse JSON, act */ }
}

Note: the Unreal delegate is session-level (one binding on the subsystem receives calls for all agents, with AgentId to distinguish them). The C++ and Python callbacks are per-agent (registered on each AgentProxy).

Step 4 — feed the result back (optional)

Tryll does not keep a dedicated "tool result" channel. The cleanest way to give the model the result is to push it into a downstream node's system_prompt via ChangeParam, then continue the conversation normally. The tool result lives as system context (authoritative metadata) instead of polluting the dialog history with fake user turns.

The flow:

  1. Your tool-call handler runs the real tool and captures the result.
  2. Before the next send_message, call change_param on the answer node (or whichever downstream node should see the context) to update its system_prompt.
  3. Send the next user message — the Generate node now has the tool result available in its prompt.

Python:

# Inside your on_tool_call handler, after running the real tool:
agent.change_param(
    "answer",
    "system_prompt",
    "Recent tool executions:\n"
    "- set_light(name='porch', on=true) → OK, porch light is now on.",
)

# Continue the conversation. The Generate node's new system_prompt
# takes effect on the next send_message.
reply = agent.send_message("Done — anything else?")

C++:

// Inside your SetOnToolCall callback, after running the real tool:
agent.ChangeParam("answer", "system_prompt",
    "Recent tool executions:\n"
    "- set_light(name='porch', on=true) → OK, porch light is now on.");

agent.SendText("Done — anything else?",
    [](std::string_view text, bool, bool)
    { std::cout << text << std::flush; });

Unreal: from your OnToolCall handler, call UTryllAgentComponent::ChangeParam with NodeName="answer", ParamKey="system_prompt", and a string summarising the tool result. Then call SendMessage to continue the turn.

A few things to know:

  • change_param replaces the stored value, it does not append. If you want multiple tool results accumulated over several turns, keep the full "Recent tool executions:" string on your client and re-send it each time (see the sketch after this list).
  • Updating system_prompt does not flush the KV cache immediately — it is re-decoded during the next SendMessage, so a back-to-back change_param + send_message pair only costs one re-decode. See Change Agent Parameters → system_prompt and KV-cache rewind.
  • change_param fails with error 3004 AgentBusy if a turn is in flight. Call it between turns, not from inside a streaming callback without first awaiting TurnComplete.
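
A minimal accumulator for the multi-turn case (the tool_log list and its formatting are illustrative, not part of the client API):

tool_log: list[str] = []

def record_tool_result(entry: str) -> None:
    # change_param replaces the value, so re-send the whole log each time.
    tool_log.append(entry)
    agent.change_param(
        "answer",
        "system_prompt",
        "Recent tool executions:\n" + "\n".join(f"- {e}" for e in tool_log),
    )

record_tool_result("set_light(name='porch', on=true) → OK, porch light is now on.")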

Alternative: a second ToolCall node in sequence

If you need the server to redetect a tool call with the previous result already in prompt — e.g. for a "call tool, see result, call another tool" chain in a single turn — put a second ToolCall node downstream of the first. Most apps find the one-shot client-side handler + change_param feedback simpler.
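
A sketch of that shape using the same builder API; the detect2 node, its params, and the wiring are assumptions about one reasonable topology, not a prescribed pattern:

graph = (
    GraphDescription()
    .add_tool_call_node("detect", [set_light], {
        "tool_call_format": "llama3",
        "notify_client":    "true",
    })
    .add_tool_call_node("detect2", [set_light], {
        "tool_call_format": "llama3",
        "notify_client":    "true",
    })
    .add_node("answer", NodeType.Generate)
    .wire("detect",  "tool_called",    "detect2")   # re-detect after the first call
    .wire("detect",  "no_tool_called", "answer")
    .wire("detect2", "tool_called",    "END")
    .wire("detect2", "no_tool_called", "answer")
    .wire("answer",  "default",        "END")
    .set_start_node("detect")
    .set_default_model_name("Llama-3.2-3B-Instruct")
)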

Verify it worked

Send a message that should trigger the tool:

Python:

agent.send_message("Please turn on the porch light.")

C++:

agent.SendText("Please turn on the porch light.",
    [](std::string_view, bool, bool) {});

Unreal: call UTryllAgentComponent::SendMessage("Please turn on the porch light.").

Server log at info:

[info] Node detect: tool_called name=set_light
[info] ToolCallNotification fired

Your tool-call callback (Python/C++) or OnToolCall delegate (Unreal) fires with:

tool_name      = "set_light"
arguments_json = {"name": "porch", "on": "true"}

Then:

Python:

reply = agent.send_message("How's the weather?")

C++:

agent.SendText("How's the weather?",
    [](std::string_view text, bool, bool)
    { std::cout << text << std::flush; });

Unreal: UTryllAgentComponent::SendMessage("How's the weather?") — the streamed reply arrives through On Answer Text.

This second message takes the no_tool_called edge into the Generate node and produces a normal answer.
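
For an automated check, a minimal smoke test in Python, assuming the client API shown above (the 30-second timeout is arbitrary):

import threading

fired = threading.Event()

def on_tool_call(tool_name: str, arguments_json: str) -> None:
    # Record that the notification arrived; assertions happen off this thread.
    if tool_name == "set_light":
        fired.set()

agent.set_on_tool_call(on_tool_call)
agent.send_message("Please turn on the porch light.")
assert fired.wait(timeout=30), "expected a set_light tool call"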

Common pitfalls

  • Format mismatch. The biggest reliability lever. Set tool_call_format to match the model family or the per-model default in models.json.
  • Argument values are strings. arguments_json stores all values as JSON strings, even booleans and numbers. Coerce in your client (see the sketch after this list).
  • Model invents tools. Always check tool_name against your allow-list before acting. A hallucinated tool name must be a no-op.
  • notify_client=false means no ToolCallNotification frame is sent, so set_on_tool_call / SetOnToolCall / OnToolCall will never fire. Useful if you only need the call recorded on the turn for diagnostics without a client event. Set "true" for all three clients when you need to act on the call.
  • generate_on_no_tool=true (experimental) emits the model's text as a normal answer when no tool was detected. Pick true if your ToolCall node stands in for a Generate node; pick false if a separate Generate node runs on no_tool_called.
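
A combined coercion and allow-list sketch for the two pitfalls above; KNOWN_TOOLS and the boolean table are illustrative, not part of the client API:

import json

KNOWN_TOOLS = {"set_light"}
BOOLS = {"true": True, "false": False}

def on_tool_call(tool_name: str, arguments_json: str) -> None:
    if tool_name not in KNOWN_TOOLS:
        return  # hallucinated tool name: deliberate no-op
    args = json.loads(arguments_json)   # every value arrives as a string
    name = args["name"]
    on = BOOLS.get(args["on"].lower(), False)
    print(f"[tool] set_light name={name} on={on}")

agent.set_on_tool_call(on_tool_call)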