Tool Call¶
The Tool Call node uses language-model inference to detect whether the model wants to call an external function, given a set of tool definitions. It does not execute the tool — the server is agnostic to the tool's implementation; the detected call is surfaced to the client, which runs the tool and feeds the result back into the next turn.
This detection-only contract makes the node safe to run on-device with no special sandboxing: the server never touches the client's tool implementation.
NodeType: `ToolCall`.
Parameters¶
All parameters are optional. Each is a string key/value pair on
`NodeDescription.params`. The Mutable column indicates whether a parameter can
be changed at runtime via `ChangeParam`.
| Key | Default | Mutable | Description |
|---|---|---|---|
| `model_name` | (agent default) | No | Catalog name of the language model to run the detection pass on. Falls back to the graph's `default_model_name`. |
| `tool_call_format` | (model catalog, else `"chatml"`) | No | Prompt format for tool definitions and output parsing. One of `"chatml"`, `"llama3"`, `"mistral"`, `"generic"`. If omitted, falls back to the model catalog's `tool_call_format`, then to `"chatml"`. |
| `system_prompt` | `""` | Yes | User-supplied system prompt injected before the tool-definition block. The KV cache is rewound lazily on the next `SendMessage` call. |
| `generate_on_no_tool` | `"false"` | Yes | (Experimental.) When `"true"` and no tool call is detected, the node also emits a plain-text reply (identical in shape to a Generate node output) so the client sees a fallback answer. |
| `notify_client` | `"false"` | Yes | When `"true"`, each detected tool call also fires an unsolicited `ToolCallNotification` frame to the client in real time, in addition to being recorded on the turn. |
Tool definitions themselves live on `NodeDescription.tools`, not in
`params`; see `ToolDef`.
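To make the tool-definition side concrete, here is an illustrative sketch of what a chatml-style tool block can look like once rendered into the prompt. The server does this rendering internally; the `render_tools_chatml` helper and the exact markers below are assumptions for illustration, not the server's actual implementation.

```python
import json

# Illustrative only: the server renders ToolDefs internally. This sketch shows
# roughly what a chatml-style tool block can look like inside the prompt.
def render_tools_chatml(tools):
    """Render tool definitions as a JSON block between <tools> markers."""
    specs = [
        {
            "name": t["name"],
            "description": t["description"],
            "parameters": {
                p["name"]: {"type": p["type"], "description": p["description"]}
                for p in t["parameters"]
            },
        }
        for t in tools
    ]
    return "<tools>\n" + json.dumps(specs, indent=2) + "\n</tools>"

tools = [{
    "name": "get_weather",
    "description": "Get the current weather for a city.",
    "parameters": [
        {"name": "city", "type": "string", "description": "City name"},
    ],
}]
block = render_tools_chatml(tools)
```

The point is only that each `ToolDef` becomes structured text the model can read; the real format is chosen by `tool_call_format`.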
Sampling overrides¶
Same fields as Generate:
`temperature`, `top_p`, `top_k`, `min_p`, `max_tokens`, `seed`,
`repeat_penalty`, `presence_penalty`, `frequency_penalty`.
All sampling parameters are mutable at runtime.
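Since every node parameter is a string key/value pair, sampling overrides travel in the same map as the node's other params. A sketch (the specific values here are hypothetical, not recommendations):

```python
# Sampling overrides share the params map with the node's own keys.
# All values are strings; the numbers below are hypothetical examples.
params = {
    "tool_call_format": "chatml",
    "temperature": "0.2",   # low temperature for more deterministic detection
    "top_p": "0.9",
    "max_tokens": "256",
    "seed": "42",
}
```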
Exit routes¶
| Route | Fires when |
|---|---|
| `tool_called` | At least one tool call was detected in the model's output. |
| `no_tool_called` | The model produced no parseable tool call. |
Both routes must be wired.
Side effects¶
- Records every detected tool call on the current turn (one record per call; a single turn may produce more than one).
- When `generate_on_no_tool = "true"` (experimental) and no tool is detected, additionally appends the model's text as the assistant's reply and fires `AnswerText` chunks (as a Generate node would).
- When `notify_client = "true"`, fires one `ToolCallNotification` per detected call, out-of-band, before the turn completes.
Tool-call formats¶
| Format | Family | Typical models |
|---|---|---|
| `chatml` | OpenAI / Qwen / default | Qwen2.5, OpenAI-compatible chat models. |
| `llama3` | Llama 3 | Meta Llama 3 / 3.1 / 3.2. |
| `mistral` | Mistral function-calling | Mistral-Instruct v0.3+. |
| `generic` | JSON-in-text | Fallback for models with no native format. |
The format controls:
- How tool definitions are rendered into the prompt.
- How the model's output is parsed to extract calls.
Pick the format matching your model's training. If unsure, start with
`chatml` and inspect `debug_info.tool_call_format` and the raw model
output in diagnostics.
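As a rough intuition for what the `generic` (JSON-in-text) format implies, the sketch below scans raw model output for a JSON object with `name`/`arguments` keys. The server's actual parser is internal and certainly more robust; this only illustrates the idea behind the fallback format.

```python
import json
import re

# Illustrative "generic" detection: find a JSON object with "name" and
# "arguments" keys anywhere in the model's raw text output.
def extract_generic_tool_call(text):
    for match in re.finditer(r"\{.*\}", text, re.DOTALL):
        try:
            obj = json.loads(match.group(0))
        except json.JSONDecodeError:
            continue
        if isinstance(obj, dict) and "name" in obj and "arguments" in obj:
            return obj
    return None  # no parseable tool call -> the no_tool_called route

raw = 'Sure. {"name": "get_weather", "arguments": {"city": "Paris"}}'
call = extract_generic_tool_call(raw)
```

A native format like `chatml` or `llama3` instead relies on the model's trained special tokens, which is why matching the format to the model matters.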
Diagnostics¶
When `enable_diagnostics = true`, the node contributes these keys to
`TurnComplete.debug_info`:
| Key | Meaning |
|---|---|
| `tool_call_format` | The format name that was actually used. |
| `tool_count` | Number of tool definitions passed to the model. |
| `generate_on_no_tool` | Current value of the param. (Experimental.) |
| (plus sampling / prompt diagnostics shared with other inference nodes) | — |
Minimum working example¶
```python
from tryll_client import (
    GraphDescription, NodeType, ToolDef, ToolParamDef,
)

tools = [
    ToolDef(
        name="get_weather",
        description="Get the current weather for a city.",
        parameters=[
            ToolParamDef(name="city", type="string",
                         description="City name"),
        ],
    ),
]

graph = (
    GraphDescription()
    .add_tool_call_node("detect", tools, {
        "tool_call_format": "chatml",
        "notify_client": "true",
    })
    .add_node("answer", NodeType.Generate)
    .wire("detect", "tool_called", "END")  # client handles the tool
    .wire("detect", "no_tool_called", "answer")
    .wire("answer", "default", "END")
    .set_start_node("detect")
    .set_default_model_name("Qwen2.5-3B-Instruct")
)

agent = client.create_agent(graph)  # `client` is an existing connected client
```
```cpp
namespace TC = Tryll::Client;

TC::ToolDef getWeather{
    "get_weather",
    "Get the current weather for a city.",
    { {"city", "string", "City name"} },
};

TC::GraphDescription graph;
graph.AddToolCallNode("detect", {getWeather}, {
        {"tool_call_format", "chatml"},
        {"notify_client", "true"},
    })
    .AddNode("answer", TC::NodeType::Generate)
    .Wire("detect", "tool_called", "END")  // client handles the tool
    .Wire("detect", "no_tool_called", "answer")
    .Wire("answer", "default", "END")
    .SetStartNode("detect")
    .SetDefaultModelName("Qwen2.5-3B-Instruct");

auto agent = client.CreateAgent(graph);
```
```cpp
FTryllToolDefinition GetWeather;
GetWeather.Name = TEXT("get_weather");
GetWeather.Description = TEXT("Get the current weather for a city.");
GetWeather.Parameters.Add({TEXT("city"), TEXT("string"), TEXT("City name")});

FTryllGraphDescription Graph = FTryllGraphBuilder()
    .AddToolCallNode(TEXT("detect"), {GetWeather}, {
        {TEXT("tool_call_format"), TEXT("chatml")},
        {TEXT("notify_client"), TEXT("true")},
    })
    .AddNode(TEXT("answer"), ETryllNodeType::Generate)
    .Wire(TEXT("detect"), TEXT("tool_called"), TEXT("END"))
    .Wire(TEXT("detect"), TEXT("no_tool_called"), TEXT("answer"))
    .Wire(TEXT("answer"), TEXT("default"), TEXT("END"))
    .SetStartNode(TEXT("detect"))
    .SetDefaultModelName(TEXT("Qwen2.5-3B-Instruct"))
    .Build();
```
Or author `FTryllToolDefinition` entries and the node list inside a
`UTryllWorkflowAsset`; bind `UTryllSubsystem::OnToolCall` to
receive the detection.
Receive the notification in your client with `agent.set_on_tool_call(cb)` (Python),
`agent.SetOnToolCall(cb)` (C++), or `UTryllSubsystem::OnToolCall` (Unreal).
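Because the node only detects calls, the client owns execution. A minimal Python dispatch sketch is below; the payload shape delivered to the callback (a dict with `"name"` and `"arguments"`) and the result shape are assumptions for illustration.

```python
# Minimal client-side dispatch sketch. The callback payload shape shown here
# ({"name": ..., "arguments": ...}) is an assumption, not the documented wire
# format.
HANDLERS = {
    "get_weather": lambda args: f"Sunny in {args['city']}",
}

def on_tool_call(call):
    handler = HANDLERS.get(call["name"])
    if handler is None:
        return {"error": f"unknown tool {call['name']!r}"}
    return {"result": handler(call["arguments"])}

# agent.set_on_tool_call(on_tool_call)  # then feed the result into the next turn
result = on_tool_call({"name": "get_weather", "arguments": {"city": "Paris"}})
```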
See the full flow (including client-side execution and feeding the result back) in How to define and handle tool calls.
Client bindings¶
- C++: `Tryll::Client::GraphDescription::AddToolCallNode` — `GraphDescription.h`
- Python: `tryll_client.GraphDescription.add_tool_call_node` — `graph.py`
- Unreal: add an `FTryllNodeDesc` with `Type = ToolCall` (and `Tools` populated) to `FTryllGraphDescription.Nodes` — `TryllGraphDescription.h`