Lifetime and Ownership¶
Three kinds of resources in Tryll outlive any single wire request: string storages, embedded string storages, and language / embedding models. Agents reference them by name, often for the whole conversation. This page explains who owns each one, how long it stays alive, and what actually happens when you destroy it — so you can reason about the edge cases without reading the server source.
The dual-ownership pattern¶
Every resource of the three kinds follows the same shape:
- A manager owns a shared reference. The manager is the surface you see on the wire — it is what gives the resource a name and answers `Create…`/`Destroy…` requests.
- Each node that needs the resource is given its own shared reference at `CreateAgent` time.
- The resource lives as long as any reference still exists.
Consequence: destroying a storage from the client never yanks data out from under a live agent. A `Destroy…Request` drops only the manager's reference. Any agent whose nodes still hold the resource finishes cleanly; the bytes are freed once the last node releases them.
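The pattern can be sketched with plain Python object references, where CPython's reference counting stands in for the server's shared references. Every class and method name here is illustrative, not Tryll API:

```python
# Illustrative sketch only: models the dual-ownership rule with plain Python
# references, where "freed when the last reference drops" is played by
# CPython's reference counting. All names below are hypothetical.
import weakref


class StringStorage:
    def __init__(self, data):
        self.data = data


class Manager:
    """Owns one shared reference per named resource."""

    def __init__(self):
        self._by_name = {}

    def create(self, name, data):
        self._by_name[name] = StringStorage(data)
        return self._by_name[name]

    def destroy(self, name):
        # Drops only the manager's reference; nodes keep theirs.
        del self._by_name[name]


class Node:
    """A graph node that took its own shared reference at CreateAgent time."""

    def __init__(self, storage):
        self.storage = storage


mgr = Manager()
storage = mgr.create("faq", ["hello", "goodbye"])
node = Node(storage)            # agent node takes its own reference
alive = weakref.ref(storage)    # probe: is the object still alive?
del storage

mgr.destroy("faq")              # manager's reference gone...
assert alive() is not None      # ...but the node keeps the data alive
assert node.storage.data == ["hello", "goodbye"]

del node                        # DestroyAgentRequest analogue
assert alive() is None          # last reference gone → bytes freed
```

The asserts mirror the rule above: destroying the manager's side is always safe, and the data disappears only when the last holder does.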
The only difference between the three resource kinds is where the manager lives:
| Resource | Manager scope | Lives until |
|---|---|---|
| `StringStorage` | Session | The session closes, or `DestroyStringStorageRequest` plus destruction of the last agent holding it. |
| `EmbeddedStringStorage` | Session | The session closes, or `DestroyEmbeddedStringStorageRequest` plus destruction of the last agent holding it. |
| `Model` | Process (shared across sessions) | The server unloads it (after `UnloadModelRequest` for Pinned, or automatically for OnDemand once unused). |
The rest of this page walks through each case.
StringStorage¶
Created by `CreateStringStorageRequest`, a string storage is named within its owning session. At `CreateAgent` time, any `CannedResponse` or `HumanMessageGuardrail` node that names the storage in its `string_storage` param gets its own shared reference.
```mermaid
flowchart LR
  subgraph Session
    Mgr[StringStorageManager]
  end
  subgraph Agent
    N1[CannedResponse node]
    N2[HumanMessageGuardrail node]
  end
  Mgr -- "shared ref" --> S[(StringStorage)]
  N1 -- "shared ref" --> S
  N2 -- "shared ref" --> S
```
What each operation actually does:
- `DestroyStringStorageRequest` — drops only the manager's reference. The name is freed for reuse in the same session. Live nodes keep the data alive until their agent is destroyed.
- `DestroyAgentRequest` — drops each node's reference. If the manager has already been destroyed and no other agent holds the storage, the bytes are freed now.
- Session close — the manager is torn down, then every remaining agent is destroyed. The last reference goes with the last agent.
Updating a storage in place is not supported — create a new one under a different name (and, if needed, rebind via `change_param`) instead of mutating one that agents already hold.
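The replace-and-rebind flow can be sketched as below. The client method names (`create_string_storage`, `change_param`, `destroy_string_storage`) are hypothetical wrappers over the corresponding wire requests, not a documented Tryll client API:

```python
# Hypothetical client-side "replace" flow. The three method names are assumed
# wrappers over CreateStringStorageRequest, the change_param request, and
# DestroyStringStorageRequest; they are illustrative, not a real client API.
def swap_storage(client, agent_id, node_id, old_name, new_name, new_strings):
    # 1. New name, new bytes — never mutate a storage agents already hold.
    client.create_string_storage(new_name, new_strings)
    # 2. Rebind the live node's string_storage param to the new storage.
    client.change_param(agent_id, node_id, "string_storage", new_name)
    # 3. Safe at any time: agents still holding the old storage finish
    #    cleanly on the old data; only the manager's reference is dropped.
    client.destroy_string_storage(old_name)
```

The ordering matters only for step 2 vs. step 3 on the same node: rebind first so new turns see the new data, then destroy the old name.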
EmbeddedStringStorage¶
Same pattern. `CreateEmbeddedStringStorageRequest` hands the manager a shared reference; `Retrieve` nodes get their own at `CreateAgent` time.
```mermaid
flowchart LR
  subgraph Session
    Mgr[EmbeddedStringStorageManager]
  end
  subgraph Agent
    R[Retrieve node]
  end
  Mgr -- "shared ref" --> E[(EmbeddedStringStorage<br>records + HNSW index)]
  R -- "shared ref" --> E
```
The in-memory object — records, embeddings, HNSW index — follows the same dual-ownership rules as `StringStorage`.

On-disk artifacts are separate. A Path-A storage reads from a records file and (optionally) a pre-built `.usearch` index on the server's disk. Those files are not owned by the session; they remain on disk after the session ends and are reused on the next `CreateEmbeddedStringStorageRequest` that points at the same config. What gets rebuilt each session is the in-memory index object, not the cached bytes on disk.
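That split — durable artifacts on disk, a fresh in-memory object per session — is the standard cache-reuse pattern. A minimal sketch, with illustrative paths and `pickle` standing in for the real records and usearch formats:

```python
# Sketch of "reuse on-disk artifacts, rebuild the in-memory object".
# File formats and the build step are illustrative; a real Path-A storage
# would read its records file and a .usearch index, not pickles.
import os
import pickle


def load_embedded_storage(records_path, index_path, build_index):
    """Return (records, index), reusing the cached index file when present."""
    with open(records_path, "rb") as f:
        records = pickle.load(f)       # records file outlives every session
    if os.path.exists(index_path):
        with open(index_path, "rb") as f:
            index = pickle.load(f)     # cached bytes -> fresh in-memory object
    else:
        index = build_index(records)   # first session pays the build cost
        with open(index_path, "wb") as f:
            pickle.dump(index, f)      # later sessions reuse the artifact
    return records, index
```

Only the returned in-memory objects follow the shared-reference lifetime rules; the files stay put regardless of session churn.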
Models¶
Models follow the same manager + node pattern, but the manager lives on the server process, not on a single session. That is why pinning a model in one session keeps it loaded for every other session on the same server.
```mermaid
flowchart LR
  subgraph Server process
    MM[ModelManager]
  end
  subgraph Session A
    GA[Generate / ToolCall node]
  end
  subgraph Session B
    GB[Generate / ToolCall node]
  end
  MM -- "shared ref<br>(only while Pinned)" --> M[(Model)]
  GA -- "shared ref" --> M
  GB -- "shared ref" --> M
```
Two things decide how long the model stays resident:
- Retention mode. `LoadModelRequest` installs the model as Pinned — the `ModelManager` holds a reference of its own. A model loaded implicitly (because an agent's graph referenced it without a prior `LoadModelRequest`) is OnDemand — only the nodes hold references; the manager does not.
- Active nodes. Each `Generate`/`ToolCall` context holds a reference for the life of its agent.
The model is unloaded exactly when the last reference goes away:
- Pinned → released by `UnloadModelRequest`, but only after every node using it has been torn down. If agents are still active the request is acknowledged immediately and the actual unload is deferred until the last context drops.
- OnDemand → the server runs `EvictUnusedOnDemand` after every `DestroyAgentRequest`, freeing any OnDemand model whose reference count has fallen to zero. No explicit unload call is needed.
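The retention rules reduce to a per-model refcount check. A minimal sketch, assuming a node-reference count per model entry; the class and function names are illustrative:

```python
# Minimal sketch of the retention rules, assuming a per-model count of live
# node references. RetentionMode, ModelEntry, and evict_unused_on_demand are
# illustrative names, not Tryll's actual internals.
from enum import Enum


class RetentionMode(Enum):
    PINNED = "pinned"        # the manager holds a reference of its own
    ON_DEMAND = "on_demand"  # only nodes hold references


class ModelEntry:
    def __init__(self, mode):
        self.mode = mode
        self.node_refs = 0   # live Generate / ToolCall contexts


def evict_unused_on_demand(models):
    """Run after every DestroyAgentRequest: free idle OnDemand models."""
    for name in list(models):
        entry = models[name]
        if entry.mode is RetentionMode.ON_DEMAND and entry.node_refs == 0:
            del models[name]  # Pinned models survive even at zero node refs
```

Pinned entries never qualify for eviction, which is exactly why `UnloadModelRequest` exists: it is the only way to take the manager's own reference away.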
Session end¶
When the TCP connection closes (or the server shuts down), a single cleanup sequence runs for the session:
- Every active turn is cancelled.
- Every agent in the session is destroyed.
- The session's `StringStorageManager` and `EmbeddedStringStorageManager` are torn down, dropping their references.
- After the last agent is gone, `EvictUnusedOnDemand` frees any OnDemand models whose last user just left.
Pinned models survive — that is the whole point of pinning. A fresh session can reuse them without paying the load cost again.
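The cleanup order above can be sketched as a single function. All attribute and method names here are hypothetical stand-ins for the server's internals:

```python
# Illustrative session-close sequence, mirroring the four steps above.
# Every attribute and method name is hypothetical.
def close_session(session, server):
    for turn in session.active_turns:
        turn.cancel()                         # 1. cancel in-flight turns
    for agent in list(session.agents):
        session.destroy_agent(agent)          # 2. destroy every agent
    session.string_storage_manager.teardown()           # 3. drop manager refs
    session.embedded_string_storage_manager.teardown()
    server.model_manager.evict_unused_on_demand()       # 4. free idle OnDemand
```

Note that step 4 runs against the server-wide model manager, which is why Pinned models (and OnDemand models still used by other sessions) are untouched.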
Common questions¶
Can I `destroy_string_storage` while an agent is running?
Yes. The call is safe at any time. Live nodes keep the data alive for the rest of the agent's life; the storage name just becomes available for reuse in the same session.
Do I need to re-pin my model after each agent?
No. Pinned stays loaded across agent create / destroy cycles — and even across sessions on the same server process. Only `UnloadModelRequest` demotes it.
Is the Path-A `.usearch` index rebuilt every session?
No. The cached index on disk is reused; only the in-memory index object follows the normal shared-reference rules.
What if two agents reference the same storage?
Each holds its own reference. Destroying one agent drops that agent's reference; the other keeps using the storage normally.
Can I mutate a string storage in place?
No — there is no wire request to edit one. Create a new storage and, if a live agent needs the new content, use `change_param` to rebind the node's `string_storage` param.