# Advanced Concepts

## Data Flow

Data moves through a trill workflow via step outputs, job outputs, and template expressions. The pieces connect like this:
```
Step writes JSON to $STEP_OUTPUT_FILE
  → step outputs: {{ steps.<step>.outputs.<key> }}
  → job outputs mapping: outputs: { tag: "step.key" }
  → cross-job templates: {{ jobs.<job>.outputs.<key> }}
```
### Step Outputs

Every step gets a `$STEP_OUTPUT_FILE` environment variable pointing to a temporary file. Write a JSON object to it:

```yaml
steps:
  - name: docker_build
    run: |
      TAG="v$(date +%s)"
      SHA=$(git rev-parse --short HEAD)
      echo "{\"tag\": \"$TAG\", \"sha\": \"$SHA\"}" > "$STEP_OUTPUT_FILE"
```
Subsequent steps in the same job access these via `{{ steps.docker_build.outputs.tag }}`.
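When output values come from the environment, hand-assembling JSON with escaped quotes gets fragile. A sketch of a safer alternative, assuming `jq` is available on the runner (any JSON serializer works):

```shell
# Let jq build the JSON object so quoting and escaping are handled
# correctly even if TAG or SHA contain special characters.
TAG="v1.0"
SHA="abc123"
jq -n -c --arg tag "$TAG" --arg sha "$SHA" '{tag: $tag, sha: $sha}'
# → {"tag":"v1.0","sha":"abc123"}
```

Inside a step, redirect the result to `"$STEP_OUTPUT_FILE"` as before.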
Output values preserve their JSON types — numbers, booleans, arrays, and nested objects all work:
```yaml
steps:
  - name: analyze
    run: |
      echo '{"count": 42, "passed": true, "targets": ["api", "web"]}' \
        > "$STEP_OUTPUT_FILE"
  - name: report
    run: |
      echo "{{ steps.analyze.outputs.count }} tests"
      echo "Passed: {{ steps.analyze.outputs.passed }}"
      {% for t in steps.analyze.outputs.targets %}
      echo " - {{ t }}"
      {% endfor %}
```
### Job Outputs

To make step outputs available to other jobs, declare them at the job level:

```yaml
jobs:
  build:
    outputs:
      tag: "docker_build.tag"
      sha: "docker_build.sha"
    steps:
      - name: docker_build
        run: |
          echo "{\"tag\": \"v1.0\", \"sha\": \"abc123\"}" > "$STEP_OUTPUT_FILE"
  deploy:
    depends_on: [build]
    steps:
      - name: push
        run: echo "Deploying {{ jobs.build.outputs.tag }}"
```
The `outputs` map uses `step_name.key` format to reference step outputs.
## Expression Engine

Trill uses MiniJinja for template expressions. Templates are evaluated just before each step runs:

```yaml
run: echo "Image {{ jobs.build.outputs.tag }} at {{ jobs.build.outputs.sha }}"
```
Available context:
| Variable | Description |
|---|---|
| `steps.<name>.outputs.<key>` | Outputs from a previous step in the same job |
| `jobs.<name>.outputs.<key>` | Outputs from an upstream job |
| `jobs.<name>.status` | Status of an upstream job: `"success"`, `"failure"`, `"skipped"`, `"cancelled"` |
| `local` | `true` when running locally |
| `env.<name>` | Environment variable values |
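These combine into ordinary MiniJinja expressions. A hedged sketch (the job name and the `RELEASE_CHANNEL` variable are illustrative, not from a real workflow):

```yaml
jobs:
  release:
    depends_on: [build]
    # Skip real releases in local runs; branch on an environment variable
    if: "not local and env.RELEASE_CHANNEL == 'stable'"
    steps:
      - name: publish
        run: echo "Publishing {{ jobs.build.outputs.tag }}"
```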
### Filters

Trill includes built-in MiniJinja filters for working with structured data:

| Filter | Usage | Description |
|---|---|---|
| `tojson` | `{{ value \| tojson }}` | Serialize any value to a JSON string |
| `fromjson` | `{{ str \| fromjson }}` | Parse a JSON string into a structured value |
| `keys` | `{{ map \| keys }}` | Get the keys of a map as a list |
| `values` | `{{ map \| values }}` | Get the values of a map as a list |
| `merge` | `{{ base \| merge(other) }}` | Merge two maps (right-side wins) |
These are useful when outputs contain structured data:
```yaml
steps:
  - name: discover
    run: |
      echo '{"services": {"api": {"port": 3000}, "web": {"port": 8080}}}' \
        > "$STEP_OUTPUT_FILE"
  - name: deploy
    run: |
      {% for svc in steps.discover.outputs.services | keys %}
      echo "Deploying {{ svc }}"
      {% endfor %}
```
Round-trip structured data with `tojson` and `fromjson`:

```yaml
steps:
  - name: config
    run: |
      {% set defaults = '{"replicas": 1, "memory": "512m"}' | fromjson %}
      {% set overrides = '{"replicas": 3}' | fromjson %}
      {% set merged = defaults | merge(overrides) %}
      echo "Replicas: {{ merged.replicas }}"
```
## Conditionals

Jobs and steps support `if` conditions:

```yaml
jobs:
  deploy:
    if: "not local"
    steps:
      - name: push
        run: ./deploy.sh
  local_check:
    if: "local"
    steps:
      - name: verify
        run: ./check.sh
```
When a condition evaluates to false, the job or step is skipped (not failed). Downstream jobs that depend on a skipped job will still run.
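A sketch of that skip semantic (job names illustrative): in a local run `publish` is skipped, but `announce` still executes.

```yaml
jobs:
  publish:
    if: "not local"          # skipped in local runs, not failed
    steps:
      - name: push
        run: ./publish.sh
  announce:
    depends_on: [publish]    # runs even when publish was skipped
    steps:
      - name: post
        run: echo "pipeline finished"
```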
## Status Functions
By default, when a job fails, all downstream dependents are skipped immediately. Status functions let you override this behavior for common CI patterns like cleanup, notifications, and conditional recovery.
Four built-in functions are available in `if` conditions:

| Function | Returns true when |
|---|---|
| `always()` | Always (unconditionally) |
| `failure()` | Any dependency has status `"failure"` |
| `success()` | All dependencies have status `"success"` or `"skipped"` |
| `cancelled()` | Any dependency has status `"cancelled"` |
A job whose `if:` condition calls any of these four functions is called status-aware. Status-aware jobs have relaxed dependency rules: they wait for all dependencies to reach a terminal state (any outcome) rather than requiring success. They’re also exempt from two kinds of automatic skipping — skip propagation when a dependency fails, and sibling cancellation triggered by a `cancel_on_failure` job.
### Always Run (Cleanup)

```yaml
jobs:
  build:
    steps:
      - name: compile
        run: cargo build
  cleanup:
    depends_on: [build]
    if: "always()"
    steps:
      - name: teardown
        run: docker-compose down
```

`cleanup` runs regardless of whether `build` succeeds or fails.
### Notify on Failure

```yaml
jobs:
  test:
    steps:
      - name: run
        run: cargo test
  notify:
    depends_on: [test]
    if: "failure()"
    steps:
      - name: alert
        run: curl -X POST https://slack.example.com/webhook
```

`notify` only runs when `test` fails. If `test` succeeds, `notify` is skipped (its condition evaluates to false).
### Compound Conditions

Status functions can be combined with other expressions:

```yaml
jobs:
  alert:
    depends_on: [deploy]
    if: "failure() or cancelled()"
    steps:
      - name: notify
        run: echo "Deploy did not succeed"
```
### Edge Cases

- `failure()` with no dependencies returns `false`
- `success()` with no dependencies returns `true` (vacuous truth)
- `cancelled()` with no dependencies returns `false`
- `always()` ignores dependencies entirely and returns `true`
- A status-aware job whose condition evaluates to `false` is marked `Skipped` (same as any other conditional job)
- Dependents of a status-aware job wait for it normally
## Dynamic Extension

Steps can add new jobs to the workflow at runtime. This is for cases where you don’t know the full set of work upfront — service discovery, generated test suites, conditional fan-out.

A step writes YAML to `$STEP_EXTEND_FILE`:

```yaml
jobs:
  discover:
    steps:
      - name: find_services
        run: |
          cat > "$STEP_EXTEND_FILE" <<'YAML'
          jobs:
            deploy-api:
              depends_on: [discover]
              steps:
                - name: deploy
                  run: echo "Deploying API"
            deploy-web:
              depends_on: [discover]
              steps:
                - name: deploy
                  run: echo "Deploying Web"
          YAML
```
Trill reads the extension file after the step completes, validates the new jobs (no name conflicts, no cycles), and merges them into the live dependency graph. New jobs start executing immediately if their dependencies are satisfied.
Extension jobs can depend on existing jobs and other extension jobs. They can also declare their own outputs and environment variables.
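For instance, an extension file might declare both. A sketch (all names illustrative; the `env` key mirrors the environment-variable support mentioned above):

```yaml
# Written to $STEP_EXTEND_FILE by a discovery step
jobs:
  probe-api:
    depends_on: [discover]
    env:
      SERVICE: api
    outputs:
      port: "probe.port"
    steps:
      - name: probe
        run: echo '{"port": 3000}' > "$STEP_OUTPUT_FILE"
```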
## Signals

Signals are the delivery mechanism for wait steps. When a step has `type: wait` with a `signal` field, it blocks until a matching signal arrives. Signals can be sent from the CLI, an external script, a CI webhook, or any HTTP client.
### `trill signal` CLI Command

Send a signal to a running workflow:

```shell
trill signal <run-id> --name deploy-approved
```

Include a JSON data payload that becomes the wait step’s outputs:

```shell
trill signal abc123 --name deploy-approved --data '{"env": "production", "version": "v2.1.0"}'
```

For remote (server) mode, pass the server URL and auth token:

```shell
trill signal abc123 --name deploy-approved \
  --server http://localhost:3000 \
  --token trl_user_...
```
CLI flags:

| Flag | Required | Description |
|---|---|---|
| `<run-id>` | yes | The run ID to signal (positional argument) |
| `--name` | yes | Signal name (must match the `signal` field in a wait step) |
| `--data` | no | JSON object payload delivered as step outputs |
| `--server` | no | Server URL (omit for local mode) |
| `--token` | no | Auth token (required with `--server`) |
### Local Mode: Signal Files

In local mode (no server), signals use the filesystem. When a wait step blocks on a signal, trill polls for a file at:

```
.trill/signals/<run-id>/<signal-name>
```

The `trill signal` command creates this file with the JSON data as its contents. Once trill detects the file, it reads the data, deletes the file, and resumes the step.

This means any process that can write a file can send a signal — you don’t need the trill CLI:

```shell
mkdir -p .trill/signals/abc123
echo '{"status": "ready"}' > .trill/signals/abc123/deploy-approved
```

The poll interval is 500ms.
### Managed cloud: Signal API

When running against trill.build, signals are delivered via an HTTP endpoint:

```
POST /api/v1/runs/:id/signals/:signal
```

Request body (optional):

```json
{
  "data": {"env": "production"}
}
```

The endpoint resolves all pending waits for the given run that match the signal name. If no pending waits match, the request succeeds but `resolved` is 0. A `SignalReceived` event is emitted so any connected UI or event-stream consumer sees the resolution.

Response:

```json
{
  "resolved": 1
}
```

Authentication is required (Bearer token, same as other API endpoints).
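As a sketch, the call with `curl` might look like this (the trill.build base URL and the specific run id and signal name are placeholders; only the endpoint path and Bearer auth come from the spec above):

```shell
curl -s -X POST \
  -H "Authorization: Bearer $TRILL_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"data": {"env": "production"}}' \
  https://trill.build/api/v1/runs/abc123/signals/deploy-complete
```

A successful call returns the `resolved` count shown above.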
### Durable waits on trill.build

Duration and signal waits are durable against trill.build — the state of a paused job lives in the control plane, not in the runner agent’s memory. If your agent restarts, loses network, or is replaced mid-run, the wait survives. When the deadline expires (or a signal arrives), a runner — possibly a different one — picks the job back up and continues.

You don’t have to do anything to opt in; it’s how wait steps behave when a run is submitted with `--server`.
### Use Cases

**Deploy gates** — Wait for an external CI pipeline or deployment tool to signal readiness before running integration tests:

```yaml
jobs:
  deploy:
    steps:
      - name: trigger
        run: curl -X POST https://deploy.example.com/trigger
      - name: wait-for-deploy
        type: wait
        signal: deploy-complete
        timeout: 30m
      - name: smoke-test
        run: ./smoke-test.sh
```
**External system integration** — Pause for an external approval system, security scanner, or compliance check:

```yaml
steps:
  - name: scan
    run: ./trigger-security-scan.sh
  - name: wait-for-scan
    type: wait
    signal: security-cleared
    timeout: 1h
  - name: release
    run: ./release.sh
```
**Batch cooling periods** — Insert a fixed pause between stages to let metrics stabilize or rate limits reset:

```yaml
steps:
  - name: deploy-canary
    run: ./deploy.sh --canary
  - name: soak
    type: wait
    duration: 15m
  - name: promote
    run: ./deploy.sh --promote
```
**Combined gate** — Wait a minimum period, then require manual confirmation:

```yaml
steps:
  - name: stabilize
    type: wait
    duration: 10m
    signal: operator-confirm
    timeout: 2h
```

The duration runs first (10 minutes), then trill blocks until the `operator-confirm` signal arrives (or the 2-hour timeout expires).
## `trill approve` CLI Command

Submit approval decisions from the command line. This is useful for scripted approvals, CI integration, or any automation that needs to approve or reject a waiting step.

```shell
trill approve <run-id> --step approve-deploy --action approve \
  --server http://localhost:3000 \
  --token trl_user_...
```

Include outputs for approval fields:

```shell
trill approve abc123 --step approve-deploy --action approve \
  --outputs '{"environment": "production", "reason": "quarterly release"}' \
  --server http://localhost:3000 \
  --token trl_user_...
```

Reject a step:

```shell
trill approve abc123 --step approve-deploy --action reject \
  --server http://localhost:3000 \
  --token trl_user_...
```
CLI flags:
| Flag | Required | Description |
|---|---|---|
| `<run-id>` | yes | The run ID (positional argument) |
| `--step` | yes | Step name matching the approval step |
| `--action` | yes | `approve` or `reject` |
| `--outputs` | no | JSON object of approval field values |
| `--server` | yes | Server URL (server-only command) |
| `--token` | yes | Auth token (or set `TRILL_TOKEN` env var) |
## LLM Integration
Every interactive feature in trill has a JSON mode. This makes trill a natural tool for LLM agents that need to drive workflows programmatically.
### How It Works

Run with `--debug --json`:

```shell
trill run deploy.yaml --debug --json
```
The LLM agent reads JSON events from stdout and writes JSON responses to stdin. The protocol is line-delimited (NDJSON) — one JSON object per line.
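A minimal sketch of the read-dispatch-respond loop in shell, using `jq` (the two event lines are hand-written stand-ins for trill’s stdout, and the top-level `event` field name is an assumption about the payload shape):

```shell
# Stand-in events; a real driver reads these from `trill run --debug --json`
# and writes its responses back to trill's stdin.
printf '%s\n' \
  '{"event":"step_pending","step":"compile"}' \
  '{"event":"step_done","step":"compile","status":"success"}' |
while read -r line; do
  case "$(printf '%s' "$line" | jq -r '.event')" in
    step_pending) echo '{"action":"run"}' ;;
    step_done)    echo '{"action":"continue"}' ;;
  esac
done
# → {"action":"run"}
# → {"action":"continue"}
```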
### Workflow: LLM-Driven Deployment

Here’s a concrete example. The workflow builds and tests, then asks for deployment approval:

```yaml
jobs:
  build:
    steps:
      - name: compile
        run: cargo build --release
  test:
    depends_on: [build]
    steps:
      - name: run
        run: cargo test
  deploy:
    depends_on: [test]
    steps:
      - name: approve
        type: approval
        prompt: "Deploy to production?"
        fields:
          - name: environment
            type: select
            options: [staging, production]
          - name: reason
            type: text
            required: false
      - name: push
        run: |
          echo "Deploying to {{ steps.approve.outputs.environment }}"
```
An LLM agent can:

- Observe each step as it executes (read `step_pending` and `step_done` events)
- Decide whether to run or skip steps (respond with `{"action": "run"}` or `{"action": "skip"}`)
- React after step completion — continue, abort the workflow, or inject new jobs via `{"action": "extend", "jobs": {...}}`
- Approve deployments based on test results (respond to `approval_required` with `{"action": "approve", "outputs": {...}}`)
- Abort if something looks wrong (respond with `{"action": "quit"}` before a step, or `{"action": "abort"}` after a step)
The agent doesn’t need to parse terminal escape sequences or interact with a TUI. Everything is structured JSON.
### Protocol Summary

Debug events (stdout):

| Event | When | Key Fields |
|---|---|---|
| `step_pending` | Before step | `template`, `command`, `env`, `context`, `depends_on` |
| `step_done` | After step | `status`, `duration_ms`, `exit_code`, `outputs`, `extensions`, `output` |
| `approval_required` | Approval step | `prompt`, `fields` |
| `workflow_done` | Workflow complete | `success`, `duration_ms` |
Pre-step responses (stdin, after `step_pending`):

| Action | Effect |
|---|---|
| `run` | Execute the step |
| `skip` | Skip without running |
| `continue` | Run all remaining steps |
| `quit` | Abort workflow |
Post-step responses (stdin, after `step_done`):

| Action | Effect |
|---|---|
| `continue` | Proceed to the next step (default) |
| `extend` | Inject new jobs into the DAG (requires `jobs` field) |
| `abort` | Cancel remaining steps and abort workflow |
Approval responses (stdin, after `approval_required`):

| Action | Effect |
|---|---|
| `approve` | Approve with outputs |
| `reject` | Reject (step fails) |
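Tying this back to the deployment workflow above, a complete approval response line might look like this (field values illustrative):

```json
{"action": "approve", "outputs": {"environment": "production", "reason": "canary looks healthy"}}
```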
## Next Steps
This covers the workflow features available locally and on trill.build. See Defining Steps for the full set of step types including expression and HTTP steps, and Debugging for the interactive step-through debugger.