
If you’re hand‑editing prompts, hard‑coding URLs, or nudging AI settings every time you deploy, you’re doing extra work. n8n Expressions let you template once, map fields cleanly, and ship workflows that behave the same in dev, staging, and prod.
What “Expressions” really give you
Expressions make any node parameter dynamic. You can pull values from earlier nodes, the running workflow, or your environment and then transform them with plain JavaScript. Think {{ $json.title }}, {{ $node['Search'].json.results[0] }}, or {{ $env.BASE_URL }}.
Core sources you’ll use most
- $json for the current item’s data.
- $node['Node Name'] or the shorthand $('Node Name').item.json to reach into earlier nodes.
- $workflow, $execution for metadata.
- $env for instance env vars, and $vars for admin‑defined constants if your plan supports it.
Template prompts (so they’re not brittle)
Treat prompts like any other payload: template them with expressions and keep the structure consistent.
System prompt (example): Paste this into an AI node’s “System/Instructions” field in expression mode:
You return compact, valid JSON only.
brand: {{ $vars.BRAND_NAME ?? 'acme' }}
timezone: {{ $env.TZ ?? 'UTC' }}
required_keys: ["summary","tags","next_action"]
User prompt (example)
Summarize this ticket in 2 sentences and extract tags.
title: {{ $json.title }}
body: {{ $json.body }}
Return:
{
"summary": string,
"tags": string[],
"next_action": string
}
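Since the prompt demands exact keys, it's worth catching malformed replies before they hit downstream nodes. A minimal sketch for an n8n Code node or plain JavaScript — the function name is hypothetical, and the required-keys list mirrors the prompt contract above:

```javascript
// Hypothetical helper: checks an AI reply against the prompt's contract.
// The key list mirrors required_keys from the system prompt above.
const REQUIRED_KEYS = ['summary', 'tags', 'next_action'];

function validateReply(raw) {
  let parsed;
  try {
    parsed = JSON.parse(raw);
  } catch (e) {
    return { ok: false, error: 'not valid JSON' };
  }
  const missing = REQUIRED_KEYS.filter((k) => !(k in parsed));
  if (missing.length > 0) {
    return { ok: false, error: `missing keys: ${missing.join(', ')}` };
  }
  return { ok: true, data: parsed };
}
```

Route the `ok: false` case to a retry or error branch instead of letting a half-formed reply flow onward.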
To keep the AI consistent across runs, set a low temperature (0-0.3) in the model sub‑node or OpenAI node. n8n exposes Temperature/Top‑P on chat model nodes; lower values reduce randomness.
Need stronger structure? Use n8n’s Information Extractor node or a community “structured outputs” node to enforce JSON shapes when a schema is required.
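If you go the schema route, the shape implied by the example prompt could be written as a JSON Schema like this (a sketch — the field names come from the prompt above; adapt to whatever schema format your node expects):

```json
{
  "type": "object",
  "properties": {
    "summary": { "type": "string" },
    "tags": { "type": "array", "items": { "type": "string" } },
    "next_action": { "type": "string" }
  },
  "required": ["summary", "tags", "next_action"]
}
```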
Map fields into clean payloads (no glue code)
Before you call an AI or API node, normalize data with a Set node in expression mode:
{{
{
userId: $json.user.id,
total: Number($json.cart?.total ?? 0),
items: ($json.cart?.items ?? []).map(i => ({ sku: i.sku, qty: Number(i.qty ?? 0) })),
source: $workflow.name,
receivedAt: (new Date()).toISOString()
}
}}
This keeps downstream nodes simple and prevents “works on my machine” surprises.
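Outside of n8n, the same mapping is plain JavaScript you can unit-test. A sketch where `item` stands in for $json and `workflowName` for $workflow.name (both names are illustrative):

```javascript
// Plain-JS mirror of the Set-node expression above.
// `item` plays the role of $json; `workflowName` plays $workflow.name.
function normalizeOrder(item, workflowName) {
  return {
    userId: item.user.id,
    total: Number(item.cart?.total ?? 0),
    items: (item.cart?.items ?? []).map((i) => ({
      sku: i.sku,
      qty: Number(i.qty ?? 0),
    })),
    source: workflowName,
    receivedAt: new Date().toISOString(),
  };
}
```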
Keep runs predictable across environments
- Parameterize your base URLs and flags. Put constants in $vars (admin‑managed, read‑only) or fall back to $env. For example: {{ ($vars.API_BASE ?? $env.API_BASE_URL) + '/v1/orders' }}. $vars is part of n8n’s Environments feature and $env exposes instance configuration variables.
- Control randomness. As above, set low Temperature/steady Top‑P on model nodes for repeatable responses.
- Persist cross‑run state deliberately. When you need counters or “last processed” timestamps, use getWorkflowStaticData in a Code node rather than sneaking values through prompts.
// Code node (mode: Run Once for All Items)
const staticData = $getWorkflowStaticData('global');
staticData.lastRunAt = new Date().toISOString();
return $input.all();
- Don’t trust preview for env vars. Design‑time previews can differ from run‑time for $env, and admins can block env access. Verify at execution time if something looks blank.
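The base-URL fallback from the first bullet boils down to a nullish-coalescing chain. A sketch in plain JavaScript, where `vars` and `env` stand in for n8n's $vars and $env (illustrative names):

```javascript
// Prefer the admin-managed value ($vars), fall back to the instance
// env var ($env), and fail loudly if neither is configured.
function ordersUrl(vars, env) {
  const base = vars.API_BASE ?? env.API_BASE_URL;
  if (!base) throw new Error('API base URL is not configured');
  return base + '/v1/orders';
}
```

Failing loudly when both sources are empty is the point: a blank base URL at run time is exactly the "looks fine in preview" trap described above.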
Copy‑paste cheat sheet
Current item: {{ $json.field }}
Previous node: {{ $node['HTTP Request'].json.data.id }}
Shorthand previous: {{ $('HTTP Request').item.json.data.id }}
Env variable: {{ $env.BASE_URL }}
Instance variable: {{ $vars.BRAND_NAME }} // if Environments is enabled
Default + cast: {{ Number($json.amount ?? 0) }}
First match in array: {{ ($json.items ?? []).find(i => i.status === 'ok') }}
ISO timestamp: {{ (new Date()).toISOString() }}
Expressions are your portability layer. Template the prompt, shape a payload, stabilize the environment, and your AI workflows stop drifting from dev to prod.
Like always, I hope this helps. If you have n8n or any other automation issues, don’t hesitate to reach out.