
- Pick n8n Cloud for the fastest start or Docker if you want full control.
- Add OpenAI credentials in n8n.
- Build a 4‑node flow: Webhook → OpenAI Chat Model (JSON) → Structured Output Parser → Respond to Webhook.
- Return clean, valid application/json to your caller.
1) Choose where to run n8n
Option A: n8n Cloud. Spin up an editor in minutes, with no servers to maintain. If you’re new to n8n or want a low‑ops path, start here.
Option B: Self‑host with Docker. If you need custom networking, private nodes, or compliance controls, use Docker/Compose. n8n’s guide includes a production‑ready Compose file with Traefik and volumes; note that the SQLite DB and your encryption key live in /home/node/.n8n on the n8n_data volume.
Tip: Set a stable encryption key so saved credentials survive restarts. Use N8N_ENCRYPTION_KEY (env var or config) instead of letting n8n auto‑generate one.
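One way to do this (a sketch; the key value and file location are up to you) is to generate a key once with openssl and keep it in a .env file that Docker Compose reads:

```shell
# Generate a random key once and store it in .env. Any sufficiently long
# random string works; openssl is just one convenient generator.
key="$(openssl rand -base64 32)"
printf 'N8N_ENCRYPTION_KEY=%s\n' "$key" > .env
```

Keep .env out of version control, and back the key up: losing it means losing access to saved credentials.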
2) Wire up credentials and secrets (safely)
- In the left rail, open Credentials → create OpenAI → paste your API key. This credential can be reused by any OpenAI node.
- Prefer environment‑based configuration for sensitive settings. n8n supports both env files and _FILE variants for secret mounting. If you use containers or K8s, this maps cleanly to Docker/Kubernetes secrets.
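As an illustration, here is a Compose fragment using n8n’s _FILE suffix to read the encryption key from a Docker secret (the file paths and secret name are illustrative, not required values):

```yaml
# Sketch: n8n reads the key from a mounted secret file instead of a plain env var.
services:
  n8n:
    image: docker.n8n.io/n8nio/n8n:latest
    environment:
      - N8N_ENCRYPTION_KEY_FILE=/run/secrets/n8n_encryption_key
    secrets:
      - n8n_encryption_key

secrets:
  n8n_encryption_key:
    file: ./secrets/n8n_encryption_key.txt   # illustrative path
```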
3) Build the end‑to‑end AI flow
Node 1: Webhook (Trigger). Purpose: make your flow callable like an API. Set Respond to Using “Respond to Webhook” Node (we’ll control the response later). Use the Test URL while iterating, then switch to the Production URL when you’re ready.
Node 2: OpenAI Chat Model. Purpose: call the LLM. Choose your model, then set Response Format = JSON so the model returns valid JSON.
Node 3: Structured Output Parser. Purpose: enforce a schema (field names, types, required keys) and output a clean object. Choose Define using JSON Schema and paste your schema (example below). This node maps model output into a predictable structure.
Node 4: Respond to Webhook. Purpose: send an HTTP response. Set Response Code to 200, Response Headers to Content-Type: application/json, and Response Body to the parser’s output (use an expression like {{$json}}).
Alternative: You can skip the Respond node and have the Webhook return the last node’s JSON directly by setting Respond → When Last Node Finishes and Response Data → First Entry JSON.
4) Copy‑paste schema and prompt
Example JSON Schema (drop this into the Structured Output Parser):
```json
{
  "type": "object",
  "properties": {
    "topic": { "type": "string" },
    "summary": { "type": "string" },
    "tags": { "type": "array", "items": { "type": "string" } },
    "confidence": { "type": "number" }
  },
  "required": ["topic", "summary", "tags", "confidence"],
  "additionalProperties": false
}
```
System / Prompt starter for the OpenAI node: “Return only JSON that fits the schema provided downstream. No prose, no explanation.”
Because the OpenAI Chat Model node is set to JSON response format, the model will emit valid JSON, which the parser then shapes to your exact fields.
Need even stricter enforcement via OpenAI’s JSON Schema “structured outputs”? There’s a community node that injects response_format with your schema and validates with AJV. Use it if you want server‑side schema checks at the model call itself.
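If you’re curious what that validation amounts to, here is a minimal standalone sketch of the checks implied by the schema above (illustrative only, not the parser’s or the community node’s actual code):

```javascript
// Hand-rolled check mirroring the example schema: four required typed fields,
// no extra keys. Returns a list of error messages; empty means valid.
function validateSummary(obj) {
  if (typeof obj !== "object" || obj === null) return ["payload is not an object"];
  const errors = [];
  if (typeof obj.topic !== "string") errors.push("topic must be a string");
  if (typeof obj.summary !== "string") errors.push("summary must be a string");
  if (!Array.isArray(obj.tags) || !obj.tags.every((t) => typeof t === "string"))
    errors.push("tags must be an array of strings");
  if (typeof obj.confidence !== "number") errors.push("confidence must be a number");
  // additionalProperties: false — reject keys outside the schema
  const allowed = new Set(["topic", "summary", "tags", "confidence"]);
  for (const key of Object.keys(obj))
    if (!allowed.has(key)) errors.push(`unexpected key: ${key}`);
  return errors;
}
```

A real validator like AJV also handles nested schemas, formats, and error paths, but the principle is the same: shape-check before you trust model output.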
5) Test it
Send a POST with any text you want summarized:
```shell
curl -X POST "https://<your-test-webhook-url>" \
  -H "Content-Type: application/json" \
  -d '{
    "text": "n8n connects APIs and AI so you can automate and prototype fast."
  }'
```
Expect a response shaped like:
```json
{
  "topic": "n8n + AI automation",
  "summary": "n8n lets you combine APIs and LLMs to build automated workflows quickly.",
  "tags": ["automation", "ai", "workflows"],
  "confidence": 0.92
}
```
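The same call works from any HTTP client. For example, a small Node.js sketch (the webhook URL is a placeholder you’d replace with your own):

```javascript
// Build the request options for the summarize webhook (a sketch, not an official client).
function buildSummarizeRequest(text) {
  return {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text }),
  };
}

// Usage with Node 18+ built-in fetch (uncomment and set your real webhook URL):
// const res = await fetch("https://<your-test-webhook-url>",
//   buildSummarizeRequest("n8n connects APIs and AI so you can automate fast."));
// console.log(await res.json());
```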
If you’re still on the test URL, remember to switch your Webhook node to the Production URL before going live.
6) Common gotchas (and quick fixes)
- Only the first item returns: The Respond to Webhook node replies once using the first incoming item. Aggregate to one item first, or use the Webhook node’s When Last Node Finishes option.
- Credentials disappear after restart: Persist the /home/node/.n8n volume and set N8N_ENCRYPTION_KEY.
- Model emits prose instead of JSON: Double‑check Response Format = JSON in the OpenAI Chat Model node, and keep your prompt strict.
- Secrets in env files: Use _FILE variants or an external secrets backend rather than hard‑coding values.
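If prose still sneaks in despite a strict prompt, a defensive fallback (which you could run in a Code node before the parser) is to pull the JSON object out of the raw text. A sketch under that assumption, not built-in n8n behavior:

```javascript
// Extract a JSON object from model output that may be wrapped in prose or code fences.
const FENCE = "`".repeat(3); // three backticks, written indirectly for portability
const FENCE_RE = new RegExp(FENCE + "(?:json)?\\s*([\\s\\S]*?)" + FENCE);

function extractJson(text) {
  const fenced = text.match(FENCE_RE);
  const candidate = fenced ? fenced[1] : text;
  // Take the outermost {...} span and parse it; throws if no object is present.
  const start = candidate.indexOf("{");
  const end = candidate.lastIndexOf("}");
  if (start === -1 || end === -1) throw new Error("no JSON object found");
  return JSON.parse(candidate.slice(start, end + 1));
}
```

This is a last resort: fixing the prompt and Response Format is the reliable path, and JSON.parse will still throw if the extracted span isn’t valid JSON.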
7) Minimal Docker Compose (self‑host quick start)
```yaml
services:
  n8n:
    image: docker.n8n.io/n8nio/n8n:latest
    restart: unless-stopped
    environment:
      - N8N_PROTOCOL=https
      - N8N_PORT=5678
      - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
    ports:
      - "5678:5678"
    volumes:
      - n8n_data:/home/node/.n8n

volumes:
  n8n_data:
```
This persists your database and encryption key in the n8n_data volume. Put N8N_ENCRYPTION_KEY in your .env, then start with docker compose up -d. For a hardened, TLS‑terminated setup, use the official Traefik recipe in the docs.
Wrap‑up
That’s a complete, API‑style AI endpoint in n8n: from inbound request to validated JSON out. Start simple with Cloud, harden later with Docker and externalized secrets. The core pattern stays the same: Webhook → LLM (JSON) → Parse → Respond.