
You don’t need a thousand microservices to deliver AI value. Use n8n to orchestrate, add small bits of code where it matters, and wrap the LLM in guardrails. This opener gives you a decision rubric, a simple mental model, and the full series map so you can move fast without breaking your ops.
AI can draft, summarize, and classify. n8n can trigger, schedule, retry, log, and connect to everything. Blend them well and you get output that’s useful on day one and maintainable on day ninety. Blend them poorly and you end up babysitting flows at 2 a.m.
This series is a field guide to shipping AI workflows that are boring in the best way: predictable, observable, and easy to hand off.
Automation vs. custom code vs. both
Use this quick rule of thumb to decide where work should live.
Choose n8n nodes when:
- The step is orchestration: triggers, schedules, webhooks, queues, retries, alerts.
- You’re moving data between APIs (GA4, HubSpot, Salesforce, Notion, Sheets).
- The logic is declarative or transform‑heavy (mapping, filtering, formatting).
Reach for small custom code when:
- You need nontrivial algorithms, fast loops, or heavy data shaping.
- You must validate complex schemas or enforce strict business rules.
- You want a reusable service across many workflows.
Mix them when:
- The workflow is 80% glue, 20% logic. Put the glue in n8n; package the logic in a function, microservice, or n8n Function node.
- You need LLMs for reasoning but also deterministic gates, audits, and rollbacks.
If a step must be testable, idempotent, and fast under load, lean code‑heavy. If a step is about connecting systems, lean n8n. Most production flows are hybrids.
A simple mental model: The LLM Sandwich
Wrap every AI call between deterministic layers so outputs stay useful and safe.
Top slice: Pre‑LLM (deterministic):
- Normalize inputs and enforce required fields.
- Add context: fetch records, prior messages, brand rules, product data.
- Set the contract: define a JSON schema or function signature.
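The top slice can be sketched as a small deterministic gate. This is a minimal example, not an n8n-specific API: the field names (`email`, `message`) and the normalization rules are placeholder assumptions for illustration.

```javascript
// Pre-LLM gate sketch: normalize inputs and enforce required fields
// before anything reaches the model. Field names are hypothetical.
const REQUIRED_FIELDS = ["email", "message"];

function preLlmGate(item) {
  // Normalize: coerce to string, trim whitespace, lowercase the email.
  const normalized = {
    email: String(item.email || "").trim().toLowerCase(),
    message: String(item.message || "").trim(),
  };
  // Enforce: every required field must be non-empty after normalization.
  const missing = REQUIRED_FIELDS.filter((f) => !normalized[f]);
  if (missing.length > 0) {
    return { ok: false, error: `Missing required fields: ${missing.join(", ")}` };
  }
  return { ok: true, data: normalized };
}
```

In n8n this kind of logic fits naturally in a Function node placed directly before the LLM call, with the `ok: false` branch routed to an error path.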
Filling: LLM step (constrained):
- Use function calling or JSON mode with a strict schema.
- Keep prompts short, data fresh, and instructions explicit about source use.
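As a sketch of what a constrained request looks like, here is an OpenAI-style payload with a strict JSON schema. The model name, schema fields, and enum values are placeholder assumptions; adapt the shape to whatever provider your LLM node targets.

```javascript
// Sketch: a request payload with JSON-schema-constrained output.
// Schema fields, enum values, and the model name are illustrative.
const leadSchema = {
  type: "object",
  properties: {
    category: { type: "string", enum: ["sales", "support", "spam"] },
    confidence: { type: "number" },
  },
  required: ["category", "confidence"],
  additionalProperties: false,
};

function buildLlmRequest(userText) {
  return {
    model: "gpt-4o-mini", // placeholder model
    messages: [
      { role: "system", content: "Classify the message. Use only the provided text." },
      { role: "user", content: userText },
    ],
    // Strict mode tells the provider to reject outputs that drift from the schema.
    response_format: {
      type: "json_schema",
      json_schema: { name: "lead_classification", strict: true, schema: leadSchema },
    },
  };
}
```

The point is that the contract lives in the request, not in prose instructions the model can ignore.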
Bottom slice: Post‑LLM (deterministic):
- Validate JSON against the schema; reject or auto‑fix minor deviations.
- Run assertions: required keys present, score thresholds met, tokens within budget.
- Commit side effects: write to DBs, push to Slack/CRM, log decisions and costs.
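The bottom slice can be sketched as a validator that routes each result to commit, review, or reject. The key names and the 0.7 confidence threshold are assumptions for illustration.

```javascript
// Post-LLM validator sketch: parse, assert required keys, gate on confidence.
// Key names and the default threshold are illustrative.
function validateLlmOutput(rawText, { minConfidence = 0.7 } = {}) {
  let parsed;
  try {
    parsed = JSON.parse(rawText);
  } catch {
    return { status: "reject", reason: "invalid JSON" };
  }
  // Assertion: required keys must be present.
  for (const key of ["category", "confidence"]) {
    if (!(key in parsed)) return { status: "reject", reason: `missing key: ${key}` };
  }
  // Gate: low-confidence results go to a human, not straight to side effects.
  if (typeof parsed.confidence !== "number" || parsed.confidence < minConfidence) {
    return { status: "review", reason: "low confidence", data: parsed };
  }
  return { status: "commit", data: parsed };
}
```

Only the `commit` branch should be allowed to write to your CRM or warehouse; `review` feeds the human-in-the-loop path.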
This sandwich gives you creativity in the middle and control at the edges.
Guardrails you want from day one
- Structured outputs only. No free‑text if the next step expects fields.
- Idempotency keys on writes to CRMs, Sheets, and warehouses.
- Timeouts and retries with jitter so you don’t DDoS yourself.
- Cost and token logs by workflow, by user, by run.
- Human‑in‑the‑loop escapes for low‑confidence results.
- Caching for repeatable lookups and prompts.
- Versioned prompts and fixtures to catch regressions.
What this series covers
I'll be releasing these articles over the next couple of weeks.
- Ship Your First AI Flow in n8n
- Expressions 101: Turning Prompts into Payloads
- LLM Nodes & Durable Patterns
- Data In, Data Out: Sheets, Notion, Airtable
- Content Ops: From Brief to LinkedIn Post
- Lead Capture to CRM with AI Scoring
- Support Triage: Inbox to Slack, Sorted by AI
- RAG the Easy Way in n8n
- AI Data Mapping: PIM → Salesforce Sync
- Analytics Copilot: Weekly Marketing Recap
- Human‑in‑the‑Loop Without the Wait
- Observability & Cost Control for AI Flows
- Testing AI Workflows: Stop Guessing
- Scaling n8n at Work
- Governance, Privacy & Rollout Plan
A few real‑world patterns you can ship quickly
- Content pipeline: Research → outline → draft → review → schedule. n8n handles sources, deadlines, and publishing. The LLM handles outline and draft with a schema for title, slug, and body.
- Lead scoring to CRM: Enrich with People Data Labs, classify by ICP, compute a score, attach reasoning notes, and upsert to Salesforce or HubSpot with dedupe keys.
- PIM → Salesforce sync: Normalize brand/model/finish with scoped mappings, use the LLM to propose mappings with confidence, auto‑commit high confidence, queue low confidence for review.
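The confidence gate in the PIM sync pattern can be sketched as a simple router: auto-commit what the model is sure about, queue the rest for review. The 0.9 threshold and the `confidence` field name are assumptions.

```javascript
// Confidence-gate sketch for LLM-proposed mappings. Threshold and
// field names are illustrative; tune against your own error tolerance.
function routeMappings(proposals, threshold = 0.9) {
  const autoCommit = [];
  const reviewQueue = [];
  for (const p of proposals) {
    (p.confidence >= threshold ? autoCommit : reviewQueue).push(p);
  }
  return { autoCommit, reviewQueue };
}
```

In n8n this maps cleanly to an IF or Switch node: one branch writes to Salesforce, the other posts to a review channel or table.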
Anti‑patterns to avoid
- Raw prompts buried in nodes with no versioning.
- Letting the model invent columns, IDs, or prices.
- Overusing AI for simple if/else logic that belongs in nodes.
- Writing to systems without dedupe or idempotency.
- Long context windows where 90% of tokens are wasted.
How to use this series
Pick one workflow your team runs weekly. Apply the LLM Sandwich. Add logs and a reviewer step. When it’s stable for two weeks, scale it up and move to the next workflow.
Next up: Ship Your First AI Flow in n8n, covering Cloud vs. Docker, managing secrets, and getting a clean JSON response end to end.