[Image: n8n and Claude API content automation pipeline]
12 min read · By Carlos Aragon

n8n + Claude API: Build an AI Content Pipeline in 30 Minutes

One article in. Five platform-native variants out. LinkedIn post, Twitter thread, email snippet, Slack update, and a blog intro — all generated and published automatically for under half a cent per run. Here's the exact workflow we use at VIXI.

The Content Treadmill Problem

Most marketing teams and agencies are stuck in the same loop. A blog post goes live. Then someone spends two hours adapting it for LinkedIn. Another hour for a Twitter thread. A newsletter version. A Slack summary for the internal team. By the time it's done, one piece of content has consumed half a day and four different people.

At VIXI, we ran the numbers. Content repurposing was costing us roughly 10 hours a week across the team — at $75/hr, that's $3,000/month in writer time to say the same thing in different fonts. The output was also inconsistent: LinkedIn posts written in a rush sound nothing like the original voice; Twitter threads lose the data; email snippets miss the hook.

The fix wasn't more writers. It was building a pipeline that does the adaptation automatically — and does it better than a rushed human rewrite.

What we built:

  • One webhook trigger that accepts any article as input
  • Claude API generates 5 platform-native variants in one call
  • n8n routes each variant to its destination channel
  • Total cost: ~$0.004 per article run with Claude Haiku

At scale, 100 articles a month costs $0.40 in API fees. Compare that to $3,000/month in manual labor. This pipeline paid for itself in the first hour.

Architecture Overview — What We're Building

The stack is intentionally simple: n8n handles orchestration and routing, Claude API handles the actual content generation, and platform webhooks handle distribution. No custom code servers. No infrastructure to maintain. The whole thing runs on n8n cloud (or a $6/month VPS if you're self-hosting).

End-to-end flow

[Trigger: Webhook / RSS / Schedule]
  → [n8n: Fetch source content]
  → [n8n: HTTP Request to Claude API]
  → [Claude: Generate 5 platform variants as JSON]
  → [n8n: Parse JSON response]
  → [Branch 1: LinkedIn post → LinkedIn API]
  → [Branch 2: Twitter thread → X API]
  → [Branch 3: Email snippet → Mailchimp/ConvertKit]
  → [Branch 4: Slack update → Slack webhook]
  → [Branch 5: Blog intro → Notion/Airtable draft]

Prerequisites: an n8n account (cloud or self-hosted), an Anthropic API key, and access to whichever channels you want to publish to. You don't need to connect all five on day one — the pipeline works with whatever output branches you configure.

Setting Up the n8n Workflow

Start with a new workflow in n8n. Add a Webhook trigger node — this is your entry point. Set the HTTP method to POST and copy the webhook URL. Every article you want to process gets sent here.

The webhook expects a simple JSON payload:

Webhook payload format

{
  "title": "5 Ways AI Is Changing Marketing Attribution",
  "content": "Full article text here — paste the entire body...",
  "url": "https://vixi.agency/blog/ai-attribution",
  "tone": "professional"
}

The tone field is optional — if omitted, Claude defaults to the system prompt voice. We use it when clients have pieces that should be more casual or more formal than our default.

Next, add a Set node after the webhook. Use it to build your prompt variables — this keeps your prompt logic clean and editable without touching the HTTP Request node configuration. Set two variables: systemPrompt (static) and userMessage (dynamic, includes the article data).
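As a sketch, the same step expressed as a plain function — in n8n this logic lives in the Set node's expressions, the prompt text is abbreviated, and `buildPromptVars` is a hypothetical helper for illustration:

```javascript
// Hypothetical helper mirroring the Set node: derive systemPrompt and
// userMessage from the webhook payload. Prompt text abbreviated.
const SYSTEM_PROMPT = "You are a content strategist for a B2B marketing agency...";

function buildPromptVars(article) {
  const lines = [
    "Repurpose this article into 5 platform variants.",
    "",
    `Title: ${article.title}`,
    `URL: ${article.url}`,
    `Content: ${article.content}`
  ];
  // The optional tone field overrides the default system-prompt voice
  if (article.tone) lines.push(`Tone: ${article.tone}`);
  return { systemPrompt: SYSTEM_PROMPT, userMessage: lines.join("\n") };
}
```

Keeping this mapping in one place means prompt tweaks never require touching the HTTP Request node.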

Calling the Claude API

Add an HTTP Request node — not the community Claude node, which gives you less control over request structure. Configure it as follows:

HTTP Request node configuration

Method: POST
URL: https://api.anthropic.com/v1/messages

Headers:
  x-api-key: {{ $credentials.anthropicApiKey }}
  anthropic-version: 2023-06-01
  content-type: application/json

Body (JSON):
{
  "model": "claude-haiku-4-5-20251001",
  "max_tokens": 2000,
  "system": "{{ $json.systemPrompt }}",
  "messages": [
    {
      "role": "user",
      "content": "{{ $json.userMessage }}"
    }
  ]
}

We use Claude Haiku here, not Sonnet. For content repurposing — which is mostly reformatting, not deep reasoning — Haiku is fast enough and costs $0.80/MTok on input vs $3/MTok for Sonnet. That's roughly a 73% reduction for the same output quality on this specific task. We've run both; clients can't tell the difference on channel-specific copy.

Store your Anthropic API key as a credential in n8n (Settings → Credentials → Header Auth) and reference it with $credentials.anthropicApiKey in the header. Never hardcode it in the node config.

The Prompt That Actually Works

This is where most people get it wrong. Vague prompts produce vague content. Claude needs to know exactly what it's producing, in what format, with what constraints. Here's the system prompt we landed on after three weeks of iteration:

System prompt (production version)

You are a content strategist for a B2B marketing agency. You write 
for practitioners, not beginners. Every output must be:
- Platform-native (LinkedIn ≠ Twitter ≠ email)
- Specific (cite numbers, tools, outcomes — not vague advice)
- Under 300 words per variant unless the format requires more
- First-person plural ("we") for agency content

Return a valid JSON object only. No preamble. No markdown fences. 
No explanation. Just the JSON.

And the user message template, which gets the article data interpolated in by the n8n Set node:

User message template

Repurpose this article into 5 platform variants.

Title: {{ $('Webhook').item.json.title }}
URL: {{ $('Webhook').item.json.url }}
Content: {{ $('Webhook').item.json.content }}

Return JSON with exactly these keys:
{
  "linkedin": "1 post, 150-200 words, 3 line breaks, ends with question",
  "twitter_thread": ["tweet 1", "tweet 2", "tweet 3", "tweet 4", "tweet 5"],
  "email_snippet": "2-3 sentences for newsletter preview, includes URL",
  "slack_update": "1 sentence + link, no hashtags, casual tone",
  "blog_intro": "2 paragraphs for Notion draft, SEO-friendly opening"
}

The explicit character limits and format instructions per key are critical. Without them, Claude writes whatever seems right for each platform, and you get inconsistent length that breaks your downstream formatting.

Cost optimization tip: Prompt caching

If you're running this pipeline on batch content (10+ articles at once), add "cache_control": {"type": "ephemeral"} to your system message block. Anthropic caches the prompt for 5 minutes — on batch runs, the system prompt tokens hit the cache and you save roughly 40% on input costs.
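With caching enabled, the `system` field changes from a string to an array of content blocks. A sketch of the modified request body, reusing the model and token settings from the config above:

```javascript
// Request body with prompt caching on the static system prompt (sketch).
// Note "system" becomes an array of content blocks instead of a string.
function buildCachedRequest(systemPrompt, userMessage) {
  return {
    model: "claude-haiku-4-5-20251001",
    max_tokens: 2000,
    system: [
      {
        type: "text",
        text: systemPrompt,
        cache_control: { type: "ephemeral" } // cached ~5 min; cheap reads on batch runs
      }
    ],
    messages: [{ role: "user", content: userMessage }]
  };
}
```

Only the static system prompt gets the cache marker; the per-article user message stays uncached.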

Routing to 5 Channels

After the HTTP Request node, add a Code node to parse Claude's response. The API returns the generated content inside content[0].text as a JSON string. Parse it out:

Parse response (Code node — JavaScript)

const rawText = $input.first().json.content[0].text;
const variants = JSON.parse(rawText);
return [{ json: variants }];
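Despite the "no markdown fences" instruction, models occasionally wrap the JSON anyway, which makes the bare `JSON.parse` throw. A defensive variant of the parse step (sketch):

```javascript
// Defensive parse: strip optional markdown fences before JSON.parse.
function parseVariants(rawText) {
  const cleaned = rawText
    .trim()
    .replace(/^```(?:json)?\s*/i, "") // leading ``` or ```json
    .replace(/\s*```$/, "");          // trailing ```
  return JSON.parse(cleaned);
}
```

Swapping this into the Code node costs nothing and eliminates the most common parse failure.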

Now add a Switch node or simply branch with multiple output connections into parallel HTTP Request nodes — one per channel. Here's how each branch works:

Branch 1: LinkedIn

HTTP POST to LinkedIn API v2 /ugcPosts endpoint with $json.linkedin as the commentary text. Use OAuth2 credentials stored in n8n.
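For reference, a sketch of the ugcPosts request body. The author URN is a placeholder, and `buildLinkedInPost` is a hypothetical helper; check the LinkedIn docs against your app's permissions:

```javascript
// Sketch of the LinkedIn v2 /ugcPosts body for a text-only share.
function buildLinkedInPost(authorUrn, commentary) {
  return {
    author: authorUrn, // e.g. "urn:li:person:xxxx" — your member URN
    lifecycleState: "PUBLISHED",
    specificContent: {
      "com.linkedin.ugc.ShareContent": {
        shareCommentary: { text: commentary }, // $json.linkedin goes here
        shareMediaCategory: "NONE"
      }
    },
    visibility: { "com.linkedin.ugc.MemberNetworkVisibility": "PUBLIC" }
  };
}
```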

Branch 2: Twitter / X Thread

Loop Over Items using $json.twitter_thread array. Post tweet 1, capture the tweet ID, post tweet 2 as a reply to tweet 1's ID, and so on. This creates a proper reply chain.

Twitter thread loop (Code node)

// This Code node runs once: it splits the thread array into one item
// per tweet, which then feeds a Loop Over Items node downstream.
const tweets = $('Parse Response').item.json.twitter_thread;
const items = tweets.map((text, index) => ({
  json: { text, position: index }
}));
return items;
// Downstream, an HTTP Request node posts each tweet,
// passing reply_to from the previous item's response
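The reply-chain logic itself can be sketched as a plain loop, with `postTweet` as a hypothetical stand-in for the HTTP Request node — it takes the tweet text plus the previous tweet's ID and returns the new tweet's ID:

```javascript
// Post a thread as a reply chain: each tweet replies to the one before it.
// postTweet(text, replyToId) is a hypothetical helper returning the new tweet ID.
function postThread(tweets, postTweet) {
  let previousId = null; // first tweet has nothing to reply to
  return tweets.map((text) => {
    previousId = postTweet(text, previousId);
    return previousId;
  });
}
```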

Branch 3: Email Snippet

HTTP POST to ConvertKit or Mailchimp API to append $json.email_snippet to a draft broadcast. We send to a staging draft, not live — someone reviews before send.

Branch 4: Slack

Slack webhook node → #content-team channel. Use $json.slack_update. Takes 10 seconds to configure if you have the Slack app installed.

Branch 5: Blog Draft

HTTP POST to Notion API → append a new page to your content database with $json.blog_intro as the body. Tag it "needs review" — a human edits before publishing.
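A sketch of the Notion pages request body, assuming a database with a `Name` title property and a `Status` select property — adjust the property names to your own schema:

```javascript
// Sketch of the POST https://api.notion.com/v1/pages body.
// "Name" and "Status" are assumed property names from a hypothetical database.
function buildNotionDraft(databaseId, title, blogIntro) {
  return {
    parent: { database_id: databaseId },
    properties: {
      Name: { title: [{ text: { content: title } }] },
      Status: { select: { name: "needs review" } } // flags the human-review step
    },
    children: [
      {
        object: "block",
        type: "paragraph",
        paragraph: { rich_text: [{ type: "text", text: { content: blogIntro } }] }
      }
    ]
  };
}
```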

Run the branches in parallel using n8n's default behavior — each output connection from the Code node runs simultaneously. Total latency from webhook trigger to all 5 channels published: under 8 seconds in our testing.

Exportable Workflow JSON

Here's the abbreviated n8n workflow JSON you can import as a starting point. Replace the credential IDs and webhook URLs with your own values after importing.

n8n workflow JSON (importable)

{
  "name": "AI Content Pipeline",
  "nodes": [
    {
      "parameters": {
        "httpMethod": "POST",
        "path": "content-pipeline",
        "responseMode": "responseNode"
      },
      "name": "Webhook",
      "type": "n8n-nodes-base.webhook",
      "position": [240, 300]
    },
    {
      "parameters": {
        "method": "POST",
        "url": "https://api.anthropic.com/v1/messages",
        "authentication": "genericCredentialType",
        "genericAuthType": "httpHeaderAuth",
        "sendBody": true,
        "bodyParameters": {
          "parameters": [
            {
              "name": "model",
              "value": "claude-haiku-4-5-20251001"
            },
            {
              "name": "max_tokens",
              "value": 2000
            }
          ]
        }
      },
      "name": "Call Claude API",
      "type": "n8n-nodes-base.httpRequest",
      "position": [480, 300]
    },
    {
      "parameters": {
        "jsCode": "const rawText = $input.first().json.content[0].text;\nconst variants = JSON.parse(rawText);\nreturn [{ json: variants }];"
      },
      "name": "Parse Response",
      "type": "n8n-nodes-base.code",
      "position": [720, 300]
    },
    {
      "parameters": {
        "jsCode": "// Log cost to Supabase\nconst usage = $('Call Claude API').item.json.usage;\nreturn [{\n  json: {\n    input_tokens: usage.input_tokens,\n    output_tokens: usage.output_tokens,\n    model: 'claude-haiku-4-5-20251001',\n    run_at: new Date().toISOString(),\n    cost_usd: (usage.input_tokens * 0.0000008) + (usage.output_tokens * 0.000004)\n  }\n}];"
      },
      "name": "Log Cost",
      "type": "n8n-nodes-base.code",
      "position": [960, 400]
    }
  ],
  "connections": {
    "Webhook": { "main": [[ { "node": "Call Claude API", "type": "main", "index": 0 } ]] },
    "Call Claude API": { "main": [[ { "node": "Parse Response", "type": "main", "index": 0 } ]] },
    "Parse Response": {
      "main": [[
        { "node": "LinkedIn", "type": "main", "index": 0 },
        { "node": "Twitter Thread", "type": "main", "index": 0 },
        { "node": "Email Draft", "type": "main", "index": 0 },
        { "node": "Slack", "type": "main", "index": 0 },
        { "node": "Blog Draft", "type": "main", "index": 0 },
        { "node": "Log Cost", "type": "main", "index": 0 }
      ]]
    }
  }
}

The Log Cost node is optional but we recommend keeping it. It calculates exact spend per run; add an HTTP Request node after it to POST each row to a Supabase table, and after 30 days you have a real cost dashboard instead of guessing.

Error handling — don't skip this

Wire error handling onto every HTTP Request node: enable the node's error output (or set up an error workflow with an Error Trigger node) and route failures to a Slack alert that pings your team with the error message and the article title that failed. Silent failures are the death of automated pipelines: you won't know content wasn't published until a client asks why LinkedIn hasn't been updated in two weeks.

Cost Breakdown & Scaling

Let's put real numbers on this. Using Claude Haiku (current pricing as of March 2026):

Item                                   Tokens            Cost
Input (article + system prompt)        ~2,000 tokens     $0.0016
Output (5 variants)                    ~500 tokens       $0.0020
Total per run                          ~2,500 tokens     $0.0036
100 articles/month                     250,000 tokens    $0.36
n8n self-hosted VPS (DigitalOcean)     n/a               $6.00/mo
Total pipeline cost (100 articles)     n/a               ~$6.36/month

With prompt caching enabled on the system prompt, you'll see roughly 40% reduction on cached token costs during batch runs — bringing the 100-article monthly cost to under $5 total.

Compare that to one freelance content coordinator at $500-2,000/month. The pipeline doesn't sleep, doesn't take weekends, and processes an article in under 10 seconds. We still have humans reviewing the Notion drafts before publishing — that's the right use of their time. The mechanical reformatting is gone.

Scaling considerations

  • Anthropic rate limits: 50 requests/min on most tiers. Batch 10+ articles? Add a 1.5s delay between calls in your loop.
  • LinkedIn API: 500 UGC posts/day limit per app. More than that, use Buffer or a social media API layer.
  • Twitter/X API: Basic tier allows 500 posts/month. Check your tier limits before wiring up automated posting.
  • n8n execution limit: cloud free tier caps at 5,000 executions/month. Self-hosted removes this limit entirely.
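The rate-limit note above can be sketched as a simple pacing loop, with `processOne` as a hypothetical stand-in for one full pipeline run per article:

```javascript
// Pace batch calls to stay under ~50 requests/min (1.5 s gap ≈ 40 calls/min).
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function processBatch(articles, processOne, delayMs = 1500) {
  const results = [];
  for (const article of articles) {
    results.push(await processOne(article)); // e.g. fire the webhook for one article
    await sleep(delayMs); // tune the delay to your Anthropic tier
  }
  return results;
}
```

In n8n itself, the same effect comes from a Wait node inside the loop; the sketch just makes the pacing math explicit.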

30 Minutes to First Run

Here's the honest setup timeline. We've built this with a dozen clients now, and this is what actually takes time:

  • 0-5 min: n8n account + Anthropic API key. You probably already have both.
  • 5-15 min: Build the core 4 nodes: Webhook → Set → HTTP Request (Claude) → Code (parse). Test with a sample article.
  • 15-25 min: Wire the output branches. Slack takes 2 minutes. LinkedIn OAuth takes 10. Twitter API setup takes the longest if you haven't done it before.
  • 25-30 min: Test end-to-end with a real article. Check all 5 outputs. Add error handling.

The part that actually takes time isn't the automation — it's tuning the prompt so the output sounds like you. Expect to iterate the system prompt 3-5 times before it's right. Once it is, you'll stop touching it.

We've run this pipeline for clients across SaaS, e-commerce, and B2B services. The consistent finding: the first week, people review every output carefully. By week three, they're approving LinkedIn posts on their phone in 30 seconds. By month two, they've stopped worrying about content consistency because the pipeline's voice is locked in.

Want us to build this for your agency?

We build custom n8n + Claude API pipelines for marketing agencies and B2B companies. If you'd rather have a working system in a week than spend 30 minutes building it yourself, let's talk. We'll scope it, build it, and hand it off with documentation your team can actually use.

Book a Call →