
Self-Hosting n8n on Railway: The $5/Month Automation Stack I Actually Use

I ditched Zapier and Make after hitting their limits one too many times. Here's exactly how I set up n8n on Railway with persistent storage and working webhooks.


I was paying $49/month for Zapier when I finally did the math. For my usage — somewhere around 8,000 tasks per month spread across client projects — I was either hitting the task limit or running into webhook restrictions every few weeks. Make's pricing model wasn't much better once I started building complex multi-step workflows.

The solution I landed on: self-hosted n8n on Railway. Three months in, I'm paying roughly $5–8/month, have no task limits, and actually understand what's running and why.

This is the exact setup I use.

Why n8n Over the Alternatives

I've used Zapier since 2019, ran Make (formerly Integromat) for about a year, and briefly tested Pipedream. Each has its place, but for developer-heavy workflows that involve custom code, webhooks from multiple sources, or integrations those platforms don't officially support, n8n wins.

You can write real JavaScript in Code nodes. Not some watered-down sandbox — actual Node.js. When I need to normalize a weird API response or do date math, I don't have to shoehorn it into a formula builder.

Webhook handling is first-class. Each workflow gets a unique webhook URL. You can respond synchronously or asynchronously, and every incoming request is inspectable. Debugging a failed webhook on Zapier involves a lot of guessing.

The data model makes sense. Every node receives an array of items and passes an array of items. Once you internalize that, building conditional logic and loops is straightforward.
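To make that concrete, here's a minimal sketch of the items-in, items-out contract a Code node follows (the doubling logic is purely illustrative, not from any real workflow):

```javascript
// Every n8n Code node receives an array of items and must return one.
// Each item wraps its payload in a `json` property.
const items = [{ json: { n: 1 } }, { json: { n: 2 } }];

// A node body maps items to items — here, doubling a numeric field.
function nodeBody(inputItems) {
  return inputItems.map(item => ({ json: { n: item.json.n * 2 } }));
}

console.log(JSON.stringify(nodeBody(items)));
// → [{"json":{"n":2}},{"json":{"n":4}}]
```

Once that shape clicks, IF nodes (filter items), Split In Batches (chunk items), and loops all follow the same pattern.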

The tradeoff: you're responsible for uptime. For personal automation and client projects at normal scale, that's fine. For anything with a hard SLA, use n8n Cloud.

Setting Up n8n on Railway

Railway sits nicely between running Docker yourself on a VPS and old-school Heroku-style pricing. The Hobby plan ($5/month of included usage credit) usually covers a lightly used n8n instance.

Step 1: Create the Project

Sign up at Railway and create a new project. Click "Deploy from Template" and search for n8n — there's an official template that handles the initial config. If it's not available, deploy from the Docker image:

Image: n8nio/n8n:1.45.0

Pin to a specific version. Using latest is convenient until a breaking change hits your instance during business hours.

Step 2: Configure Environment Variables

This is where most tutorials fall short. The minimal set you actually need:

N8N_HOST=your-subdomain.railway.app
N8N_PROTOCOL=https
WEBHOOK_URL=https://your-subdomain.railway.app/
N8N_PORT=5678
N8N_ENCRYPTION_KEY=your-random-32-char-string
DB_TYPE=sqlite

Generate the encryption key with:

openssl rand -hex 16

Don't skip N8N_ENCRYPTION_KEY. Without it, n8n auto-generates one on startup — which means every time the service restarts, your stored credentials become inaccessible. Set it explicitly and treat it like a password.

For N8N_HOST and WEBHOOK_URL, use the Railway-generated domain initially. Update them once you've connected a custom domain.

Step 3: Add a Volume for Persistent Storage

This is the step that catches everyone the first time. Railway containers are ephemeral by default — data written to the filesystem vanishes on restart. For n8n, that means your workflows, credentials, and execution history disappear every redeploy.

In Railway: go to your n8n service → Settings → Volumes. Add a volume and mount it at:

/home/node/.n8n

That's n8n's default data directory. After adding the volume, trigger a redeploy. Now your data survives restarts.

If you're building anything beyond personal use, switch to PostgreSQL instead of SQLite. Add a Postgres service to the same Railway project and wire it up using Railway's reference variables:

DB_TYPE=postgresdb
DB_POSTGRESDB_HOST=${{Postgres.PGHOST}}
DB_POSTGRESDB_PORT=${{Postgres.PGPORT}}
DB_POSTGRESDB_DATABASE=${{Postgres.PGDATABASE}}
DB_POSTGRESDB_USER=${{Postgres.PGUSER}}
DB_POSTGRESDB_PASSWORD=${{Postgres.PGPASSWORD}}

Railway resolves ${{Service.VAR}} references automatically. No hardcoded values, no secrets in plaintext.

Step 4: Authentication

Don't leave n8n exposed without authentication. On n8n 1.x (including the 1.45.0 image pinned above), the old basic-auth environment variables were removed: user management is built in, and the first visit to a fresh instance walks you through creating an owner account. Do that immediately, because until the owner account exists, anyone who finds the URL can claim the instance.

If you're running a legacy 0.x version, the basic-auth variables still apply:

N8N_BASIC_AUTH_ACTIVE=true
N8N_BASIC_AUTH_USER=your-username
N8N_BASIC_AUTH_PASSWORD=a-strong-password

Check which version you're running from the Railway service logs and use the mechanism that matches.

Step 5: Custom Domain

Railway gives you a .railway.app subdomain out of the box. If you want something cleaner, go to Settings → Domains and point a CNAME at the Railway target. After that, update N8N_HOST and WEBHOOK_URL and redeploy.

Three Workflows That Actually Earn Their Keep

GitHub → Slack PR Notifications

Trigger: Webhook (paste the URL into GitHub repo → Settings → Webhooks, select Pull requests events)

The workflow structure:

  1. Webhook Trigger — receives the GitHub payload
  2. IF Node — checks {{$json.action}} equals opened or closed
  3. Code Node — extracts what you actually need
  4. Slack Node — sends the formatted message

The Code node step:

const pr = $input.first().json.pull_request;
return [{
  json: {
    title: pr.title,
    author: pr.user.login,
    url: pr.html_url,
    action: $input.first().json.action,
    repo: $input.first().json.repository.full_name
  }
}];

Keeps the Slack message clean instead of dumping the full 200-field GitHub payload downstream.
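From there, the Slack node only needs a short template. As an illustration (the message wording and layout here are my own, not from the original workflow), a formatter over those extracted fields might look like:

```javascript
// Build a one-line Slack message from the fields extracted in the
// Code node above. Uses Slack's <url|label> link syntax.
function formatPrMessage({ title, author, url, action, repo }) {
  const verb = action === 'opened' ? 'opened' : 'closed';
  return `PR ${verb} in ${repo}: <${url}|${title}> by ${author}`;
}

console.log(formatPrMessage({
  title: 'Fix webhook retries',
  author: 'octocat',
  url: 'https://github.com/acme/api/pull/42',
  action: 'opened',
  repo: 'acme/api'
}));
// → PR opened in acme/api: <https://github.com/acme/api/pull/42|Fix webhook retries> by octocat
```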

Daily Summary via Cron

Cron trigger: 0 8 * * * (8am daily)

Pull the previous day's data from wherever — I use Supabase for most projects. The Supabase node in n8n handles auth with your project URL and service role key, then you query directly:

Table: events
Filter: created_at > yesterday's date

Pass the results to a Code node to format, then send via Gmail. Supabase's free tier (500MB database, unlimited API calls) means this particular workflow adds zero cost.
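The formatting step can be sketched as a Code node along these lines (the event field names and email wording are assumptions, not from the actual workflow):

```javascript
// Compute yesterday's date (UTC) for the Supabase filter, then turn
// the query results into a plain-text email body.
function yesterdayIso(now = new Date()) {
  const d = new Date(now);
  d.setUTCDate(d.getUTCDate() - 1);
  return d.toISOString().slice(0, 10); // YYYY-MM-DD
}

// `events` field names (type, count) are illustrative assumptions.
function formatSummary(events) {
  const lines = events.map(e => `- ${e.type}: ${e.count}`);
  return `Daily summary (${events.length} event types)\n${lines.join('\n')}`;
}

console.log(yesterdayIso(new Date('2024-05-02T10:00:00Z'))); // → 2024-05-01
```

The date helper feeds the Supabase filter; the summary string goes straight into the Gmail node's body field.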

AI Document Processing Pipeline

This one's been the most useful for client work. The flow:

  1. Webhook receives a document URL (from a form submission or another service)
  2. HTTP Request downloads the file and returns it as binary
  3. Code Node converts to base64 if the API requires it
  4. HTTP Request to the AI API with the document in the body
  5. Code Node parses the structured response
  6. Supabase Node stores the result

I always build the AI HTTP Request manually rather than using a provider-specific node. It gives full control over the model selection, system prompt, and response handling. The Code node after it:

const raw = $input.first().json.choices[0].message.content;
// Strip markdown code fences if the model wrapped the JSON
const cleaned = raw.replace(/```json\n?|\n?```/g, '').trim();
let parsed;
try {
  parsed = JSON.parse(cleaned);
} catch (err) {
  // Fail with the offending text instead of a bare SyntaxError
  throw new Error(`Model returned non-JSON content: ${cleaned.slice(0, 200)}`);
}
return [{ json: parsed }];

Parsing AI responses defensively like this saves you from cryptic errors when the model decides to add formatting.
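For reference, here's a sketch of the request body I build in that manual HTTP Request node, assuming an OpenAI-style chat completions endpoint (the model name, prompt text, and extracted field names are placeholders, not from the original pipeline):

```javascript
// Build the HTTP Request node body for an OpenAI-style chat endpoint.
// Model name and prompt are placeholders — swap in your own.
function buildAiRequest(documentText) {
  return {
    model: 'gpt-4o-mini',
    messages: [
      { role: 'system', content: 'Extract invoice fields as JSON: vendor, total, due_date.' },
      { role: 'user', content: documentText } // set from the upstream node
    ],
    // Many chat APIs accept this to force raw JSON output,
    // which makes the fence-stripping step above mostly a safety net.
    response_format: { type: 'json_object' }
  };
}

console.log(buildAiRequest('Invoice #42 ...').model); // → gpt-4o-mini
```

Keeping this in a Code node (rather than hardcoding JSON in the HTTP Request UI) makes the prompt diffable and easy to reuse across workflows.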

Real Cost Breakdown

Running n8n on Railway with SQLite and moderate usage:

| Item | Monthly Cost |
|------|--------------|
| Railway Hobby base | $5.00 |
| n8n service (512MB RAM) | ~$2–4 |
| Volume storage (few hundred MB) | ~$0.25 |
| Total | ~$7–9 |

Compare that to n8n Cloud's Starter plan at $20/month with a 5-workflow cap, or Zapier Professional at $49/month. If you're running more than a handful of automations, self-hosting pays off fast.

The one thing Railway lacks that I miss from other platforms: a truly free persistent tier. The $5 credit requires a credit card and gets consumed by a running service. It's not zero-cost, it's low-cost.

Mistakes I Made That You Don't Have To

SQLite + concurrent workflows = lock errors. If multiple workflows fire at the same time, SQLite throws write-lock exceptions. Either keep concurrency low, throttle triggers, or move to PostgreSQL early. I learned this the hard way at 2am when a webhook flood triggered 40 simultaneous executions.

Not setting WEBHOOK_URL explicitly. When this isn't set, n8n detects the host dynamically on each startup. After a domain change or Railway redeploy, your webhook URLs silently shift. Any external service with the old URL starts failing. Set the env var once, don't touch it.

Migrating from SQLite to PostgreSQL later. The migration works — export workflows and credentials, wipe the volume, switch DB type, reimport — but it's annoying and error-prone. If there's any chance this instance will handle real traffic, start with Postgres.

Skipping queue mode. For time-sensitive workflows, n8n's default execution mode runs everything in-process. Queue mode with Redis gives you proper job queuing, retries, and concurrency control. Add a Redis service to Railway ($1–2/month) and set:

EXECUTIONS_MODE=queue
QUEUE_BULL_REDIS_HOST=${{Redis.REDISHOST}}
QUEUE_BULL_REDIS_PORT=${{Redis.REDISPORT}}
QUEUE_BULL_REDIS_PASSWORD=${{Redis.REDISPASSWORD}}

Note that queue mode also expects at least one separate worker process (started with n8n worker) pointed at the same Redis and database; the main instance only enqueues jobs. Worth it once your automations become load-bearing.

Where This Fits in a Larger Stack

n8n handles the orchestration layer. For storage and querying, Supabase handles the database. For deployment of everything else — APIs, cron workers, background jobs — Railway is where I'm hosting most things right now.

The combination gives me a full automation and backend stack for under $20/month across all projects. No per-task fees, no artificial workflow limits, and I can inspect and debug everything locally before pushing.

The setup investment is maybe two hours the first time. After that, deploying a new automation takes minutes.


Some links in this article are affiliate links. If you sign up for Railway or Supabase through these links, I may earn a small commission at no additional cost to you. I only link to tools I personally use and pay for.