I Automated 80% of My Freelance Busywork with n8n and AI Agents
A practical walkthrough of the AI-powered workflows I actually use to handle invoicing reminders, content pipelines, client onboarding, and monitoring — with real configs you can steal.
Six months ago I sat down and mapped every recurring task in my freelance workflow. Client follow-ups, deployment monitoring, invoice reminders, content scheduling, lead qualification — the list was embarrassing. Not because the tasks were complex, but because I was still doing most of them by hand.
Now roughly 80% of that busywork runs on autopilot. The backbone is n8n — a self-hostable workflow automation platform — combined with AI agent nodes that handle the parts requiring actual judgment. Not the "AI will replace us all" kind of judgment. More like "read this email, figure out if the client is asking for a revision or a new feature, and route it accordingly."
Here's how I set it up, what actually works, and where AI agents fall flat.
Why n8n Over Zapier or Make
I've used all three. Zapier is fine until you hit their pricing wall — at ~$70/month for 2,000 tasks, it gets expensive fast when you're running multiple automations. Make (formerly Integromat) is cheaper but the UI makes me want to close my laptop.
n8n hits the sweet spot for developers:
- Self-hostable — I run mine on a Railway instance for about $5/month. That's unlimited executions.
- Code when you need it — Every node can have custom JavaScript. No "upgrade to premium to use code" nonsense.
- AI Agent node built-in — LangChain integration is native. You connect an LLM, give it tools, and it reasons through multi-step tasks inside your workflow.
- Version control friendly — Workflows export as JSON. I keep mine in a git repo.
The tradeoff is setup time. If you just want "when I get an email, post to Slack," Zapier is faster. But if you want AI-powered decision making inside your automations, n8n is the only platform where I've gotten that working reliably in production.
The Setup: n8n on Railway
My n8n instance runs on Railway with a Postgres database for persistence. Total cost is around $5-7/month depending on execution volume.
Here's the railway.json config I use:
{
"$schema": "https://railway.app/railway.schema.json",
"build": {
"dockerfilePath": "Dockerfile"
},
"deploy": {
"numReplicas": 1,
"restartPolicyType": "ON_FAILURE",
"restartPolicyMaxRetries": 10
}
}
And the Dockerfile:
FROM n8nio/n8n:latest
ENV N8N_PORT=5678
ENV N8N_PROTOCOL=https
ENV GENERIC_TIMEZONE=Europe/Warsaw
ENV N8N_METRICS=true
ENV N8N_DIAGNOSTICS_ENABLED=false
ENV EXECUTIONS_DATA_PRUNE=true
ENV EXECUTIONS_DATA_MAX_AGE=168
EXPOSE 5678
CMD ["n8n", "start"]
Environment variables on Railway:
DB_TYPE=postgresdb
DB_POSTGRESDB_HOST=your-postgres-host.railway.internal
DB_POSTGRESDB_PORT=5432
DB_POSTGRESDB_DATABASE=n8n
DB_POSTGRESDB_USER=postgres
DB_POSTGRESDB_PASSWORD=your-password
N8N_ENCRYPTION_KEY=generate-a-random-string-here
WEBHOOK_URL=https://your-n8n-instance.up.railway.app/
One tip: set EXECUTIONS_DATA_MAX_AGE to 168 — the value is in hours, so that's 7 days. Without it, your Postgres database will bloat with execution logs and you'll burn through Railway's storage allocation in a month.
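One value worth generating properly is N8N_ENCRYPTION_KEY — n8n uses it to encrypt stored credentials, so don't type something memorable. Any long random string works; for example:

```shell
# Generate a random value for N8N_ENCRYPTION_KEY (32 bytes, base64-encoded)
openssl rand -base64 32
```

Save it somewhere safe: if you lose the key, the credentials stored in your database become unrecoverable.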
Workflow 1: Smart Email Triage
This is the one that saves me the most time. Every morning, an AI agent reads my unread client emails and categorizes them.
The workflow structure:
Trigger (Cron: 8AM) → Gmail Node (fetch unread) → Loop → AI Agent → Router → Actions
The AI Agent node uses Claude as the LLM with this system prompt:
You are an email triage assistant for a freelance developer.
Classify each email into exactly ONE category:
- URGENT_BUG: Production issue or critical bug report
- REVISION: Client requesting changes to existing work
- NEW_FEATURE: Client requesting new scope (potential upsell)
- INVOICE: Payment-related communication
- SPAM: Irrelevant or promotional
- INFO: General information, no action needed
Respond with JSON: {"category": "CATEGORY", "summary": "one-line summary", "requires_response": true/false}
Based on the category, the router sends the email to different destinations:
- URGENT_BUG → Slack DM to me + creates a GitHub issue in the client's repo
- REVISION → Adds to my ClickUp task list with the summary
- NEW_FEATURE → Drafts a scoping reply (AI-generated, I review before sending)
- INVOICE → Checks against my invoice tracking sheet, flags overdue ones
The key insight: the AI agent doesn't need to be perfect. It needs to be right ~90% of the time, and the 10% it gets wrong should fail safe. Miscategorizing a REVISION as INFO is fine — I'll see it eventually. Miscategorizing an URGENT_BUG as SPAM would be bad. So I tuned the prompt to be aggressive about the urgent category.
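That fail-safe logic also applies to the model's output format. LLMs occasionally return malformed JSON, and a parse error shouldn't silently drop an email. Here's a sketch of the validation I run in a Code node right after the agent (the `output` field name depends on how your agent node is wired up — treat it as a placeholder):

```javascript
// Validate the AI agent's JSON reply. On any failure, fall back to the
// "loud" category so a broken response gets escalated, never ignored.
const CATEGORIES = ["URGENT_BUG", "REVISION", "NEW_FEATURE", "INVOICE", "SPAM", "INFO"];

const FALLBACK = {
  category: "URGENT_BUG", // fail loud, not silent
  summary: "Triage parse failed - inspect manually",
  requires_response: true
};

function parseTriage(raw) {
  try {
    const parsed = JSON.parse(raw);
    // Reject hallucinated or missing categories
    if (!CATEGORIES.includes(parsed.category)) return FALLBACK;
    return parsed;
  } catch (err) {
    return FALLBACK; // malformed JSON => escalate
  }
}

// In n8n, the Code node would end with something like:
// return $input.all().map(item => ({ json: parseTriage(item.json.output) }));
```

The asymmetry is deliberate: a false URGENT_BUG costs me a glance at Slack; a swallowed one costs a client.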
After three weeks of running this, my morning email routine went from 30 minutes to about 5 minutes of reviewing the AI's work.
Workflow 2: Client Onboarding Pipeline
When a new client signs a contract, I trigger this workflow via webhook. It:
- Creates a project in ClickUp with my standard task template
- Creates a private Slack channel named client-{name}
- Sends a welcome email with next steps (AI-personalized based on project type)
- Sets up a Supabase project if the client needs a backend
- Creates a monitoring dashboard on the deployment platform
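The trigger itself is a plain n8n Webhook node. My contract-signing tool calls it with a small JSON payload — the path segment and field names below are placeholders for whatever you configure on the node:

```shell
# Hypothetical webhook call that kicks off the onboarding workflow
curl -X POST "https://your-n8n-instance.up.railway.app/webhook/client-onboarding" \
  -H "Content-Type: application/json" \
  -d '{"client_name": "Acme Co", "client_slug": "acme-co", "project_type": "landing-page"}'
```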
The AI agent handles step 3. It takes the project brief and generates a welcome email that references specific deliverables, timeline, and communication preferences. Much better than my old template that started with "Hi! Welcome aboard!" regardless of whether it was a 2-week landing page or a 6-month SaaS build.
The Supabase setup (step 4) was the trickiest to automate. n8n doesn't have a native Supabase node, so I use the HTTP Request node with Supabase's Management API:
// In a Function node before the HTTP request.
// crypto is a Node built-in; the Code node can use it when
// NODE_FUNCTION_ALLOW_BUILTIN includes "crypto".
const crypto = require('crypto');
const projectName = $input.first().json.client_slug;
return {
json: {
name: projectName,
organization_id: "your-org-id",
plan: "free",
region: "eu-central-1",
db_pass: generateSecurePassword()
}
};
function generateSecurePassword() {
// Ambiguous characters (I, l, O, 0, 1) are left out to avoid copy/paste pain
const chars = 'ABCDEFGHJKMNPQRSTUVWXYZabcdefghjkmnpqrstuvwxyz23456789!@#$%';
// crypto.randomBytes is cryptographically secure; Math.random() is not,
// which matters for a database password
const bytes = crypto.randomBytes(24);
let result = '';
for (let i = 0; i < 24; i++) {
result += chars.charAt(bytes[i] % chars.length);
}
return result;
}
This alone saves me about 45 minutes per new client.
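For reference, the HTTP Request node that consumes that payload targets Supabase's Management API. It's roughly equivalent to this curl — the endpoint is per Supabase's docs at the time of writing, and the token and field values are placeholders:

```shell
# Create a Supabase project via the Management API (placeholders throughout)
curl -X POST "https://api.supabase.com/v1/projects" \
  -H "Authorization: Bearer $SUPABASE_ACCESS_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "acme-co",
    "organization_id": "your-org-id",
    "plan": "free",
    "region": "eu-central-1",
    "db_pass": "a-strong-generated-password"
  }'
```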
Workflow 3: Deployment Monitoring + Incident Response
I monitor about a dozen client sites. The old approach was checking UptimeRobot and hoping for the best. Now I have a workflow that:
- Pings each site every 5 minutes (n8n Cron + HTTP Request)
- If a site is down, waits 2 minutes and checks again (to avoid false positives)
- If still down, the AI agent checks the Vercel/Railway deployment logs via API
- It generates a preliminary incident report: what likely broke, which deploy might have caused it, and a suggested fix
- Sends me a Slack alert with all that context
The AI analysis isn't always right about the root cause, but having deployment logs pre-fetched and a hypothesis ready means I can start debugging immediately instead of spending 10 minutes just figuring out what happened.
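In the actual workflow, the "wait 2 minutes and recheck" step is an n8n Wait node between two HTTP checks, but the logic reduces to a small debounce. Here's a standalone sketch (plain Node.js, not n8n-specific):

```javascript
// Debounce a "site down" signal: only report an outage if a second check,
// after a delay, also fails. checkFn resolves true when the site is up.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function confirmDown(checkFn, retryDelayMs = 2 * 60 * 1000) {
  if (await checkFn()) return false; // first check passed: not an outage
  await sleep(retryDelayMs);         // give transient blips time to clear
  return !(await checkFn());         // still failing => confirmed down
}
```

The two-minute window is a judgment call: long enough to ride out a redeploy or a DNS hiccup, short enough that a real outage still gets flagged quickly.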
Here's the monitoring check function:
const sites = [
{ name: "Client A", url: "https://client-a.com", expected_status: 200 },
{ name: "Client B", url: "https://client-b.vercel.app", expected_status: 200 },
// ... more sites
];
const results = [];
for (const site of sites) {
const start = Date.now();
try {
const response = await this.helpers.httpRequest({
method: 'GET',
url: site.url,
timeout: 10000,
returnFullResponse: true,    // gives us statusCode, not just the body
ignoreHttpStatusErrors: true // report 4xx/5xx as a status, don't throw
});
results.push({
...site,
status: response.statusCode,
healthy: response.statusCode === site.expected_status,
responseTime: Date.now() - start
});
} catch (error) {
// Network-level failure: DNS, timeout, connection refused
results.push({
...site,
status: 0,
healthy: false,
error: error.message
});
}
}
// Only pass unhealthy sites downstream to the alerting branch
return results.filter(r => !r.healthy).map(r => ({ json: r }));
Workflow 4: Content Pipeline
This one is meta — it's part of how this blog post was created. The workflow:
- Trigger (weekly cron or manual)
- AI agent generates a topic based on trending dev discussions (it has access to a web search tool)
- Another AI agent writes the full article draft
- The draft gets saved to a GitHub repo as an MDX file
- Vercel auto-deploys
I review every article before it goes live. The AI writes a solid first draft, but it always needs editing — removing corporate-speak, adding real numbers from my experience, fixing code examples that are almost right but not quite.
The key to making the content pipeline work is the system prompt. I spent more time tuning that prompt than building the actual workflow. The difference between a generic AI article and one that reads like a real person wrote it comes down to very specific negative instructions: no "dive deep," no "in this article we'll explore," no "game-changer."
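To make that concrete, here's an abbreviated, hypothetical version of the style section — not my exact prompt, but the shape of it:

```
Write in first person, plain language, short paragraphs.
Prefer concrete numbers over adjectives.
Banned phrases: "dive deep", "in this article we'll explore",
"game-changer", "unlock", "elevate", "in today's fast-paced world".
If a sentence could appear in any article on any topic, delete it.
```

Negative instructions like these do more work than positive ones, because the failure mode of LLM writing is generic filler, not wrong facts.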
Where AI Agents Fail (and What I Do Instead)
Not everything should be an AI agent. I tried and failed to automate:
Complex code reviews — The AI catches syntax issues and obvious bugs, but misses architectural problems. I still review PRs manually.
Client negotiation — I experimented with AI-drafted pricing responses. They were either too aggressive or too passive. Human judgment matters here.
Creative decisions — "Should this button be blue or green?" is not a question for an AI agent in a workflow. Those still need a human with context about the brand and user research.
Multi-step API workflows with poor error handling — If step 3 of 7 fails, the AI agent doesn't know how to gracefully recover. I use traditional error handling (try/catch in Function nodes) for critical paths and only add AI for the decision-making steps.
The rule of thumb I follow: if the task has a clear right/wrong answer, automate it traditionally. If it requires judgment but not creativity, use an AI agent. If it requires creativity or nuance, do it yourself.
Cost Breakdown
Here's what my automation stack actually costs per month:
| Service | Cost | What it does |
|---------|------|--------------|
| Railway (n8n + Postgres) | ~$7 | Hosts n8n and its database |
| Claude API | ~$15 | Powers all AI agent nodes |
| Supabase (free tier) | $0 | Client project backends |
| GitHub | $0 | Workflow version control + content deploy |
| Vercel (free tier) | $0 | Blog hosting |
| Total | ~$22/month | |
Compare that to doing everything manually at my hourly rate, and the ROI is absurd. The email triage alone saves me ~12 hours per month.
Getting Started
If you want to build something similar, here's my recommended order:
- Start with one workflow. The email triage is the highest impact-to-effort ratio.
- Self-host n8n on Railway. It takes about 15 minutes and costs less than a coffee.
- Use Claude for AI nodes. In my testing, it handles structured output (JSON classification) more reliably than other models.
- Build error handling first, AI second. Make sure your workflow works with dummy data before adding AI decision-making.
- Keep AI prompts short and specific. Long, complex prompts produce worse results than tight, focused ones.
The full workflow JSON files are in my GitHub repo if you want to import them directly into your n8n instance.
Final Thoughts
Automation isn't about replacing yourself. It's about buying back time for the work that actually requires your brain — architecture decisions, client relationships, creative problem-solving. The busywork was never the valuable part.
Six months in, I'm spending about 15 fewer hours per month on administrative tasks. That's almost two full workdays I've reclaimed for building things, learning, or just not working. Which, as a freelancer, might be the most valuable automation of all.
Some links in this article are affiliate links. If you sign up through them, I may receive a small commission at no extra cost to you. I only recommend tools I actually use and pay for myself.