Most blogs have a bottleneck: the human.
You have ideas, but writing takes hours. Formatting demands attention. Finding images burns time. Deploying requires steps. Each layer adds friction between “I should write about this” and “it’s published.”
The typical solution? “Write better prompts” or “hire a VA.” But neither solves the real problem.
The real challenge isn’t execution. It’s context transfer.
This is the story of building an automated blog system where you say “write about X” and get publication-ready content in 15 minutes, with zero manual steps. Not because the AI is smarter, but because we solved the context problem first.
The Real Challenge Wasn’t Technical
When we started, the obvious approach was: “Use Hugo for static generation, Cloudflare for hosting, write some scripts.” The technical stack was straightforward. Any developer could assemble it in an afternoon.
The hard part was this question: How do you get AI to write in your voice, meet your standards, and make decisions like you would—without being there?
Every blog is a specific instance, with context of its own:
- Your audience expects a certain depth and tone
- You use preferred structures and categories
- You know what “good enough to publish” means in your context
- You’ve internalized quality standards that resist articulation
Standard approach: Write detailed prompts for each post, review the output, give revision feedback, format, deploy manually.
Our approach: Encode the instance knowledge once, then automate everything else.
The difference: One-time investment in articulation vs. ongoing investment in supervision.
The Architecture: Context File + Multi-Pass Process
The Context File (AI_CONTEXT.md)
We created a single document that captures everything an experienced writer for this blog would know—not just what to write, but how to decide, what standards to apply, and what “good” looks like here.
Workflow Documentation
- Exact steps from request to deployment
- Technical commands with full paths
- Common issues and their fixes
- Session checklist (what to do first)
Quality Standards
- 34 specific criteria organized into 4 categories
- Minimum pass threshold: 85%
- Examples of what passes and what fails
- Revision priorities when standards aren’t met
Voice Profile
- Practitioner perspective, not academic
- Balance of theory and ground reality
- Sentence patterns and tone markers
- What to avoid, what to emphasize
- Typical structures for openings and closings
Site Conventions
- Categories: insights, frameworks, case-library, on-education
- Tagging guidelines (mix broad + specific)
- Frontmatter template with required fields
- File naming: YYYY-MM-DD-slug.md
- Image location and naming
Technical Specifications
- Complete directory structure
- Deployment commands
- Hugo configuration details
- URL patterns and routing
- Troubleshooting guide
The insight: If a human joining your publication would need onboarding documentation, so does AI. This file is that onboarding—systematically encoded.
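To make the onboarding concrete, here is a minimal sketch in Python of what “reading the onboarding” can look like at the start of a session. The file path and the required section names are assumptions that mirror the headings above, not the actual file layout.

```python
from pathlib import Path

# Hypothetical location; the real file lives in the blog's repository.
CONTEXT_FILE = Path("AI_CONTEXT.md")

# Sections the onboarding document is expected to contain.
REQUIRED_SECTIONS = [
    "Workflow Documentation",
    "Quality Standards",
    "Voice Profile",
    "Site Conventions",
    "Technical Specifications",
]

def load_context(path: Path = CONTEXT_FILE) -> str:
    """Read the context file and fail loudly if an expected section is missing."""
    text = path.read_text(encoding="utf-8")
    missing = [s for s in REQUIRED_SECTIONS if s not in text]
    if missing:
        raise ValueError(f"Context file is missing sections: {missing}")
    return text
```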
The Multi-Pass Process
Instead of “write once and hope,” we built quality assurance into the workflow itself:
Pass 1: Draft (5 minutes)
- Write freely, prioritize ideas over perfection
- Target: 1500-2500 words
- Focus: Get the thinking down, include examples
- No self-editing yet
Pass 2: Self-Critique (2 minutes)
- Review against quality checklist (34 criteria)
- Score: % of criteria met
- Identify: Weak sections, missing examples, unclear arguments
- Decision: Pass (85%+) → proceed to Pass 3 | Fail (<85%) → rewrite
Pass 3: Revision (3 minutes)
- Fix all issues identified in Pass 2
- Strengthen weak paragraphs
- Add concrete examples where missing
- Improve transitions between sections
- Convert passive voice to active
Pass 4: Polish (2 minutes)
- Optimize opening hook
- Perfect subheadings for scannability
- Tighten language (remove filler words)
- Generate SEO meta description
- Final formatting check
The key difference: Quality isn’t checked at the end—it’s built into each pass. By the time we reach deployment, we’ve already caught and fixed issues.
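As a rough sketch of how the passes chain together: each pass is a model call guided by the context file, but the helper functions below are trivial placeholders so the control flow runs end to end—they are not the real implementation.

```python
PASS_THRESHOLD = 0.85  # minimum share of the 34 criteria that must be met

# Placeholder implementations standing in for model calls.
def write_draft(topic: str, context: str) -> str:
    return f"First draft about {topic}"

def critique(draft: str, context: str) -> tuple[float, list[str]]:
    return 0.9, []  # (fraction of criteria met, issues flagged)

def revise(draft: str, issues: list[str], context: str) -> str:
    return draft

def polish(draft: str, context: str) -> str:
    return draft

def run_multi_pass(topic: str, context: str, max_rounds: int = 3) -> str:
    """Draft, self-critique, revise until the checklist passes, then polish."""
    draft = write_draft(topic, context)            # Pass 1: ideas over perfection
    for _ in range(max_rounds):
        score, issues = critique(draft, context)   # Pass 2: score against the checklist
        if score >= PASS_THRESHOLD:
            break
        draft = revise(draft, issues, context)     # Pass 3: fix what the critique flagged
    return polish(draft, context)                  # Pass 4: hook, subheadings, meta description
```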
The Technical Implementation
Stack Choices and Why
Hugo (Static Site Generator)
- Builds in ~15ms (fast iteration)
- Built-in taxonomies (categories, tags)
- Simple Go templating (no complex framework)
- No database to maintain or secure
- Markdown as source (portable, version-controllable)
Cloudflare Pages (Hosting)
- Free tier with unlimited bandwidth
- Global CDN (fast everywhere)
- Simple deployment via Wrangler CLI
- No Git required (deploy from local files)
- Automatic SSL, instant rollbacks
Markdown + Frontmatter (Content Format)
```yaml
---
title: "Your Title"
date: 2025-10-12T14:30:00+05:30
categories: ["insights", "frameworks"]
tags: ["automation", "content", "systems"]
featured_image: "/images/blog/slug.jpg"
description: "SEO-optimized summary (150-160 chars)"
draft: false
---
```
Clean, readable, portable. No proprietary formats.
The Automation Pipeline
Input: “Write a blog about X”
Process:
- Context Load: Read AI_CONTEXT.md (all standards loaded into working memory)
- Pass 1 - Draft: Generate outline, write freely, include examples
- Pass 2 - Critique: Score against 34 criteria, identify issues
- Pass 3 - Revise: Fix all identified issues
- Pass 4 - Polish: Final touches, generate meta
- Image Selection: Search stock photos, evaluate top 5, auto-select best fit
- File Creation: Generate filename (YYYY-MM-DD-slug.md), assemble frontmatter, write content
- Build: hugo (fails fast if errors exist)
- Deploy: wrangler pages deploy public (only runs if build succeeds)
- Confirm: Return URL + metadata (title, word count, categories)
Output: Live URL in 10-15 minutes
Manual steps: Zero
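To illustrate the file-creation step, here is a sketch of how the filename and frontmatter might be assembled. The content directory and helper names are assumptions; the field layout follows the template shown earlier.

```python
import re
from datetime import datetime
from pathlib import Path

CONTENT_DIR = Path("content/posts")  # assumed Hugo content directory

def slugify(title: str) -> str:
    """Lowercase the title and replace non-alphanumerics with hyphens."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

def create_post_file(title: str, body: str, categories: list[str],
                     tags: list[str], description: str) -> Path:
    """Write content/posts/YYYY-MM-DD-slug.md with assembled frontmatter."""
    now = datetime.now().astimezone()
    slug = slugify(title)
    frontmatter = "\n".join([
        "---",
        f'title: "{title}"',
        f"date: {now.isoformat(timespec='seconds')}",
        f"categories: {categories}",
        f"tags: {tags}",
        f'featured_image: "/images/blog/{slug}.jpg"',
        f'description: "{description}"',
        "draft: false",
        "---",
        "",
    ])
    CONTENT_DIR.mkdir(parents=True, exist_ok=True)
    path = CONTENT_DIR / f"{now:%Y-%m-%d}-{slug}.md"
    path.write_text(frontmatter + body, encoding="utf-8")
    return path
```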
What Makes It Work
1. Specificity Over Generality
Bad approach: “Write a good blog post.”
Our approach: “Meet these 34 specific, measurable criteria.”
Examples from our checklist:
- ✅ Clear thesis stated within first 2-3 paragraphs
- ✅ Include 2-3 concrete examples (not abstract theory only)
- ✅ Use active voice in >80% of sentences
- ✅ Place subheadings every 3-5 paragraphs for scannability
- ✅ Write 1500-2500 words (6-10 min read)
- ✅ Use practitioner perspective (experience-driven, not academic)
No ambiguity. Pass or fail is clear for each criterion.
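To show what “no ambiguity” means in practice, a couple of the criteria can be expressed directly as checks. This is a sketch; the thresholds are the ones listed above.

```python
def word_count_ok(post: str) -> bool:
    """1500-2500 words (roughly a 6-10 minute read)."""
    n = len(post.split())
    return 1500 <= n <= 2500

def subheading_density_ok(post: str) -> bool:
    """A subheading at least every five paragraphs, for scannability."""
    paragraphs = [p for p in post.split("\n\n") if p.strip()]
    subheadings = [p for p in paragraphs if p.lstrip().startswith("#")]
    return len(subheadings) * 5 >= len(paragraphs)
```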
2. Self-Correction Built Into the Process
Traditional workflow:
- AI writes
- You review
- You give feedback
- AI revises
- You review again
- Repeat 3-5 times
Automated workflow:
- AI writes (Pass 1)
- AI critiques itself against standards (Pass 2)
- AI revises based on its own critique (Pass 3)
- AI polishes for publication (Pass 4)
- Done
The critique step uses the same checklist a human editor would use. But it happens automatically, consistently, every single time.
3. Instance Knowledge Explicitly Encoded
When the context file says:
“Practitioner perspective - Experience-driven, not academic. Avoid purely theoretical framing. Include ‘what this means in practice.’ Acknowledge complexity without being defeatist. Professional but conversational.”
A fresh AI instance knows exactly how to write for this blog. Not by guessing. Not by analyzing past posts. By following explicitly encoded standards.
4. Deployment Determinism
No “hopefully it works” moments. The workflow is deterministic:
```bash
cd /Users/harishadithya/website
hugo                           # Build (fails immediately if errors)
wrangler pages deploy public   # Deploy (only runs if build succeeds)
```
If something’s wrong, it fails before deployment. No broken publishes.
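If you wrap those commands in a script instead of running them by hand, the same fail-fast behavior comes from checking each exit code. A sketch, using the directory shown above:

```python
import subprocess

SITE_DIR = "/Users/harishadithya/website"

def build_and_deploy() -> None:
    """Build the site, then deploy only if the build exits cleanly."""
    # check=True raises CalledProcessError on a non-zero exit,
    # so a failed Hugo build stops the script before deployment.
    subprocess.run(["hugo"], cwd=SITE_DIR, check=True)
    subprocess.run(["wrangler", "pages", "deploy", "public"], cwd=SITE_DIR, check=True)
```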
Lessons from the Build
What Worked
Context files beat better prompts.
Instead of crafting the perfect prompt for each post, we invested time encoding standards once. Every subsequent post benefits. The effort compounds.
This matches how organizations work: You don’t re-explain standards every time someone writes a report. You have a style guide. AI needs the same.
Multi-pass beats multi-agent.
We considered building separate writer, critic, and editor agents that would communicate. Simpler solution: One AI instance takes different perspectives in sequence.
Same quality. Less complexity. No coordination overhead.
Specificity enables automation.
“Write well” is vague and subjective. “Pass 85% of these 34 specific criteria” is executable. The more specific your standards, the more you can automate confidently.
Vagueness creates variation. Specificity creates consistency.
Automation reveals gaps in your standards.
When you manually review, you apply implicit judgments (“this feels off”). When AI applies your explicit standards, gaps become obvious.
We added 10+ criteria after the first few posts revealed edge cases we hadn’t articulated. The system forced clarity.
What We’d Change
Image selection could be more sophisticated.
Current approach: Auto-select from top 5 stock photo search results based on relevance.
Better approach: Build a visual style profile (preferred colors, composition types, aesthetic) and score images against it.
Category guidance could be clearer.
We defined four categories, but the boundaries between “insights” and “frameworks” can blur. A decision tree with examples would eliminate ambiguity.
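One way such a decision tree might look—the questions are invented for illustration, but the category names are the site’s own:

```python
def choose_category(is_case_study: bool, about_education: bool,
                    offers_reusable_method: bool) -> str:
    """Toy decision tree for picking a primary category."""
    if about_education:
        return "on-education"
    if is_case_study:
        return "case-library"   # a specific situation, analyzed
    if offers_reusable_method:
        return "frameworks"     # gives the reader a structure to apply
    return "insights"           # an observation or argument, no method attached
```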
Quality scoring could be weighted.
Not all 34 criteria are equally important. “Clear thesis” matters more than “uses lists for scannability.” A weighted system would better reflect priorities.
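A weighted score could be as simple as the sketch below; the weights are invented for illustration.

```python
def weighted_score(results: dict[str, tuple[bool, float]]) -> float:
    """Share of total weight earned by the criteria that passed."""
    total = sum(weight for _, weight in results.values())
    earned = sum(weight for passed, weight in results.values() if passed)
    return earned / total if total else 0.0

example = {
    "clear thesis in first 2-3 paragraphs": (True, 3.0),
    "2-3 concrete examples": (True, 2.0),
    "uses lists for scannability": (False, 0.5),
}
print(round(weighted_score(example), 2))  # 0.91
```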
Feedback loop could be tighter.
Right now, we review patterns every 5 posts. Real-time quality tracking would reveal issues faster and enable quicker corrections.
What This Means for You
The Pattern Generalizes
This isn’t just about blogging. The pattern applies anywhere you have:
- Recurring tasks with quality standards
- Accumulated expertise that’s hard to articulate
- A need for consistency across instances
- Bottlenecks in delegation or automation
Code review:
- Encode your style guide and architectural principles
- Automated PR feedback against standards
- Human review only for complex judgment calls
Report writing:
- Encode organizational templates and tone
- Automated first drafts for routine reports
- Faster turnaround, consistent format
Decision-making:
- Encode your decision criteria
- AI applies framework, surfaces key factors
- You make the call with better prep
The core insight: Automation requires articulation. You can’t automate what you can’t specify.
How to Build Your Own
Step 1: Identify the recurring task. What do you do repeatedly that follows patterns you’ve internalized?
Step 2: Externalize your heuristics. What makes the difference between “good” and “not good enough”? Write it down. Be specific. Use examples.
Step 3: Create a context file. Document:
- The workflow (step-by-step)
- Quality standards (measurable criteria)
- Voice/style profile (with examples)
- Technical specifications (paths, commands)
- Common issues and fixes
Step 4: Build the multi-pass process
- Pass 1: Generate
- Pass 2: Critique against standards
- Pass 3: Revise based on critique
- Pass 4: Polish for delivery
Step 5: Test with a fresh perspective. Can someone (or a fresh AI instance) follow your context file and produce acceptable results? If not, what’s missing?
Step 6: Iterate based on failures. Every time the output doesn’t meet standards, ask: Was the standard unclear? Missing? Add it to the context file.
The Real Test
A truly successful system doesn’t need its creator in order to operate.
The test: Can a fresh AI instance, with no memory of the build process, read the context file and produce publication-quality work?
That’s not a thought experiment. That’s how this post was written.
I (the AI writing this) started this session by reading AI_CONTEXT.md. I followed the 4-pass workflow. I applied the quality checklist. I selected the image. I created this file. I’m about to deploy it.
The person who built the system isn’t here right now. But the system works anyway.
That’s context transfer.
If you’re curious about the technical details, the entire setup—context file, quality checklist, deployment scripts—lives in this blog’s repository. The system that created this post is the system this post describes.