This morning I had 47 companies sitting in a folder called "ready." Companies I'd researched, scored by fit (S, A, or B tier), and confirmed were actively hiring. The average job search sends maybe 100 applications over several months. I had 47 queued.
The problem was I couldn't get myself to apply.
Three months in, I'd sent 8 applications total. I know the math. I know the pattern. I just couldn't sit down and fill out the forms. Every application required re-activating a particular mental mode I was actively avoiding: re-reading my own resume, re-drafting cover letter answers, and inevitably running out of momentum halfway through form 3. Eight applications in three months, then nothing.
So I built a system to do it for me.
By 4 PM today I'd applied to 25 companies.
Here's what that actually looked like.
I use something I've been calling Claude OS (a setup where Claude runs most of my day). There's a specialist role called Job-Search Claude. At the start of each batch, I give it a list of companies and it opens each one in Chrome, navigates to the application form, and fills in every text field: name, email, phone, LinkedIn, GitHub, portfolio URL, the behavioral questions, the "why this company" box, the work authorization dropdowns, the EEO fields.
By the time I touch the form, the thinking is done. My job is to upload the resume file (Chrome extensions can't access local filesystems) and click submit. About 90 seconds per company.
We ran 7 batches across the day. Between batches, Job-Search Claude updated the pipeline database, closed dead postings where the Ashby links had 404'd, and queued the next group. I approved each batch. I clicked submit on each form. That's the whole thing.
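The dead-posting sweep is the kind of thing that's simple enough to sketch. This isn't my actual pipeline code (the names and record shapes are hypothetical), and the status check is injected as a function so the sketch doesn't depend on hitting Ashby or Greenhouse directly:

```python
def sweep_dead_postings(pipeline, fetch_status):
    """Mark 'ready' postings whose application URL now 404s as 'closed'.

    pipeline: list of dicts with 'company', 'url', 'status' keys
    fetch_status: callable url -> HTTP status code (injected so the
    sweep can be tested, or pointed at a real HTTP client later)
    """
    closed = []
    for posting in pipeline:
        if posting["status"] != "ready":
            continue  # only sweep companies still in the queue
        if fetch_status(posting["url"]) == 404:
            posting["status"] = "closed"
            closed.append(posting["company"])
    return closed
```

In practice you'd pass something like `lambda url: requests.head(url).status_code` as `fetch_status`; the point is that the cleanup between batches is a mechanical filter, not a judgment call.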
One of today's applications was to a company called Writer. I used Claude to write it. I've decided this is fine.
This isn't "AI generates slop applications," though. The automation is the easy part. The part that took months to build is the raw material it draws from.
In a file called story-bank.md, I have three fully-written behavioral stories: the Contoural citation pipeline (40 hours per client engagement down to 10, 75% reduction, measured against human expert baselines), the Claude OS architecture, the Roman AI embedding infrastructure. Each one with hard numbers. Each one written in my actual voice after several rounds of feedback on what reads as human and what reads as ChatGPT.
The "why this company" answers follow a voice guide: parentheses for asides (never em dashes), contractions, one specific technical observation about what they're actually building (something that only makes sense if you read their docs). Generic mission praise is filtered out before it ever makes it into the file. "I'm passionate about AI's potential to transform X" is the sign that I did zero research on the company, and the answer is useless.
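That filter can be as dumb as a phrase list. A toy version of the gate (the marker phrases here are illustrative, not copied from my voice guide):

```python
# Phrases that signal zero-research mission praise. Illustrative list only.
GENERIC_MARKERS = (
    "passionate about",
    "transform the industry",
    "your mission resonates",
    "excited about the potential",
)

def is_generic(answer: str) -> bool:
    """True if a draft 'why this company' answer leans on mission praise
    instead of a specific observation about what they're building."""
    lowered = answer.lower()
    return any(marker in lowered for marker in GENERIC_MARKERS)
```

A draft that trips the check gets rewritten or thrown out before it reaches the file; a specific technical observation sails through.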
The pattern is 70/30. The 70% constant is my proof block: the stories I actually lived, the numbers I actually measured. The 30% variable is company-specific: the problem sentence, the forward bridge. Job-Search Claude writes the 30% from real research. It pulls from the 70% as-is.
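The split is concrete enough to sketch. All names here are hypothetical and the story text is a placeholder, but the shape is the point: the 70% is pasted verbatim and never rewritten, and only the wrapper is generated per company:

```python
# Placeholder story bank. The real one is hand-written prose with hard numbers.
STORY_BANK = {
    "contoural": "At Contoural I cut citation review from 40 hours per "
                 "engagement to 10, measured against expert baselines.",
}

def assemble_answer(problem_sentence: str, story_key: str, bridge: str) -> str:
    """The company-specific 30% (problem sentence + forward bridge) wraps
    the constant 70% (a proof block pulled from the bank as-is)."""
    proof = STORY_BANK[story_key]
    return " ".join([problem_sentence, proof, bridge])
```

The generated parts are the two ends; the middle is lived experience, which is why the output doesn't read like model slop.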
For Lindy (they build an AI assistant you can deploy to actually do tasks, not just suggest them), the answer wrote itself. I've been building a personal version of that for six months. I could point to something specific about how our orchestration approaches compare.
For Helicone (LLM API observability), I use it. The answer was two sentences about my actual Anthropic billing setup and one sentence about what I'd want to see in their next feature.
The companies where Job-Search Claude came back with a generic answer were the ones with no good specific hook. Same as any cover letter. Garbage in, garbage out. The automation just processes it faster.
I probably spent more total hours building this than I would have spent writing 25 cover letters by hand.
The story-bank.md took a weekend. The voice guide went through several rounds of feedback before the answers stopped sounding like a LinkedIn post. The browser automation took a while because form state is genuinely annoying to work with across a dozen different Greenhouse and Ashby implementations. And before any of this existed, I sent 8 applications in 3 months because I couldn't make myself do it.
The system didn't make the work disappear. It moved the work to a place where I could actually do it. Building story-bank.md on a Saturday is psychologically different from the recurring friction of every individual application.
That's probably the real thing here. Applying to jobs wasn't hard because the tasks were hard. It was hard because every single application required context-switching into a mental mode I'd been avoiding. Turning it into a batch process with pre-assembled material is mostly a psychological intervention dressed up in Python and browser automation.
I have 22 companies left in "ready."
While I was working through today's batches, a Job-Search agent I'd run overnight had already found 42 new companies to add to the pipeline. It did this without me asking. I'd queued it the night before and gone to sleep. By morning it had produced research docs on each company, scored them by fit, and staged the top four as "ready."
I'll do the next batch in a few days when I feel like it.
That sentence wasn't true four months ago.