OpenClaw is everywhere right now. An autonomous AI agent that connects to your apps, reads your messages, decides what to do on your behalf. It’s genuinely impressive, and I’ve been excited about it for weeks.

Then I tried to list what I’d actually want an autonomous agent to do for me, and the list was shorter than I expected. Most of my automations aren’t “go explore and figure out what I need.” They’re “do this specific thing at 7am every day.” For that, you don’t need autonomy. You need a cron job.

So I built a pipeline that pulls from about 100 sources every morning, runs everything through Claude, and delivers a curated digest to a Slack channel before I’ve finished my coffee. It costs about $1.50 a month to run, and it’s been one of the most useful things I’ve built this year.

The workflow

The reason this works is a combination that I don’t think enough people are using yet: Trigger.dev plus Claude. Trigger.dev is a code-first automation platform — basically n8n but you write TypeScript instead of dragging boxes around. On its own, that’s useful. But code-first means Claude can write your automations for you. You describe what you want, Claude writes the TypeScript, and Trigger.dev runs it in the cloud. No containers to manage, no servers to babysit, no timeout limits to worry about.

Vercel or Supabase Functions are built for the stuff behind your website — they’ll time out after seconds because nothing on your site should take that long. Trigger.dev will let a task run for up to 14 days (not exaggerating), chain it into the next one, retry if something fails, and do it all again tomorrow morning. Once you have it, you start seeing automations everywhere.
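
A minimal sketch of a scheduled task, assuming Trigger.dev’s v3 SDK and its declarative cron syntax (the task id here is hypothetical):

```ts
import { schedules } from "@trigger.dev/sdk/v3";

// Hypothetical task id; the cron pattern and timezone match the 07:00 CET run described below.
export const dailyDigest = schedules.task({
  id: "daily-digest",
  cron: {
    pattern: "0 7 * * *",
    timezone: "Europe/Paris", // CET/CEST
  },
  run: async (payload) => {
    // Fan out to the source fetchers, merge the results, analyse with Claude, post to Slack.
    console.log("Digest run started at", payload.timestamp);
  },
});
```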

You deploy with one command (npx trigger.dev deploy), you get a dashboard that shows every run, and the whole thing has built-in retries with exponential backoff. If one of my six source fetchers fails because a site is temporarily down, the others still complete and I get a partial digest instead of nothing. That kind of resilience used to require real engineering. Now it’s a config option.
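
The retry behaviour really is just an option on the task definition. A sketch of a fetcher with exponential backoff, assuming the v3 SDK’s retry option names (check the Trigger.dev docs for the current ones):

```ts
import { task } from "@trigger.dev/sdk/v3";

// Hypothetical fetcher task: if the source is temporarily down, the thrown error
// triggers a retry with exponential backoff instead of failing the whole digest.
export const fetchFeed = task({
  id: "fetch-feed",
  retry: {
    maxAttempts: 3,
    minTimeoutInMs: 1_000,
    maxTimeoutInMs: 30_000,
    factor: 2,
  },
  run: async (payload: { url: string }) => {
    const res = await fetch(payload.url);
    if (!res.ok) throw new Error(`Fetch failed with ${res.status}`);
    return { xml: await res.text() };
  },
});
```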

What the pipeline actually does

Every morning at 07:00 CET, a cron job fires and fans out to six parallel fetchers:

  • Hacker News — grabs front page stories matching my keywords, fetches the article text via Jina Reader, and pulls in the top comments via Algolia’s HN API (sketched just after this list). The comments are often more interesting than the articles.
  • Newsletters — parses RSS feeds from the newsletters I subscribe to.
  • Blogs — RSS/Atom feeds from company blogs I follow.
  • Reddit — Atom feeds from a few subreddits, plus the mod-bot’s auto-generated TL;DRs on popular threads.
  • YouTube — recent videos from my favourite channels, with full transcripts extracted via Supadata.
  • Twitter — via xcancel as an RSS proxy, because I’d rather not deal with Twitter but interesting people still post there.

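The Hacker News fetcher is mostly Algolia’s public HN API doing the work. A rough sketch, with a hypothetical keyword list:

```ts
const KEYWORDS = ["ai", "llm", "agents", "saas"]; // hypothetical keyword list

type HNHit = { objectID: string; title: string; url: string | null; points: number };

async function fetchHackerNews() {
  // Front-page stories via Algolia's public Hacker News search API
  const res = await fetch("https://hn.algolia.com/api/v1/search?tags=front_page&hitsPerPage=30");
  const { hits } = (await res.json()) as { hits: HNHit[] };

  const matching = hits.filter((hit) =>
    KEYWORDS.some((kw) => hit.title.toLowerCase().includes(kw))
  );

  // Pull the top comments for each matching story from the items endpoint
  return Promise.all(
    matching.map(async (hit) => {
      const item = await fetch(`https://hn.algolia.com/api/v1/items/${hit.objectID}`).then((r) => r.json());
      const topComments = (item.children ?? [])
        .slice(0, 5)
        .map((c: { text: string | null }) => c.text ?? "");
      return { title: hit.title, url: hit.url, points: hit.points, topComments };
    })
  );
}
```
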
All six run in parallel as separate tasks. When they’re done, the results merge into a single list and get sent to Claude Sonnet with a prompt that says, essentially: “You are a daily intelligence analyst for a startup founder focused on AI, SaaS, and growth. Filter ruthlessly. Only keep the 12-18 genuinely interesting items. Group them by theme. Generate some content ideas.”
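
In the actual pipeline each fetcher is its own Trigger.dev task; as a simplified sketch with plain functions, the fan-out and merge is essentially a Promise.allSettled, so one failing source still produces a partial digest:

```ts
type DigestItem = { source: string; title: string; url: string; summary?: string };

// Simplified: in the real pipeline these are separate Trigger.dev tasks,
// but the shape of the fan-out and merge is the same.
async function collectSources(
  fetchers: Array<() => Promise<DigestItem[]>>
): Promise<DigestItem[]> {
  const results = await Promise.allSettled(fetchers.map((fetcher) => fetcher()));

  // Keep whatever succeeded; a failed source just means a smaller digest.
  return results
    .filter((r): r is PromiseFulfilledResult<DigestItem[]> => r.status === "fulfilled")
    .flatMap((r) => r.value);
}
```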

Claude returns structured JSON, and the final task formats it as Slack Block Kit and posts it to my channel.

The whole pipeline runs in under two minutes. At 7AM, I open Slack, and there’s a themed intelligence brief waiting for me. The themes shift based on what’s actually happening. Some days it’s all about new model releases, other days it surfaces a pattern across three unrelated sources.

The parts that were surprisingly easy

RSS feeds just work. For all the talk about RSS being dead, it’s the most reliable part of the entire pipeline. You fetch the XML, parse it with fast-xml-parser, and you’ve got clean structured data. Eleven newsletter sources parse without a hitch; the only hiccup across all the feeds was that Reddit’s .rss endpoint actually returns Atom rather than RSS, so the entry path is different (parsed.feed.entry instead of parsed.rss.channel.item).
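
A sketch of the feed parsing with fast-xml-parser, handling both entry paths:

```ts
import { XMLParser } from "fast-xml-parser";

type FeedItem = { title: string; link: string; published?: string };

const parser = new XMLParser({ ignoreAttributes: false });

function parseFeed(xml: string): FeedItem[] {
  const parsed = parser.parse(xml);

  // Classic RSS 2.0: <rss><channel><item>
  if (parsed.rss?.channel?.item) {
    const items = Array.isArray(parsed.rss.channel.item)
      ? parsed.rss.channel.item
      : [parsed.rss.channel.item];
    return items.map((i: any) => ({ title: i.title, link: i.link, published: i.pubDate }));
  }

  // Atom (what Reddit's .rss endpoint actually returns): <feed><entry>
  if (parsed.feed?.entry) {
    const entries = Array.isArray(parsed.feed.entry) ? parsed.feed.entry : [parsed.feed.entry];
    return entries.map((e: any) => ({
      title: typeof e.title === "string" ? e.title : e.title?.["#text"],
      link: e.link?.["@_href"] ?? e.link, // Atom links live in an href attribute
      published: e.updated,
    }));
  }

  return [];
}
```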

The Claude analysis step was also surprisingly straightforward. I was worried about the output format, but Sonnet handles structured JSON output well if you give it a clear schema and tell it what role to play. The cost is about two cents per digest, which is negligible. I considered whether Opus would give better results, but Sonnet is more than capable for filtering and theming, and Opus costs about five times as much. Not worth it for this use case.
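
A sketch of that call using the @anthropic-ai/sdk; the model id and the trimmed-down schema here are illustrative:

```ts
import Anthropic from "@anthropic-ai/sdk";

const anthropic = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

type Digest = {
  themes: { name: string; items: { title: string; url: string; why: string }[] }[];
  contentIdeas: string[];
};

async function analyseItems(items: { source: string; title: string; url: string }[]): Promise<Digest> {
  const response = await anthropic.messages.create({
    model: "claude-sonnet-4-5", // illustrative model id
    max_tokens: 4096,
    system:
      "You are a daily intelligence analyst for a startup founder focused on AI, SaaS, and growth. " +
      "Filter ruthlessly: keep only the 12-18 genuinely interesting items, group them by theme, " +
      "and suggest content ideas. Respond with JSON only, matching: " +
      `{"themes":[{"name":"...","items":[{"title":"...","url":"...","why":"..."}]}],"contentIdeas":["..."]}`,
    messages: [{ role: "user", content: JSON.stringify(items) }],
  });

  // The first content block holds the JSON string when the model follows the schema
  const first = response.content[0];
  const text = first.type === "text" ? first.text : "";
  return JSON.parse(text) as Digest;
}
```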

Slack delivery was trivial. Set up an incoming webhook, POST some JSON, done. The only wrinkle is that Slack Block Kit has a 3,000 character limit per text section, so longer digests need chunking. A small utility function handles that.
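
A sketch of the chunking and the webhook POST; the naive character split is mine, and the webhook URL is assumed to live in an environment variable:

```ts
type SectionBlock = { type: "section"; text: { type: "mrkdwn"; text: string } };

// Slack Block Kit allows at most 3,000 characters per text object,
// so longer digests are split across multiple section blocks.
function toSectionBlocks(text: string, limit = 3000): SectionBlock[] {
  const blocks: SectionBlock[] = [];
  for (let i = 0; i < text.length; i += limit) {
    blocks.push({ type: "section", text: { type: "mrkdwn", text: text.slice(i, i + limit) } });
  }
  return blocks;
}

async function postToSlack(digestText: string) {
  await fetch(process.env.SLACK_WEBHOOK_URL!, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ blocks: toSectionBlocks(digestText) }),
  });
}
```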

Jina Reader is almost too simple — prepend r.jina.ai/ to any URL and you get clean plaintext back. No auth needed, handles most sites well.
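
In code that’s a single fetch:

```ts
// Jina Reader returns a clean plaintext/markdown rendering of the page.
async function fetchArticleText(articleUrl: string): Promise<string> {
  const res = await fetch(`https://r.jina.ai/${articleUrl}`);
  return res.text();
}
```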

The YouTube minefield

YouTube transcripts took most of the build time. You need the actual transcripts for a digest, but getting them programmatically is a minefield. Here’s everything I tried that didn’t work:

npm packages — there are several (youtube-transcript, youtube-captions-scraper, and a few others). They all claim to extract transcripts. They all returned zero items when I tested them. I suspect YouTube changed something on their end and the packages haven’t caught up, but regardless, none of them work as of early 2026.

yt-dlp — this actually works locally. You can extract subtitle tracks and get clean transcript text. But when you deploy to the cloud, YouTube blocks the request with LOGIN_REQUIRED. Cloud server IP ranges are apparently flagged, and there’s no practical way around it for a serverless function.

YouTube’s InnerTube API — the undocumented internal API that the website and apps use. The WEB client returns UNPLAYABLE for most videos. The ANDROID client works from a local machine but, again, LOGIN_REQUIRED from cloud IPs. Same story.

Caption track URLs from page scraping — you can find the caption track URLs in the video page HTML. They return HTTP 200, which feels promising, and then give you zero bytes of content for auto-generated captions. Manual captions sometimes work. Auto-generated ones, which are the majority, consistently return empty responses.

The get_transcript InnerTube endpoint — returns 400 FAILED_PRECONDITION. I never figured out what the precondition was.

After two days of this, I found Supadata. You give it a video URL and an API key, and it returns the transcript. The free tier is rate-limited to one request per second, which is fine for a daily digest pulling from seven channels. I added a 1.1-second delay between calls, and it just works. Every time. From cloud IPs. For free.
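
A sketch of the Supadata step plus the rate-limit spacing; the endpoint path, query parameter, and response field here are my reading of Supadata’s docs, so treat them as assumptions and check the current API reference:

```ts
// Assumed endpoint and response shape; check Supadata's docs for the current API.
async function fetchTranscript(videoUrl: string): Promise<string> {
  const res = await fetch(
    `https://api.supadata.ai/v1/youtube/transcript?url=${encodeURIComponent(videoUrl)}&text=true`,
    { headers: { "x-api-key": process.env.SUPADATA_API_KEY! } }
  );
  if (!res.ok) throw new Error(`Supadata request failed: ${res.status}`);
  const data = await res.json();
  return data.content; // plain-text transcript when text=true (assumed field name)
}

// The free tier allows roughly one request per second, so space the calls out.
async function fetchAllTranscripts(videoUrls: string[]): Promise<string[]> {
  const transcripts: string[] = [];
  for (const url of videoUrls) {
    transcripts.push(await fetchTranscript(url));
    await new Promise((resolve) => setTimeout(resolve, 1100)); // 1.1s between calls
  }
  return transcripts;
}
```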

Automation always has one component that’s unexpectedly hard, and the way you deal with it is by finding someone who’s already solved the problem and using their API. Supadata exists because someone else went through the same minefield and decided to make a business out of it.

YouTube is the most dramatic example, but half the internet doesn’t want you making automated requests. RSS feeds bypass most of it because they’re meant for automated consumption. Jina Reader handles article extraction from cloud IPs. Algolia gives you Hacker News data without touching the site. For everything else, a title and a link are often enough for Claude to work with.

The cost

Every service in the pipeline — Trigger.dev, YouTube Data API, Supadata, Jina Reader, Slack — has a free tier that covers this easily. The only actual cost is Claude Sonnet at two to five cents per call, once a day. I spend more on a single coffee than I spend in a month running this.

What it’s actually like

Some mornings there are eight items, some mornings eighteen. Claude’s filtering isn’t perfect — it occasionally includes something obvious or misses a connection I would have caught. But on balance, it catches more than I’d find manually, and across sources I wouldn’t have checked on a given day.

The content ideas section has been unexpectedly valuable. Claude spots patterns — “three separate sources discussed X this week” — and suggests angles I hadn’t considered. It’s helped me surface and comment on content I otherwise would never have seen.

The YouTube transcripts are probably the highest-value component. Having Claude summarise a 45-minute video into key points, with community commentary from Hacker News or Reddit layered in, is often as useful as watching the video. If it grabs my attention, I can watch the whole thing.

The thing that compounds is the environment. Every API key I added for the digest — Slack, YouTube, Claude, Supadata — is already there when I want to build the next automation. The second project starts further along than the first. The third further still. You’re building up a toolkit, not just a single tool.

I keep thinking about this when I see the OpenClaw demos. They’re genuinely cool, but when I sit down and ask “what would I actually want an autonomous agent to handle?”, the answer is almost nothing. I don’t need an AI deciding what to do with my Slack at 3am. I need my digest to run, my notes to post, my monitoring to check. Well-defined things, on a schedule, that I set up once and forget about. The exciting part isn’t the autonomy, it’s that the setup takes an afternoon instead of a month.

The whole thing is TypeScript. It runs in the cloud. It costs less than a coffee. And it just works, every morning, without me thinking about it.


The daily digest pipeline runs on Trigger.dev. The full source code is on GitHub if you want to fork it and build your own.