Making Agents Work While You Sleep
Published: 2026-03-03 · 11 min

Most of the writing about AI agents focuses on prompting. What to say. How to structure your request. Which model is best for which task. It’s all reactive — you ask, it answers.
That’s not how a production agent system works.
The part of AI agent operations that nobody documents well is the scheduled layer. The work that fires without any human input. The cron jobs that trigger agent sessions at 9 AM and 9 PM and 2 AM. The morning brief that’s sitting in your Telegram channel when you wake up. The overnight research that’s ready before your first meeting. The maintenance tasks that run in the background without you thinking about them.
That’s what this is about: how to build agent infrastructure that runs continuously, not just when you’re awake to prompt it.
The Cron Layer
A cron job is a scheduled task. In a Unix system, you configure cron entries that say: at this time, on this schedule, run this command. Standard system automation.
What makes cron useful for agent operations is that the command can be “spin up an agent session and run this task.” Every scheduled task becomes a miniature agent run, with full access to the agent’s toolset, memory architecture, and communication channels.
My current cron schedule runs four recurring jobs plus several one-shot reminders:
| Job | Schedule | Agent | What It Does |
|---|---|---|---|
| Mission Pulse | 9AM, 12PM, 3PM, 6PM, 9PM CST | Main | Check in-flight work, handle build queue, dispatch pending tasks |
| qmd Reindex | Nightly | Observer | Re-scan Obsidian vault for new files, update QMD embeddings |
| Session Auto-Prune | Nightly | Observer | Rotate session JSONL files over size threshold |
| Morning Brief | After 6AM CST (once) | Main | Compile status report, surface what needs attention |
Each row represents a real task that runs on a machine I own, without me doing anything. The agent handles it, produces output, and either reports back or handles the result directly.
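The recurring rows map to ordinary crontab entries. A sketch under assumptions: the `agent run` CLI is a stand-in for whatever command starts an agent session in your stack, and `CRON_TZ` is a cronie/vixie-cron extension, not part of POSIX cron:

```cron
# Times below are CST (CRON_TZ is a cronie extension; plain POSIX cron uses system time)
CRON_TZ=America/Chicago

# Mission Pulse: five fires a day
0 9,12,15,18,21 * * *  agent run --task mission-pulse

# Nightly maintenance: reindex the vault, then prune session transcripts
0 2 * * *              qmd update && qmd embed
30 2 * * *             agent run --task session-prune
```

The morning brief ("first session after 6 AM, once") doesn't map cleanly to a single cron line, which is part of why it runs as its own job.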
Mission Pulse: The Heartbeat
The Mission Pulse is the most important scheduled job in the stack. It fires five times a day and does one thing: it checks on everything.
When Mission Pulse fires, the main agent reads ops/in-flight.md (the active task tracker), checks the shared context checkpoint file, looks for any queued messages or pending dispatches, and evaluates whether anything has gone stale or needs attention.
The pulse is named deliberately. It’s the heartbeat of the operation. Without it, agent work can stall silently — a dispatch goes out, the receiving agent finishes, the completion message lands somewhere, but no one looks at it until a human happens to check. The pulse eliminates that gap. Five times a day, someone is looking.
When the pulse finds something — a stale dispatch, a blocked task, a completed artifact waiting for review — it surfaces it. The main agent sends a Telegram notification with exactly what’s needed and what the next action is. When there’s nothing to surface, it silences itself.
The signal-to-noise discipline matters. If every pulse fire generates a notification, you stop reading them. The pulse only notifies when there’s something actionable. The rest of the time, it confirms state and updates internal records without producing any output.
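As a minimal sketch of that discipline: a check that prints only when something is stale and stays silent otherwise. The tracker format (each task line starting with an ISO date) and the `pulse_check` helper are illustrative assumptions, not the actual in-flight.md schema:

```shell
# pulse_check: sketch of the "notify only when actionable" rule.
# Assumes a hypothetical tracker where each task line starts with an
# ISO date (YYYY-MM-DD); anything dated before the cutoff is "stale".
# Usage: pulse_check TRACKER_FILE CUTOFF_DATE
pulse_check() {
    tracker="$1"
    cutoff="$2"    # e.g. "$(date -d '-2 days' +%Y-%m-%d)" with GNU date
    # Collect lines whose leading date sorts before the cutoff
    stale=$(awk -v cutoff="$cutoff" \
        '$1 ~ /^[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]$/ && $1 < cutoff' \
        "$tracker")
    if [ -n "$stale" ]; then
        echo "STALE:"        # in the real job this would go out via Telegram
        echo "$stale"
    fi                       # nothing actionable: print nothing, stay silent
}
```

ISO dates sort lexicographically, so a plain string comparison is enough; no date arithmetic needed inside the check itself.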
The Morning Brief Pattern
The morning brief is its own job, separate from Mission Pulse, and it only fires once per day: the first time after 6 AM CST that a session runs.
The brief covers:

- What shipped overnight — tasks completed since the previous morning brief
- What’s blocked — active dispatches with no progress and a recommended action
- Security note — one line from the security audit log (full details go to the dedicated security channel)
- Next three actions — the specific things I’m doing without being asked
The brief lands in my Telegram DM. By the time I’m at my desk, I know: what got done, what needs a decision, what’s happening in the background infrastructure, and what the agent is already working on.
The format is deliberately compressed. Three bullets, one line each, then next actions. It’s a status card, not a report. The goal is 30-second read time, not comprehensive documentation. If something needs more explanation, there’s a longer artifact somewhere (a checkpoint file, a research doc) and the brief cites the path.
This pattern sounds simple. The reason most setups don’t have it: it requires integrating the scheduled task system with the agent’s awareness of ongoing work. The brief isn’t just a timestamp — it has to know what’s in flight, what’s completed, what the agent has already done about it. That requires the memory architecture and the in-flight tracker to actually be maintained.
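The compilation step itself can be very small once the trackers exist. A sketch, assuming hypothetical one-line-per-item files for completed and blocked work (not the actual schema):

```shell
# morning_brief: sketch of the compressed status-card format.
# The file paths and line formats are assumptions: done_file holds one
# completed task per line, blocked_file one blocked dispatch per line.
morning_brief() {
    done_file="$1"
    blocked_file="$2"
    shipped=$(wc -l < "$done_file" | tr -d ' ')
    blocked=$(wc -l < "$blocked_file" | tr -d ' ')
    echo "Shipped overnight: $shipped"
    echo "Blocked: $blocked"
    echo "Security: see audit log"   # one line; details live in the security channel
}
```

The hard part is not this function — it's keeping the tracker files accurate enough that these counts mean something.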
QMD Reindex: Keeping Memory Fresh
The semantic search layer over my Obsidian vault is only useful if it stays current. When new research lands in the vault, it has to get indexed before it’s searchable.
The qmd reindex job runs nightly and does two things:
```
qmd update   # scan for new and modified files, rebuild the text index
qmd embed    # embed any new documents into the vector store
```

This keeps the 884-file vault (currently at 4,563 vectors) reflecting the actual state of knowledge, not the state from three days ago when I last ran it manually.
The job runs via a launchd plist (macOS equivalent of cron), fires at a fixed time, runs the two commands, and writes a completion entry to a log file. No agent session required for this one — it’s a direct shell job. But the result feeds directly into agent capability: after the nightly reindex, tomorrow’s agent sessions have full semantic search access to everything written today.
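A launchd job definition for this kind of nightly run might look like the sketch below. The label, log path, and install location are placeholders, not the actual plist:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key><string>com.example.qmd-reindex</string>
  <key>ProgramArguments</key>
  <array>
    <string>/bin/sh</string>
    <string>-c</string>
    <string>qmd update &amp;&amp; qmd embed &gt;&gt; $HOME/logs/qmd.log 2&gt;&amp;1</string>
  </array>
  <key>StartCalendarInterval</key>
  <dict>
    <key>Hour</key><integer>2</integer>
    <key>Minute</key><integer>0</integer>
  </dict>
</dict>
</plist>
```

Install it under ~/Library/LaunchAgents/ and register it with `launchctl load`; launchd then fires it at 2:00 AM without any agent session involved.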
The job reports when there’s a problem. When the qmd script can’t find a dependency or the embed process errors, the Observer agent picks it up and surfaces a HEARTBEAT_FAILED status. That’s how I found out the reindex script path had drifted after a refactor — the scheduled job failed, the agent caught it, I fixed it.
Session Auto-Prune: Infrastructure Maintenance
Agent sessions write to JSONL transcript files. Those files grow. At scale, a single active agent session can accumulate multiple megabytes of transcript in a day of continuous operation.
Left unchecked, this creates two problems: disk space consumption and — more importantly — slow session load times as the system has to parse large files to reconstruct context.
The session auto-prune job runs nightly and checks all agent JSONL files against a size threshold. Files over 2MB get flagged for rotation. The job also checks the OpenClaw config for any schema issues that would cause the cleanup to fail.
This is pure infrastructure maintenance. No one should have to think about it. The job runs, files get managed, the system stays healthy. When something goes wrong with it — and occasionally something does — the Observer agent reports the issue and the fix is usually straightforward.
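The rotation logic is simple enough to sketch. The 2MB threshold comes from the job description above; the `.1` rotation suffix and the `prune_sessions` helper are assumptions for illustration:

```shell
# prune_sessions: sketch of the nightly JSONL rotation.
# Files over the threshold are moved aside and replaced with an empty
# transcript, so the next session load parses a small file.
prune_sessions() {
    dir="$1"
    limit=$((2 * 1024 * 1024))       # 2MB threshold from the job description
    for f in "$dir"/*.jsonl; do
        [ -f "$f" ] || continue      # skip if the glob matched nothing
        size=$(wc -c < "$f" | tr -d ' ')
        if [ "$size" -gt "$limit" ]; then
            mv "$f" "$f.1"           # rotate: keep the old transcript aside
            : > "$f"                 # start a fresh, empty transcript
            echo "rotated: $f"
        fi
    done
}
```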
Reminder LaunchAgents: One-Shot Scheduling
For time-specific reminders that aren’t recurring, I use macOS LaunchAgents. These fire once at a specified wall clock time, run a command, and unregister.
Current active ones:

- CB-002 QA Reminder — fires at 8:30 PM to remind me to review the Cozy Books video before uploading
- Delete Upwork Reminder — fires at 9:00 PM (account closure action I’ve scheduled)
- Security Rotation Reminder — fires at 9:30 PM for credential rotation
The pattern is: schedule a time-sensitive action as a LaunchAgent, the LaunchAgent fires and triggers an agent session that sends a Telegram notification, I handle the action, the agent records the completion.
This is different from cron in a meaningful way. Cron is for recurring scheduled work. LaunchAgents (in one-shot mode) are for things that need to happen once at a specific time — account deletions, time-bound reviews, anything where “remind me tonight” needs to actually fire.
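A one-shot version of the same plist idea, sketched with a placeholder label and command: `StartCalendarInterval` on its own repeats daily, so the job's command removes its own label after firing, which is what makes it fire exactly once:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<plist version="1.0">
<dict>
  <key>Label</key><string>com.example.reminder-security-rotation</string>
  <key>ProgramArguments</key>
  <array>
    <string>/bin/sh</string>
    <string>-c</string>
    <!-- send-reminder is a placeholder for whatever triggers the agent
         session; removing the label afterwards makes this one-shot -->
    <string>send-reminder "rotate credentials"; launchctl remove com.example.reminder-security-rotation</string>
  </array>
  <key>StartCalendarInterval</key>
  <dict>
    <key>Hour</key><integer>21</integer>
    <key>Minute</key><integer>30</integer>
  </dict>
</dict>
</plist>
```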
The Build Queue
Mission Pulse also checks for a build queue — a list of tasks that are approved but waiting for execution. The pattern: something gets decided during a live session, it gets added to the queue, and the next pulse or the overnight run picks it up and dispatches it.
The queue prevents two things:
Live session depth — if a task would take 30 minutes of agent compute and it’s not urgent, adding it to the queue means it gets handled overnight rather than holding up the current session.
Lost work — decisions made in conversation that don’t get acted on. If you say “we should do X” but don’t dispatch it immediately, it usually doesn’t happen. The build queue captures it in the moment and makes sure it fires later.
The queue currently lives in shared-context/queue.md. In practice, most things don’t sit in the queue long — the five daily pulses mean most queued work gets dispatched within a few hours.
Overnight Output
Here’s what a typical overnight run produces:
The 9PM Mission Pulse fires, checks in-flight, dispatches two or three queued tasks to the appropriate agents. The engineering agent builds a small utility and commits it. The writing agent drafts an article outline. The observer agent runs the security audit.
While I’m asleep, those sessions run. By morning:
- The utility is built and committed to the repo
- The article outline is in shared-context/drafts/
- The security audit findings are in the security channel
- The qmd reindex has run and the vault is current
- Session JSONL files have been pruned
The morning brief surfaces the summary: 3 tasks shipped overnight, 1 blocked, 0 security issues, next actions are X, Y, Z.
That’s what continuous operation looks like in practice. Not a single session with a human prompting every step — a set of scheduled jobs, dispatch chains, and agent sessions that produce output on a cadence that doesn’t require anyone to be awake.
What Makes This Different
Most AI content is about getting better answers to questions. Better prompts, better context, better models.
That’s useful. It’s also missing the more valuable half of what AI agents can do.
Scheduled agent operations compound in a way that reactive prompting doesn’t. Every overnight run, every pulse, every background maintenance job produces output that wouldn’t exist if someone had to manually trigger it. Over weeks and months, that accumulates into a significant body of work that’s been done in the background while the human focus was elsewhere.
The setup investment is real. Building a cron schedule, writing the job scripts, connecting the jobs to the agent infrastructure, setting up notification routing — that’s probably two or three days of engineering work for a solid initial setup.
The return on that investment is indefinite. Once the system is running, it runs. The cron schedule doesn’t require attention. The morning brief arrives every morning. The build queue drains overnight. The infrastructure stays maintained.
The goal isn’t to automate everything. It’s to automate the recurring, the predictable, and the time-insensitive — freeing the live interaction layer for the decisions and judgment calls that actually require a human.
Building Your Own
The minimum viable scheduled agent setup has three components:
A recurring pulse — fires at least twice a day, checks on active work, surfaces anything stale or blocked. This alone prevents the “dispatch and forget” failure mode where agent work silently stalls.
A morning brief — fires once per day in the morning, compiles status, surfaces the most important things. Makes the first 5 minutes of the workday useful instead of spent catching up.
A maintenance job — whatever your system needs to stay healthy. For me it’s the qmd reindex and session pruning. For your setup it might be something different. The point is that infrastructure maintenance shouldn’t require manual attention.
From there, add scheduled jobs as real recurring needs emerge. Don’t pre-build a cron schedule for things you don’t actually need yet. The schedule should reflect actual operating patterns, not an imagined ideal.
Start with the pulse. Everything else is optional.
Questions about setting up scheduled agent operations? Email me: [email protected]
© Ridley Research. All rights reserved.
