Core Skill

15 Scheduled Tasks That Run Your Local SEO on Autopilot

Automated monitoring, reporting, content drafting, and prospecting. 15 task templates that execute on schedule with three approval tiers — autonomous, queue, and notify.

Get on GitHub

Scheduled Tasks: 15 Automated Workflows That Run While You Sleep

Monday morning. You open your laptop and the weekly ranking reports for all 12 clients are already written. Two clients had ranking drops — alerts landed in Slack on Saturday. Four GBP post drafts are queued for your review. A prospect audit you scheduled Friday afternoon is sitting in your inbox, ready to attach to the proposal.

None of this required you to be at your desk. The tasks ran on schedule. The outputs landed in the right places. The alerts fired when thresholds were crossed. The drafts are waiting for your approval before anything goes live.

This is what local SEO looks like when the recurring work is automated.

What Scheduled Tasks Are

Fifteen task templates covering four categories of recurring local SEO work:

Monitoring (5 tasks) — the things you need to watch continuously:

  • m1-rankings-monitor — Weekly geogrid scan. Tracks ARP, ATRP, and SoLV week-over-week. Alerts when rankings degrade.
  • m2-review-velocity — Daily review monitoring. Catches new reviews, flags low ratings, tracks response rates.
  • m3-gbp-change-monitor — Daily GBP surveillance. Detects unauthorized edits to business name, categories, hours, or attributes.
  • m4-lsa-rankings-monitor — Weekly LSA position tracking for businesses running Local Services Ads.
  • m5-ai-visibility-monitor — Monthly AI platform visibility check. Tracks mentions across ChatGPT, Gemini, Perplexity, and AI Overviews.

Reporting (4 tasks) — the deliverables clients expect:

  • r1-weekly-report — Weekly performance snapshot with key metrics and directional changes.
  • r2-monthly-client-report — Comprehensive monthly report with executive summary, metrics tables, and recommended actions.
  • r3-multi-location-rollup — Portfolio-level summary for multi-location brands. Aggregates across all locations.
  • r4-quarterly-business-review — QBR document with trend analysis, competitive shifts, and strategic recommendations.

Engagement (4 tasks) — the work that keeps profiles active:

  • e1-gbp-post-drafts — Monthly GBP post creation. Four posts per month — service spotlight, seasonal offer, educational, social proof. Held for approval.
  • e2-review-response-drafts — Daily review response drafting. Professional responses queued for your review before posting.
  • e3-citation-audit — Monthly citation health check. Catches new NAP inconsistencies and tracks correction status.
  • e4-page-content-audit — Quarterly on-page content review. Identifies thin content, missing schema, and optimization opportunities.

Prospecting (2 tasks) — finding and qualifying new business:

  • p1-prospect-audit — On-demand prospect analysis. Run a quick audit on a potential client and get a findings summary for your pitch.
  • p2-competitor-monitor — Monthly competitive landscape tracking. Watches competitor ranking shifts, new reviews, and GBP changes.

See It Work: A Week of Automated Tasks

Monday 7:00 AM — m1-rankings-monitor runs for all clients

Output for Keystone Insurance Buffalo:
  → scans/2026-04-07-geogrid.md
  ARP: 7.6 (improved from 8.2 last week; lower is better)
  SoLV: 58% (up from 51%)
  No alert triggered (improvement trend)

Output for Valley Plumbing Phoenix:
  → scans/2026-04-07-geogrid.md
  ARP: 5.1 (worsened from 4.3 last week)
  SoLV: 42% (down from 55%)
  🚨 ALERT → Slack: "Valley Plumbing Phoenix — ARP degraded 0.8 positions.
  SoLV dropped 13 points. See scans/2026-04-07-geogrid.md"

Tuesday 8:00 AM — m2-review-velocity flags a new 1-star review

🚨 Slack alert: "⭐ 1-star review — Keystone Insurance Buffalo
'Filed a claim 3 weeks ago, no one has called me back.'
→ Draft response queued: drafts/2026-04-08-review-response.md
Reply APPROVE to post | REJECT to discard | EDIT [notes] to revise"

Wednesday — e2-review-response-drafts generates responses for 3 new reviews

✍️ Slack: "3 review responses drafted for Keystone Insurance Buffalo.
2 five-star acknowledgments, 1 three-star with concern addressed.
Preview: drafts/2026-04-09-review-responses.md
Reply APPROVE to post | REJECT to discard"

First of month — e1-gbp-post-drafts runs

✍️ Slack: "4 GBP posts drafted for Keystone Insurance Buffalo.
Service spotlight: Homeowner's Insurance Review
Seasonal: Spring Storm Preparedness
Educational: Understanding Deductibles
Social proof: Client success story
Preview: drafts/2026-04-01-gbp-posts.md
Reply APPROVE to publish | REJECT to discard | EDIT [notes] to revise
Expires: 2026-04-09"

The monitoring runs autonomously. The content waits for your approval. The alerts surface what needs attention. The rest stays quiet.
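The alert decision above can be sketched as a simple threshold check. The threshold values here (0.5 ARP positions, 10 SoLV points) are illustrative assumptions, not the templates' actual defaults:

```python
def needs_alert(arp_now: float, arp_prev: float,
                solv_now: float, solv_prev: float,
                arp_threshold: float = 0.5,     # assumed default
                solv_threshold: float = 10.0    # assumed default
                ) -> bool:
    """Return True when rankings degraded past either threshold.

    ARP (Average Ranking Position): a higher number is a worse rank.
    SoLV: share of local visibility, in percentage points.
    """
    arp_degraded = (arp_now - arp_prev) >= arp_threshold
    solv_dropped = (solv_prev - solv_now) >= solv_threshold
    return arp_degraded or solv_dropped

# Valley Plumbing: ARP 4.3 -> 5.1, SoLV 55 -> 42  => alert fires
assert needs_alert(5.1, 4.3, 42, 55)
# Keystone: ARP 8.2 -> 7.6 (improvement), SoLV 51 -> 58 => stays quiet
assert not needs_alert(7.6, 8.2, 58, 51)
```

This is why the Keystone scan stayed quiet while Valley Plumbing triggered a Slack alert: only degradation past a threshold makes noise.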

Three Approval Tiers

Not every task should run without oversight. The system uses three tiers based on risk:

Tier 1: Autonomous — runs, writes output, no human needed. Monitoring and internal reporting. Rankings scans, review velocity checks, competitor tracking. The output lands in the brief; you review it when convenient.

Tier 2: Queue for Approval — runs, produces a draft, holds until you approve. GBP posts, review responses, client-facing reports. Nothing goes live until a human says yes. Drafts expire after 72 hours if not acted on.

Tier 3: Notify Before and After — highest stakes. Notifies before execution, waits for explicit confirmation, executes, then confirms completion. Client emails, GBP publishing, anything that’s hard to reverse. Double confirmation ensures nothing goes out accidentally.

Each task has a default tier. You can override per client in the brand brief configuration.
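The three tiers amount to a small routing decision at the end of each task run. A minimal sketch, assuming a 72-hour draft TTL as described above (the type and field names are illustrative, not the system's actual API):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from enum import Enum
from typing import Optional

class Tier(Enum):
    AUTONOMOUS = "autonomous"        # runs and writes output, no gate
    QUEUE = "queue-for-approval"     # draft held until a human approves
    NOTIFY = "notify-before-after"   # explicit confirmation both sides

@dataclass
class TaskResult:
    goes_live: bool
    expires_at: Optional[datetime]

def route(tier: Tier, now: datetime, draft_ttl_hours: int = 72) -> TaskResult:
    if tier is Tier.AUTONOMOUS:
        # Output lands in the brief immediately.
        return TaskResult(goes_live=True, expires_at=None)
    if tier is Tier.QUEUE:
        # Draft is held; it expires if no one acts within the TTL.
        return TaskResult(goes_live=False,
                          expires_at=now + timedelta(hours=draft_ttl_hours))
    # NOTIFY: nothing executes until an explicit confirmation arrives.
    return TaskResult(goes_live=False, expires_at=None)
```

A queued GBP post drafted Monday 7 AM would therefore expire Thursday 7 AM if nobody approves, rejects, or edits it.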

What a Task Definition Looks Like

Every task is a structured template with everything Claude needs to execute:

name: m1-rankings-monitor
description: Weekly geogrid ranking scan. Tracks ARP, ATRP, and SoLV
  week-over-week. Alerts when rankings degrade.
schedule: weekly — Monday 7 AM
tier: autonomous
skills: geogrid-analysis, localseodata-tool
mcps: LocalSEOData

The task file includes the skills to invoke, fallback guidance if a tool is unavailable, a verification checklist, the prompt template with variables, output file paths, and alert thresholds.
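Put together, a fuller definition might look like the sketch below. The extra field names (`output`, `alerts`, `fallback`) and threshold values are illustrative assumptions, not the canonical TASK.md schema:

```yaml
name: m1-rankings-monitor
schedule: weekly — Monday 7 AM
tier: autonomous
skills: [geogrid-analysis, localseodata-tool]
mcps: [LocalSEOData]
output: scans/{date}-geogrid.md      # assumed path template
alerts:
  arp_degrade_positions: 0.5         # assumed threshold
  solv_drop_points: 10               # assumed threshold
  channel: slack
fallback: |
  If LocalSEOData is unavailable, log the failure to the session log
  and retry on the next scheduled run.
```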

You don’t write these from scratch. The 15 templates are ready to use. Configure the schedule, set the approval tier, and the task runs.

Who Uses Scheduled Tasks and How

Freelancers: Monitoring Without Manual Checking

You can’t manually check rankings for 10 clients every Monday. But you need to know when something changes. Scheduled monitoring tasks watch for you and only surface alerts when thresholds are crossed. The weekly report writes itself.

Agencies: Consistent Client Deliverables

Monthly reports used to take a full day across all clients. Now the task generates them on the 1st. Review responses that used to wait until someone had time are drafted daily and queued for approval. GBP posts that used to be an afterthought are drafted monthly with seasonal awareness.

Consistency improves because the system doesn’t forget. Every client gets the same service level regardless of how busy the team is.

Multi-Location Brands: Portfolio-Level Visibility

The rollup report aggregates across all locations automatically. Identify which locations are trending up, which need intervention, and where patterns emerge across the portfolio. The QBR builds itself from three months of accumulated task outputs.
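The core of that rollup is ordinary aggregation. A minimal sketch of the idea; the data shape, location names, and the "2 positions worse than average" flag rule are illustrative assumptions, not the template's real schema:

```python
from statistics import mean

# Hypothetical latest-scan metrics per location.
locations = {
    "Buffalo":   {"arp": 7.6,  "solv": 58},
    "Rochester": {"arp": 5.2,  "solv": 71},
    "Syracuse":  {"arp": 11.4, "solv": 33},
}

portfolio_arp = mean(loc["arp"] for loc in locations.values())

# Flag locations whose ARP is clearly worse than the portfolio average.
needs_attention = [name for name, loc in locations.items()
                   if loc["arp"] > portfolio_arp + 2]
```

With these numbers the portfolio ARP is about 8.1, and Syracuse is the location flagged for intervention.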

Without Scheduled Tasks vs. With

Without tasks: You wake up Monday, open 6 tabs, manually run geogrid scans for each client, compare to last week’s numbers (which you hopefully saved somewhere), draft a Slack message to the team about anything that changed, start writing the weekly report.

With tasks: You wake up Monday, check Slack. Two alerts for ranking drops. Weekly reports for all clients already in their brief directories. You spend your morning on the two clients that need attention instead of checking all twelve.

Same coverage. Different workload. The difference is whether recurring work runs on your energy or on a schedule.

How Tasks Connect to Briefs and Specs

Every task writes its output into the client’s brief directory structure. The brief’s session log gets a one-line entry. Findings get updated if something critical surfaces. The next action gets refreshed.

The output follows a standardized schema — consistent file naming, required sections, status fields, summary writing standards. Every task output looks the same, which means every report, scan, and draft is predictable and parseable.
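As a rough illustration of what "predictable and parseable" means in practice, an output file might open like this. The frontmatter field names are assumptions for illustration, not the documented schema:

```markdown
---
task: m1-rankings-monitor
client: keystone-insurance-buffalo
date: 2026-04-07
status: complete
summary: ARP improved 8.2 to 7.6; SoLV up 7 points; no alert.
---

# Geogrid Scan — Keystone Insurance Buffalo — 2026-04-07
```

Because every output carries the same fields, the rollup and QBR tasks can read three months of files without special-casing any of them.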

Approval workflows are configured per client in the brand brief. One client might want all GBP posts auto-approved. Another might require every piece of content queued for review. The configuration lives with the client, not in the task definition.
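A per-client override block in the brand brief might look like the sketch below. The key names are hypothetical; only the idea (the client config overrides the task's default tier) comes from the system as described:

```yaml
# In the client's brand brief — overrides the tasks' default tiers.
approval_overrides:
  e1-gbp-post-drafts: autonomous        # this client pre-approved GBP posts
  e2-review-response-drafts: queue      # keep review responses gated
  r2-monthly-client-report: notify      # confirm before anything is emailed
```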

Get Started — Tasks Are Free and Open Source

All 15 task templates are included in the LocalSEOSkills repository. MIT licensed. Ready to configure and schedule.

First task to try:

"Set up weekly ranking monitoring for [Business Name] targeting [keyword]."

Claude configures the m1-rankings-monitor task, links it to the client’s brief, and schedules the first run. From that point forward, ranking scans run automatically and alerts fire when something changes.
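If your scheduler takes standard five-field cron expressions, the "weekly — Monday 7 AM" schedule maps to the line below. The `run-task` command and client slug are hypothetical placeholders, not a real CLI from the repository:

```cron
# minute hour day-of-month month day-of-week (1 = Monday)
0 7 * * 1  run-task m1-rankings-monitor --client keystone-insurance-buffalo
```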

Automation that works while you don’t.


Skill Documentation

For technical details on all 15 task templates, TASK.md format, scheduling configuration, and approval tier setup, see the full tasks documentation.

All 36 skills. Free. Open source. Get on GitHub →