# Output Schema — Standardized Task Output Format
Every scheduled task generates a dated markdown file following consistent naming and structural conventions. The output schema ensures that every report, scan, draft, and alert is predictable, parseable, and cross-referenced with the client brief.
## File Naming Convention
All outputs follow a single pattern:

```
briefs/{brand}/{location}/{category}/{YYYY-MM-DD}-{task-type}.md
```

Examples:

```
briefs/keystone-insurance/buffalo/reports/2026-04-01-weekly.md
briefs/keystone-insurance/buffalo/scans/2026-04-01-geogrid.md
briefs/keystone-insurance/buffalo/drafts/2026-04-01-gbp-posts.md
briefs/keystone-insurance/buffalo/alerts/2026-04-01-review-drop.md
briefs/keystone-insurance/buffalo/prospects/2026-04-04-prospect-audit.md
```
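The convention above can be sketched as a small path builder. This is a minimal illustration, not the system's actual implementation; the `output_path` helper and `CATEGORIES` set are hypothetical names for this example.

```python
from datetime import date

# Hypothetical helper illustrating the naming convention; the real
# system may construct these paths differently.
CATEGORIES = {"reports", "scans", "drafts", "alerts", "prospects"}

def output_path(brand: str, location: str, category: str,
                task_type: str, run_date: date) -> str:
    """Build briefs/{brand}/{location}/{category}/{YYYY-MM-DD}-{task-type}.md."""
    if category not in CATEGORIES:
        raise ValueError(f"unknown category: {category}")
    return (f"briefs/{brand}/{location}/{category}/"
            f"{run_date.isoformat()}-{task_type}.md")

print(output_path("keystone-insurance", "buffalo", "scans",
                  "geogrid", date(2026, 4, 1)))
# → briefs/keystone-insurance/buffalo/scans/2026-04-01-geogrid.md
```

Centralizing the format in one function keeps every task writing to the same predictable location.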
## Output Categories
| Category | Purpose |
|---|---|
| reports/ | Weekly, monthly, quarterly summaries |
| scans/ | Geogrid scans, citation audits, competitor snapshots |
| drafts/ | GBP posts, review responses, page content awaiting approval |
| alerts/ | Monitoring alerts requiring immediate attention |
| prospects/ | Prospect audits for sales use |
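Because the path encodes brand, location, category, date, and task type, an output path can also be parsed back into its components. A sketch of such a parser, assuming the filename segment uses lowercase letters, digits, and hyphens (the regex and field names are illustrative assumptions):

```python
import re

# Hypothetical parser for the output path convention; the group names
# are assumptions made for this illustration.
PATH_RE = re.compile(
    r"^briefs/(?P<brand>[^/]+)/(?P<location>[^/]+)/"
    r"(?P<category>reports|scans|drafts|alerts|prospects)/"
    r"(?P<date>\d{4}-\d{2}-\d{2})-(?P<task_type>[a-z0-9-]+)\.md$"
)

def parse_output_path(path: str) -> dict:
    """Split an output path into brand, location, category, date, task_type."""
    m = PATH_RE.match(path)
    if not m:
        raise ValueError(f"path does not match convention: {path}")
    return m.groupdict()

print(parse_output_path(
    "briefs/keystone-insurance/buffalo/alerts/2026-04-01-review-drop.md"))
```

This makes outputs machine-discoverable: a reporting job could glob a category directory and group files by date without any extra metadata.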
## Mandatory File Structure
Every output file contains these sections in order:
### Header

```
# {Task Name} — {Business Name} — {Location}
- Date: {YYYY-MM-DD}
- Task: {task-id}
- Type: monitoring | reporting | execution | prospecting
- Tier: autonomous | queue | notify
```
### Status

```
## Status
- Result: SUCCESS | PARTIAL | FAILED
- Tools: {list of tools invoked}
- Errors: {any errors encountered, or "none"}
- Runtime: {execution duration}
```
### Summary

```
## Summary
{2-4 sentence narrative prioritizing insights}
```
### Findings

```
## Findings
{Task-specific content — metrics, analysis, recommendations}
```
### Recommended Actions (if applicable)

```
## Recommended Actions
| Priority | Action | Effort | Impact |
|----------|--------|--------|--------|
```
### Approval Required (queue/notify tiers only)

```
## Approval Required
- Status: PENDING | APPROVED | REJECTED
- Reviewer: {reviewer}
- Date: {approval date}
- Notes: {reviewer notes}
```
### Delivery Log (when notifications are sent)

```
## Delivery Log
- {timestamp} — {notification type} — {channel} — {status}
```
## Status Field Definitions
- SUCCESS — all tools executed, complete data returned
- PARTIAL — some tools failed; output generated from available data with errors noted
- FAILED — task incomplete; errors logged and brief flagged for review
The system does not silently skip failures. Every error is documented and flagged for human review.
## Summary Writing Standards
Five rules govern summary composition:
- Lead with the most critical finding — what matters most comes first
- Show directional change — “improved from X to Y” not just “is Y”
- Include one concrete next action — what should happen based on these findings
- Never include raw API responses — interpret the data, don’t dump it
- Write for a 30-second colleague briefing — concise, actionable, professional
Good example:

```
Rankings improved across 6 of 7 grid points; downtown cluster
remains weak. 3 new reviews healthy; 1-star review unanswered
since yesterday. Recommend responding to 1-star review today
and monitoring downtown grid next week.
```
Bad example:

```
Scan completed. ARP: 7.6. SoLV: 58%. Reviews: 3. See findings.
```
The good example tells you what happened, what matters, and what to do next. The bad example makes you read the findings section to understand anything.
## Brief Update Protocol
After every task completion, the agent adds one line to the location brief’s Session Log:

```
[DATE] — {task type} complete → {one-line finding} → see {output file path}
```
If the task surfaces Critical findings, they are added to the brief’s Findings section, and the Next Action is refreshed if the new finding changes the priority.
The brief stays lean — one line per task run. The full output lives in the file.
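The Session Log line format can be rendered by a one-line formatter. A minimal sketch, assuming the placeholders from the template above; the function name is hypothetical:

```python
from datetime import date

# Hypothetical renderer for the Session Log line; the argument names
# mirror the template placeholders and are assumptions.
def session_log_line(run_date: date, task_type: str,
                     finding: str, output_file: str) -> str:
    """Render '[DATE] — {task type} complete → {finding} → see {path}'."""
    return (f"[{run_date.isoformat()}] — {task_type} complete → "
            f"{finding} → see {output_file}")

print(session_log_line(
    date(2026, 4, 1), "geogrid scan", "downtown cluster still weak",
    "briefs/keystone-insurance/buffalo/scans/2026-04-01-geogrid.md"))
```

Appending exactly one such line per run keeps the brief scannable while the full detail stays in the dated output file.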
## Learn More
To learn how output schemas fit into the broader system, see the scheduled tasks overview and briefs overview.