How-tos
How to use your .fix_it_plan.md file
Every audit exports a markdown file. Here's the fastest way to turn it into shipped fixes — open it in Cursor, paste sections into Claude, or work through it by hand.
Last updated April 9, 2026
Every VistaCite audit produces a file called fix_it_plan-{domain}-{date}.md. You download it from the results page by clicking Download .fix_it_plan.md. The file is the actual product — the dashboard is just a preview of what's inside it. This doc explains how to use the file to ship real fixes as fast as possible.
What's in the file
A single markdown document with these sections, in order:
- Title — audited domain and date
- Audited URL — blockquote with the exact URL scanned
- Summary — a 2-3 sentence TL;DR with your scores and the single highest-impact fix
- Crystal Ball — verbatim, what an LLM currently knows about your domain
- Summary Scores — a markdown table with SEO / AEO / GEO scores plus critical / major / minor issue counts
- This Week — Critical fixes — the issues with highest severity, in priority order
- This Month — Major improvements — medium-priority issues
- This Quarter — Minor polish — low-priority cleanups
- What's Working — passing checks, so you know what NOT to break
- What we didn't check — transparent list of things VistaCite doesn't measure (Core Web Vitals, site-wide crawl, backlinks)
- Out of Scope (but still important) — off-page advice for things that matter more than any on-page fix
- Footer — generation timestamp + re-audit link
Every issue in the tiered sections has the same structure: a Finding (what's wrong), an Effort + Impact label, a Fix (what to do), a Copy-paste code block (for checks where we have a Claude-generated or static-template fix), and a Verify instruction.
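If you just want a quick index of the file before diving in, you can print its section headings from the terminal. This is a minimal sketch that assumes the sections are rendered as level-2 markdown headings and uses an illustrative filename — adjust both to match your download:

```shell
# Print a quick table of contents for the audit file.
# Assumes sections use "## " headings; the filename is an example.
grep -n '^## ' fix_it_plan-example.com-2026-04-09.md
```

The `-n` flag includes line numbers, which is handy for jumping straight to a section in your editor.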
The fastest path: Cursor or Claude Code
If you use an AI coding assistant, drop the whole .md file into the chat with a prompt like:
This is an AI search visibility audit for my website. Work through each issue in the "This Week" section, starting with the #1 critical fix. For each one: explain what it means, show me where in my codebase to apply the fix (my repo is open), and paste the exact replacement code.
The file is designed for this workflow. Each issue is self-contained — the AI assistant can understand the full context from a single section without needing to read the rest of the file. The copy-paste code blocks are language-tagged (html, json, markdown) so Cursor syntax-highlights them correctly.
The second-fastest path: work through it by hand
Open the file in any markdown viewer — Obsidian, Notion, GitHub, VS Code preview, or cat it in the terminal. Work through sections in order:
- Read the Summary first. The TL;DR tells you the single biggest lever. Fix that first if it's a one-liner.
- Crystal Ball. If the answer is "I don't have reliable information about this domain," read the Crystal Ball doc — you have an off-page problem that on-page fixes won't solve. Keep going anyway because AEO is the floor.
- Summary Scores table. Note which pillar is weakest — that's where the biggest score gains are.
- This Week. Work through every critical issue. These are things the audit flagged as blocking AI engines from understanding your page at all. For each one:
- Read the Finding to confirm the problem
- Read the Fix to understand the solution
- Paste the Copy-paste block into your code
- Deploy
- Run the Verify step to confirm
- This Month. Same workflow but lower urgency. Batch these into a sprint.
- This Quarter. Nice-to-haves. Do them when you have a spare afternoon.
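For most issues, the Verify step boils down to checking that the fix is actually live in the deployed HTML. A minimal sketch, using a hypothetical meta-description fix and an illustrative URL — substitute your own domain and whatever element your Verify instruction names:

```shell
# Fetch the deployed page (substitute your own URL), then confirm the
# fixed element is actually present in production HTML.
curl -s https://example.com/ -o deployed.html
grep -c '<meta name="description"' deployed.html
```

A count of 0 means the fix didn't make it to production — the exact case the Verify step exists to catch.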
The power move: commit the file to your repo
Put the downloaded .md file in docs/seo/audit-YYYY-MM-DD.md in your repo. That way:
- The audit is versioned — you can diff this week's audit against last month's
- Your team can see the same findings you see
- Future you can answer "when did we fix the H1 issue?" with git log
- Re-running the audit and committing the new file shows actual score movement over time
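The whole versioning workflow fits in a handful of commands. This is a sketch with illustrative filenames and dates — use your own domain and audit dates:

```shell
# Commit this week's audit next to the last one so you can diff them.
mkdir -p docs/seo
cp ~/Downloads/fix_it_plan-example.com-2026-04-09.md docs/seo/audit-2026-04-09.md
git add docs/seo/audit-2026-04-09.md
git commit -m "Add VistaCite audit 2026-04-09"

# Later: see actual score movement between two audits...
git diff --no-index docs/seo/audit-2026-03-09.md docs/seo/audit-2026-04-09.md

# ...and answer "when did we fix the H1 issue?"
git log --oneline -- docs/seo/
```

`git diff --no-index` works even if the older audit was never committed — it diffs the two files directly.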
What not to do
- Don't fix everything at once. The tiering is deliberate — "This Week" items give you the biggest return per minute spent. Work them first.
- Don't skip the Verify step. It takes 30 seconds and catches the case where you thought you deployed but didn't.
- Don't treat the scores as golden. They're heuristics, not a SERP ranking. A 95 doesn't guarantee citation by ChatGPT — it just means you're not losing to easily-fixable problems.
- Don't publish a shareable audit as your only marketing. The /s/[id] share link is a neat rhetorical device, not a long-term positioning play.