Your buyers are asking ChatGPT. You are not in the answer.
A managing partner at a Miami firm told me last month that two of his last three inbound calls started the same way. The prospect had asked ChatGPT who the best advisors in his category were, gotten back a short list, and called the first name on it. His firm was not on the list. His SEO agency told him they were working on it. His competitors were booking the meetings.
This is the pattern across every mid-market operator I talk to. The buyer has already shifted. Before a prospect fills out your contact form, they have asked ChatGPT, Claude, or Perplexity a version of the question your business exists to answer. If your site is not structured so those tools can extract and cite it, you are invisible at the exact moment the decision gets made. The symptom is zero citations. The cost is a buyer who is gone before your sales team ever hears about them. What to do about it is the rest of this page.
Why traditional SEO does not solve this
Google ranks pages against a query. LLMs retrieve passages and compose an answer. Those are different systems with different inputs. A search engine wants to send a user to the best page. A retriever wants to lift the best paragraph and paste it into a synthesized response. Optimizing a page to rank on Google means picking a keyword, getting backlinks, and writing content that signals relevance. Optimizing a page to be cited by Claude means making sure the answer is parseable, the structure is clean, and the authority is signaled in ways a retriever can trust.
Ranking number three on Google does not mean Claude cites you. I have tested this on real sites. A wealth advisory firm ranked in the top five for its core category keywords and was cited zero times across 12 questions in Claude, ChatGPT, and Perplexity. A law firm with half the backlink volume was cited twice in Claude because its pillar pages had a 45-word direct answer at the top and a proper FAQPage schema block. The gap is not effort. The gap is the wrong work.
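If you have not seen that combination, here is a minimal sketch of it, written the way a static site generator might emit the block. The question and answer strings are placeholders; the schema.org types (FAQPage, Question, Answer) are the real ones.

```python
import json

# Minimal sketch of a FAQPage JSON-LD block. The question and answer
# text are placeholders, not content from any real page.
faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [{
        "@type": "Question",
        "name": "What does the service cost?",  # placeholder question
        "acceptedAnswer": {
            "@type": "Answer",
            # The same direct answer that appears as visible copy at the
            # top of the page: number or recommendation first, one
            # qualifying sentence after, roughly 45 words total.
            "text": "A direct 40-to-50-word answer goes here, leading with "
                    "the number, range, or recommendation, followed by a "
                    "single qualifying sentence.",
        },
    }],
}

# The tag a retriever actually parses out of the page <head>.
print(f'<script type="application/ld+json">{json.dumps(faq_schema)}</script>')
```

The point of pairing the two is that the visible capsule and the schema text match word for word, so the retriever can trust what it lifts.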
The honest thing to say here is that most SEO agencies are not built for this yet. They know keywords, links, and content cadence. They do not know JSON-LD validation, Speakable specifications, retriever-friendly heading hierarchy, or the difference between the Bing index and the Perplexity index. If your current agency is promising AI visibility and the deliverables look like a 2023 SEO retainer, you are paying for the wrong work.
The 4 retriever paths
LLMs do not crawl the web the way a single search engine does. They pull from four distinct retrieval paths, and each one has its own signals. Getting cited at scale means being present across all four.
1. Google SGE and AI Overviews
2. Bing-fed ChatGPT Search
3. Perplexity
4. Claude retriever
The 6-layer AI Visibility framework
The four retriever paths all draw from the same underlying work on your site. That work breaks into six layers. Every engagement covers all six, in order, because skipping a layer leaves the ones above it resting on nothing.
Layer 1: Technical crawlability
Layer 2: Structured data
Layer 3: Machine-optimized content
Layer 4: Semantic HTML and accessibility
Layer 5: Authority and freshness
Layer 6: Monitoring and measurement
What this looked like on our own site
The most honest thing I can put on this page is the audit I ran on work-smart.ai in April 2026. Twelve buyer questions, four retrievers, 48 total runs. The baseline result was 3 citations. All 3 came from Claude. The other three retrievers returned zero. The three cited pages were the engagement process page, the Miami consultants page, and the Voice DNA service page.
The findings were not ambiguous. Claude was rewarding the pages that already had clean answer capsules, named author blocks, and FAQPage schema. The pages that did not have those elements were invisible everywhere. Perplexity was returning zero because the site was a client-rendered SPA and Perplexity was not seeing the content. ChatGPT Search was returning zero because Bing had not fully reindexed the rebuild.
The response was a four-part fix. First, the three already-cited pages were rebuilt to defend the position: tighter answer capsules, stronger schema, refreshed author blocks, and dateModified bumps. Second, two new pillar pages were shipped against the biggest gaps: a Shadow AI Playbook for the governance question and a CFO ROI Framework for the cost question. Third, Prerender was enabled across the SPA so retrievers see fully rendered HTML. Fourth, the 12-question audit went on the calendar as a recurring monthly job so the feedback loop is permanent.
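For teams that want to build that recurring job themselves, here is a minimal sketch of the loop. The ask_retriever function, the example question, and the domain are placeholders, and each provider needs its own API or automation wiring; nothing here is a fixed tool.

```python
import csv
from datetime import date

RETRIEVERS = ["google_ai_overviews", "chatgpt_search", "perplexity", "claude"]
QUESTIONS = [
    "Who are the best wealth advisors for dentists in Miami?",  # placeholder
    # ...the rest of your 12 buyer questions
]
OUR_DOMAIN = "example.com"  # placeholder for your domain

def ask_retriever(retriever: str, question: str) -> list[str]:
    """Hypothetical: wire this to each provider's API or UI automation.
    It should return the URLs the retriever cited in its answer."""
    raise NotImplementedError("connect each retriever here")

def run_audit() -> None:
    """Run every buyer question against every retriever and log whether
    our domain was cited. 12 questions x 4 retrievers = 48 rows."""
    with open(f"audit-{date.today().isoformat()}.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["date", "retriever", "question", "cited"])
        for retriever in RETRIEVERS:
            for question in QUESTIONS:
                citations = ask_retriever(retriever, question)
                cited = any(OUR_DOMAIN in url for url in citations)
                writer.writerow([date.today(), retriever, question, cited])
```

One CSV per month gives you the longitudinal record the monitoring phase below runs on.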
I am running the same audit on this page the month it ships. If a service provider is selling AI visibility and cannot show you their own citation numbers with dates on them, they are selling a pitch deck, not a service. You can read the Shadow AI Playbook and the CFO ROI Framework to see the two gap-fill pillars in the wild.
What the engagement looks like
Phase 1 is the audit. Two weeks, fixed scope. I run your 10 to 12 buyer questions against all four retrievers, record the baseline, crawl your site for the six layers, and deliver a gap report with a prioritized fix list. You own the report regardless of whether we work together on Phase 2. Some companies take the audit and execute internally. That works if you have a technical content team with bandwidth.
Phase 2 is the fix. Four to eight weeks depending on scope. Technical crawlability gets cleaned up first, then schema, then content rewrites on the pages that are close, then net-new pillar pages on the questions with no coverage. Every shipped page is validated for schema, accessibility, and answer capsule structure before it goes live. The work is done in public and you see each page before it ships.
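For teams executing Phase 2 internally, here is a minimal sketch of two of those pre-ship checks, assuming BeautifulSoup for parsing. The 50-word threshold and the check list are illustrative, not the full validation suite.

```python
import json
from bs4 import BeautifulSoup  # pip install beautifulsoup4

def validate_page(html: str, capsule_max_words: int = 50) -> list[str]:
    """Pre-ship checks: every JSON-LD block parses, and the opening
    paragraph reads as an answer capsule (a direct answer under ~50 words)."""
    problems = []
    soup = BeautifulSoup(html, "html.parser")

    # Check 1: JSON-LD exists, parses, and declares a @type.
    blocks = soup.find_all("script", type="application/ld+json")
    if not blocks:
        problems.append("no JSON-LD found")
    for block in blocks:
        try:
            data = json.loads(block.string or "")
            if isinstance(data, dict) and "@type" not in data:
                problems.append("JSON-LD block missing @type")
        except json.JSONDecodeError:
            problems.append("JSON-LD block does not parse")

    # Check 2: the first paragraph should be short enough to be a capsule.
    first_p = soup.find("p")
    if first_p is None or len(first_p.get_text().split()) > capsule_max_words:
        problems.append("first paragraph missing or too long to be a capsule")
    return problems
```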
Phase 3 is monitoring. Monthly retainer. The 12-question audit is re-run against all four retrievers, citation rate is tracked against the baseline, and we iterate on the pages that are still not getting picked up. New buyer questions get added to the audit set as your market shifts. This is the phase that compounds. Most of the citation gains show up in months three through six, not in week two.
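Mechanically, citation rate against the baseline is a small calculation over the monthly audit CSVs. A sketch, assuming the file format from the audit-runner sketch above; the file names are placeholders:

```python
import csv

def citation_rate(path: str) -> float:
    """Share of audit runs in which our domain was cited."""
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    cited = sum(1 for r in rows if r["cited"] == "True")
    return cited / len(rows) if rows else 0.0

baseline = citation_rate("audit-2026-04-01.csv")  # the April baseline: 3/48
current = citation_rate("audit-2026-07-01.csv")
print(f"baseline {baseline:.1%} -> current {current:.1%}")
```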
What you own at the end
All the schema markup, written directly into your codebase. Every pillar page we ship, with the source files in your repo. The monitoring dashboard and the audit spreadsheet with the baseline, the monthly scores, and the questions. The playbook document so your team can keep publishing in the same format after the engagement ends. No lock-in, no proprietary tooling you have to keep renting, no hostage deliverables. If you want to take it all in-house after six months, you can.

See how this fits with the rest of the services, or read about how I work. The wealth advisory case study covers a parallel engagement, and the Voice DNA service is the companion layer for companies that need their content to sound the same across every page the retrievers lift from. For vertical context, see the legal and financial services industry pages, and if you want the diagnostic first, the free assessment covers AI visibility as part of the full operating system.