Case · Enterprise IT · EU
Three-sprint GEO framework lifted AI citations 8× in 90 days.
An EU enterprise IT vendor was invisible inside ChatGPT and Perplexity answers for its top 40 solution queries. After three sprints — entity cleanup (Wikidata + LinkedIn + Org schema), FAQPage / HowTo / Service JSON-LD across 28 priority pages, and a rewrite of those pages around prompt-fit definitional intros — measured AI-engine citations rose from 4 to 33 in 90 days (~8×) and inbound MQLs from generative referrals climbed +42%.
- AI citations: 4 → 33
- GenAI MQLs: +42%
- Window: 90 days
Challenge
Top 40 solution queries returned competitor answers inside ChatGPT, Perplexity and Google AI Overviews. Brand was absent from generative results despite ranking on page-1 of classic Google for the same terms — a textbook GEO gap.
Approach
- Sprint 1 — Entity layer: aligned NAP (name, address, phone) across Wikidata, LinkedIn Company, Crunchbase and the Org schema on the corporate site so LLMs could resolve the brand to a single entity.
- Sprint 2 — Schema layer: shipped FAQPage, HowTo and Service JSON-LD on 28 priority solution and product pages; added Definition blocks to the top 12 glossary terms (a markup sketch covering sprints 1 and 2 follows this list).
- Sprint 3 — Content layer: rewrote those 28 pages around prompt-fit definitional intros (≤60 words, answer-first) and added the named frameworks and primary benchmarks that AI engines prefer to quote.
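For illustration, here is a minimal sketch of the kind of markup sprints 1 and 2 describe: one Organization node carrying a single canonical NAP plus sameAs links, and one FAQPage node with an answer-first Q&A. The brand name, URLs, Wikidata ID and Q&A text are placeholders, not the client's real data, and the exact properties shipped on the 28 pages are not disclosed in this case.

```python
import json

# Placeholder entity data; the client's real values are withheld under NDA.
BRAND = {
    "name": "ExampleVendor GmbH",                        # one canonical spelling everywhere
    "url": "https://www.example-vendor.eu",
    "telephone": "+49 30 0000000",
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "Beispielstrasse 1",
        "addressLocality": "Berlin",
        "postalCode": "10115",
        "addressCountry": "DE",
    },
    # sameAs links are what let an LLM collapse the Wikidata, LinkedIn and
    # Crunchbase profiles plus the corporate site into a single entity node.
    "same_as": [
        "https://www.wikidata.org/wiki/Q00000000",        # placeholder Q-ID
        "https://www.linkedin.com/company/example-vendor",
        "https://www.crunchbase.com/organization/example-vendor",
    ],
}


def org_node() -> dict:
    """Sprint 1: Organization node with the canonical NAP and sameAs links."""
    return {
        "@type": "Organization",
        "@id": BRAND["url"] + "/#org",
        "name": BRAND["name"],
        "url": BRAND["url"],
        "telephone": BRAND["telephone"],
        "address": BRAND["address"],
        "sameAs": BRAND["same_as"],
    }


def faq_node(pairs):
    """Sprint 2: FAQPage node built from answer-first Q&A pairs (answers kept under ~60 words)."""
    return {
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in pairs
        ],
    }


graph = {
    "@context": "https://schema.org",
    "@graph": [
        org_node(),
        faq_node([
            ("What is Example Solution?",
             "Example Solution is a placeholder definition, written answer-first and kept under 60 words."),
        ]),
    ],
}

# Emit the tag that gets templated into the <head> of each priority page.
print('<script type="application/ld+json">')
print(json.dumps(graph, indent=2, ensure_ascii=False))
print("</script>")
```

Keeping the Organization node field-identical on every page is the practical meaning of "one canonical NAP": any divergence between the site, Wikidata and LinkedIn gives an LLM two candidate entities instead of one.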
Results
- AI-engine citations: 4 → 33 in 90 days (~8×) across ChatGPT, Perplexity and Google AI Overviews.
- Inbound MQLs from generative-AI referrals +42% quarter-on-quarter.
- First-page Google rankings held; no organic cannibalisation from the rewrites.
Evidence
- Perplexity — Weekly 40-prompt citation log — brand mention rate per answer (scored as in the sketch below).
- ChatGPT — Same 40-prompt basket re-run on GPT-4o + GPT-5 with web browsing on.
- Google AI Overviews — AIO citation source tracked via incognito SERP captures, EU locale.
- GA4 — Custom channel group for generative-AI referrers (chatgpt.com, perplexity.ai, gemini.google.com).
Client analytics + Perplexity / ChatGPT citation tracking · client name withheld under NDA.
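As a sketch of how a weekly citation log like the one above can be scored, the snippet below assumes each weekly export is a JSON file of prompt, answer text and cited URLs; the field names, brand aliases and domains are illustrative assumptions, not the client's actual tracking stack.

```python
import json
from urllib.parse import urlparse

# Placeholder aliases and domains; the client's real brand terms are withheld.
BRAND_ALIASES = {"examplevendor", "example vendor"}
BRAND_DOMAINS = {"example-vendor.eu", "www.example-vendor.eu"}


def summarise_week(path: str) -> dict:
    """Score one weekly export of the 40-prompt basket.

    Assumed file layout for this sketch:
    [{"prompt": "...", "answer": "...", "citations": ["https://...", ...]}, ...]
    """
    with open(path, encoding="utf-8") as fh:
        rows = json.load(fh)

    mentioned = cited = 0
    for row in rows:
        text = row["answer"].lower()
        if any(alias in text for alias in BRAND_ALIASES):
            mentioned += 1                      # brand named in the answer body
        hosts = {urlparse(url).hostname or "" for url in row.get("citations", [])}
        if hosts & BRAND_DOMAINS:
            cited += 1                          # brand URL among the answer's citations

    total = len(rows)
    return {
        "prompts": total,
        "mention_rate": round(mentioned / total, 2) if total else 0.0,
        "answers_citing_brand": cited,
    }


if __name__ == "__main__":
    # e.g. {'prompts': 40, 'mention_rate': 0.2, 'answers_citing_brand': 8} (illustrative numbers)
    print(summarise_week("perplexity_week_01.json"))
```

The GA4 side is configuration rather than code: a custom channel group whose condition matches session source against the generative referrer hostnames listed above (chatgpt.com, perplexity.ai, gemini.google.com).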
FAQ
Why was the brand invisible inside ChatGPT and Perplexity despite ranking on page-1 of Google?
Classic SEO ranking signals (links, on-page keywords) don't translate 1:1 to generative engines. ChatGPT, Perplexity and AI Overviews pick citations based on entity resolution (is this brand a known thing?) and prompt-fit answer formatting (does this page answer the question in ≤60 words?). The vendor was strong on the classic signals but weak on both generative ones, so LLMs cited competitors with cleaner entity graphs and answer-shaped pages.
How does a three-sprint GEO framework actually move citations 8× in 90 days?
Sprint 1 fixes the entity layer (Wikidata + LinkedIn + Org schema aligned to one canonical NAP) so LLMs resolve the brand to a single node. Sprint 2 ships FAQPage, HowTo and Service JSON-LD on the 28 priority pages so AI engines see machine-readable answers. Sprint 3 rewrites those pages around prompt-fit definitional intros (≤60 words, answer-first) plus named frameworks LLMs prefer to quote verbatim. The compounding effect on a 40-prompt basket: 4 → 33 citations in one quarter.
Did the rewrites cannibalise existing Google rankings?
No. The 28 pages were rewritten around answer-first intros without removing the long-form depth Google's ranking algorithm rewards. Page-1 rankings held across the tracked keyword set and inbound MQLs from generative-AI referrals climbed +42% on top of stable classic-organic traffic.