Case · B2B SaaS · MENA + GCC
AEO playbook earned citations in 61% of AI-Overview answers for category queries.
A mid-market B2B SaaS vendor in the GCC was losing brand-defining queries to aggregators inside Google AI Overviews. We rebuilt 18 category pages around direct-answer intros (≤60 words), added FAQPage + Definition schema, and shipped a glossary of 24 entity-anchored terms. Within two quarters, the brand was cited in 61% of tracked AI-Overview answers (up from 9%) and non-brand organic sessions grew +118%.
- AIO citation share: 9% → 61%
- Non-brand organic: +118%
- Window: 2 quarters
Challenge
Category-defining queries ("what is X", "X vs Y", "best X for…") were routed by Google AI Overviews to third-party aggregators and review sites. The vendor's own product pages — keyword-optimised but not answer-optimised — were ignored as source citations.
Approach
- Mapped a basket of 80 category prompts and their current AIO citation sources to identify the 24 entities the model kept anchoring its answers around.
- Rewrote 18 category pages with a 60-word direct-answer intro, then deepened them with comparison tables and named criteria — the formats AIO favours when summarising.
- Shipped FAQPage + DefinedTerm schema and a 24-term glossary that became the canonical definition source AI engines could quote.
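The schema shipped in the last step can be sketched in code. A minimal Python sketch of the JSON-LD shape, assuming an illustrative term, definition, and URL (the real 24-term glossary and its domain are the client's):

```python
import json

# Hypothetical glossary entry; the term, definition, and URL are
# illustrative placeholders, not the client's actual content.
entry = {
    "term": "Answer Engine Optimisation",
    "definition": "Structuring pages so AI answer engines can quote them directly.",
    "url": "https://example.com/glossary/aeo",
}

def defined_term_jsonld(entry):
    """Build schema.org DefinedTerm JSON-LD for one glossary entry."""
    return {
        "@context": "https://schema.org",
        "@type": "DefinedTerm",
        "name": entry["term"],
        "description": entry["definition"],
        "url": entry["url"],
    }

def faq_jsonld(question, answer):
    """Build schema.org FAQPage JSON-LD for a single Q&A pair."""
    return {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [{
            "@type": "Question",
            "name": question,
            "acceptedAnswer": {"@type": "Answer", "text": answer},
        }],
    }

# Each payload is embedded in the page inside a
# <script type="application/ld+json"> tag.
print(json.dumps(defined_term_jsonld(entry), indent=2))
```

One JSON-LD block per glossary entry keeps each definition independently quotable.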
Results
- Tracked AI-Overview citation share moved from 9% → 61% across the 80-prompt basket in two quarters.
- Non-brand organic sessions +118%; non-brand demo requests +63%.
- Aggregator dependency reversed: own-domain citations now outnumber third-party aggregator citations by 3.4×.
Evidence
- Google AI Overviews — Weekly 80-prompt AIO basket; citation source domains logged per answer.
- Search Console — Non-brand query cohort — impressions, CTR and position deltas.
- GA4 — Non-brand organic sessions and demo-request conversions.
- Perplexity — Cross-engine sanity check on the same 80-prompt basket.
GSC + internal AIO tracker (weekly 80-prompt basket) · client identity redacted under NDA.
FAQ
Why do Google AI Overviews cite aggregators and review sites instead of the actual product page?
AIO summaries reward content engineered for direct extraction: a 60-word answer-first intro, comparison tables, named criteria and DefinedTerm-style glossaries. Aggregators are built natively in that shape; most product pages are keyword-optimised but not answer-optimised, so AIO ignores them as source citations even when they outrank the aggregator on classic SERPs.
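The 60-word answer-first budget is easy to enforce editorially. A minimal lint sketch, assuming the intro is available as plain text (function name and threshold are illustrative, not the client's tooling):

```python
def is_answer_first(intro: str, max_words: int = 60) -> bool:
    """Check a page intro fits the direct-answer word budget AIO can extract."""
    return len(intro.split()) <= max_words

# Example: a two-sentence definition-style intro passes the budget.
intro = (
    "Answer Engine Optimisation structures pages so AI answer engines "
    "can quote them directly. It pairs a short definition with named criteria."
)
print(is_answer_first(intro))
```

A check like this can run in CI so rewritten category pages never drift back past the budget.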
How do you measure AI-Overview citation share rigorously enough to report on it?
Define a fixed prompt basket (here, 80 category-defining queries), re-run them weekly in incognito with stable locale settings, and log the source domain AIO cites for each answer. Citation share = (own-domain mentions / total tracked answers). The same basket re-run quarter-over-quarter is what moved from 9% → 61% in this engagement.
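The share computation described above can be sketched directly. A minimal Python sketch over an illustrative weekly log (domains and prompts are hypothetical, not the client's):

```python
# Illustrative weekly log: one row per tracked prompt, recording which
# domain the AI Overview cited; None means no AIO answer was shown.
OWN_DOMAIN = "vendor.example"
weekly_log = [
    {"prompt": "what is X", "cited_domain": "vendor.example"},
    {"prompt": "X vs Y", "cited_domain": "aggregator.example"},
    {"prompt": "best X for fintech", "cited_domain": "vendor.example"},
    {"prompt": "X pricing", "cited_domain": None},  # no AIO shown
]

def citation_share(log, own_domain):
    """Own-domain mentions divided by total tracked answers (AIO shown)."""
    answers = [row for row in log if row["cited_domain"] is not None]
    own = sum(1 for row in answers if row["cited_domain"] == own_domain)
    return own / len(answers) if answers else 0.0

print(f"{citation_share(weekly_log, OWN_DOMAIN):.0%}")  # 2 of 3 tracked answers
```

Re-running the same fixed basket each week makes the quarter-over-quarter deltas comparable.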
What role does a glossary play in AEO for B2B SaaS?
A 24-term entity-anchored glossary becomes the canonical definition source AI engines quote when answering "what is X" queries. Each entry ships with DefinedTerm + FAQPage schema and a ≤60-word answer-first body, so LLMs treat the vendor's domain as the authoritative source rather than a third-party aggregator. That is how non-brand organic sessions grew +118% in two quarters.
A 24-term entity-anchored glossary becomes the canonical definition source AI engines quote when answering 'what is X' queries. Each entry ships with DefinedTerm + FAQPage schema and a ≤60-word answer-first body, so LLMs treat the vendor's domain as the authoritative source rather than a third-party aggregator. That's how non-brand organic sessions grew +118% in two quarters.