Generative Engine Optimisation Isn’t a Buzzword. It’s a Research Discipline.
For the past year or so, *Generative Engine Optimisation* (GEO) has been talked about as if it were simply “SEO for ChatGPT”. That shorthand does it a disservice.
“GEO isn’t a marketing fad dreamed up in a boardroom. It is grounded in serious academic work on how AI-powered search systems actually operate – and that distinction matters if you want your content to be visible in the next generation of search.”
The scientific foundation was laid in 2023 by researchers from Princeton, Georgia Tech, the Allen Institute for AI, and IIT Delhi, in a paper later accepted to KDD 2024. This wasn’t speculative thought leadership; it was empirical research into how large language models retrieve, synthesise, and present information.
From Search Engines to Generative Engines
The research introduces a critical shift in thinking: we are no longer optimising for *search engines*, but for what the authors call *generative engines*.
Traditional search engines retrieve and rank documents. Generative engines do something fundamentally different. They retrieve information from multiple sources, synthesise it using large language models, and generate an original response – often with inline attribution.
That difference breaks many of the assumptions SEO has relied on for two decades.
GEO sits at the intersection of computational linguistics, cognitive science, and machine learning. Instead of focusing on keyword placement or ranking signals, it addresses how language models recognise patterns, how they use their context window, and how probability distributions shape what ultimately appears in a generated answer.
In short: it’s not about being number one on a list. It’s about being *included* in the synthesis.
How GEO Was Tested (And Why That Matters)
One of the strengths of the research is its methodology. The authors didn’t rely on anecdote or tool screenshots. They built GEO-bench: a large-scale benchmark of 10,000 queries, each tagged by domain, intent, difficulty, and expected answer format.
The experimental setup was deliberately realistic. For each query, the top-ranked sources were fetched from Google Search; GPT-3.5-turbo then synthesised an answer with inline citations, mirroring how generative search systems work in the wild.
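To make that pipeline concrete, here is a rough Python sketch of the same idea. It assumes the source texts have already been fetched for a query, and the prompt wording, helper names, and [1]-style citation format are illustrative choices rather than the paper’s actual harness.

```python
# A minimal sketch of a generative-engine answer pipeline, not the study's code.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def synthesise_answer(query: str, sources: list[tuple[str, str]]) -> str:
    """Generate a cited answer to `query` from pre-fetched (url, text) sources."""
    # Number each source so the model can attribute claims inline as [1], [2], ...
    numbered = "\n\n".join(
        f"Source [{i}] ({url}):\n{text}" for i, (url, text) in enumerate(sources, 1)
    )
    prompt = (
        "Answer the question using only the numbered sources below. "
        "Cite them inline as [1], [2], and so on.\n\n"
        f"Question: {query}\n\n{numbered}"
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # the model used in the study
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```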
Crucially, the researchers didn’t measure success using traditional rankings. They introduced new visibility metrics designed specifically for generative engines, including:
- Position-adjusted word count (how prominently a source appears in generated responses; sketched in code after this list)
- Subjective impression scores, measuring relevance, influence, uniqueness, and likely user engagement
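The first of those metrics can be sketched in a few lines. The sentence-splitting and decay function below are one plausible way to operationalise it, not the exact formulation from the paper: a source’s share of the answer’s words, discounted the later its cited sentences appear.

```python
import math
import re

def position_adjusted_word_count(response: str, citation_tag: str) -> float:
    """Share of the response's words attributable to the source cited as
    `citation_tag` (e.g. "[1]"), weighting earlier sentences more heavily."""
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", response.strip()) if s]
    total_words = sum(len(s.split()) for s in sentences) or 1
    score = 0.0
    for pos, sentence in enumerate(sentences):
        if citation_tag in sentence:
            # Exponential decay: a citation in the opening sentence counts in
            # full, one near the end counts for progressively less.
            score += math.exp(-pos / len(sentences)) * len(sentence.split())
    return score / total_words

# A source cited early and often scores higher than one cited once at the end.
answer = "Solar capacity rose 23% in 2024 [1]. Costs also fell sharply [1]. Wind grew too [2]."
print(position_adjusted_word_count(answer, "[1]"))  # roughly 0.66
print(position_adjusted_word_count(answer, "[2]"))  # roughly 0.13
```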
Using these metrics, they tested nine distinct optimisation methods.
The results were telling.
What Actually Improves Visibility in AI Search
Certain GEO techniques consistently outperformed others. Adding clear citations, quotations, and concrete statistics increased visibility in generated responses by as much as 40 per cent.
Meanwhile, many familiar SEO habits barely moved the needle. Keyword stuffing, in particular, showed minimal impact on generative engines – a finding that should give pause to anyone still writing for algorithms rather than understanding.
The research also highlights that GEO is not one-size-fits-all. Effectiveness varies by domain:
- An authoritative, declarative tone works best for debate and historical topics
- Citation-rich content performs strongest for factual queries
- Structural clarity matters more than keyword density
This reinforces an uncomfortable truth for some marketers: optimising for AI means writing better, not trickier.
Why This Changes Content Strategy
GEO forces a rethink of what “optimised” content looks like. If your material can’t be easily understood, trusted, and recomposed by a language model, it risks invisibility – regardless of how well it ranks today.
“What the research makes clear is that generative visibility is earned through clarity, evidence, and structure. AI systems reward content that behaves like a good academic source or a solid piece of journalism: well-sourced, precise, and unambiguous.”
That may feel less glamorous than chasing hacks. But it’s a more durable advantage.
As generative engines continue to replace blue links with synthesised answers, GEO will stop being a niche concern and become a baseline competency. Those who treat it as a discipline – grounded in how these systems actually work – will be the ones whose voices are carried forward.
The rest will simply be paraphrased out of existence.
