For over a decade, SEO has operated within a fixed constraint: Google’s deep learning ranking systems only evaluate the top 20–30 candidate pages because running neural networks on more results is too expensive. That number wasn’t chosen for quality reasons. It was set by hardware budgets and memory costs. Court testimony from Google’s VP of Search confirmed it, and now Google Research has published the algorithm that could remove the constraint. TurboQuant compresses vector representations by 4x whilst maintaining retrieval quality, making it economically viable to evaluate far larger candidate sets. When the ranking window widens, the rules change. Sites with strong content and structured data get a fair hearing against established players with dominant backlink profiles. The moat around incumbent rankings is about to shrink.
—
Google has historically ranked pages using a two-stage process that evaluates tens of thousands of candidates before applying deep learning (RankBrain, BERT) to just 20–30 finalists. This narrow window exists because running neural ranking on more pages is too expensive in compute and memory. That constraint was confirmed under oath by Google’s VP of Search, Pandu Nayak, during the DOJ antitrust trial.
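To make the shape of that pipeline concrete, here is a deliberately simplified sketch of a two-stage retrieve-then-rerank flow. It is not Google's code: the signal names and the budget of 30 are illustrative assumptions, but they show where the cutoff sits and why only a handful of pages ever reach the neural stage.

```python
# Generic retrieve-then-rerank sketch (illustrative only, not Google's system).
# Signal names and RERANK_BUDGET are assumptions made for this example.

RERANK_BUDGET = 30  # the expensive neural stage only ever sees this many pages


def cheap_retrieval_score(page: dict, query: str) -> float:
    # Stage 1: classical, cheap signals such as link equity and term matching,
    # which can be run over tens of thousands of candidates.
    term_match = sum(page["text"].lower().count(t) for t in query.lower().split())
    return page["link_equity"] + term_match


def neural_rerank_score(page: dict, query: str) -> float:
    # Stage 2 stand-in: in reality this would be a deep model (RankBrain/BERT-class),
    # which is what makes scoring more than a few dozen pages expensive.
    return float(len(set(query.lower().split()) & set(page["text"].lower().split())))


def rank(pages: list[dict], query: str) -> list[dict]:
    # Stage 1 scores every candidate cheaply...
    shortlist = sorted(pages, key=lambda p: cheap_retrieval_score(p, query), reverse=True)
    # ...but only RERANK_BUDGET of them survive to the neural stage: the "window".
    shortlist = shortlist[:RERANK_BUDGET]
    return sorted(shortlist, key=lambda p: neural_rerank_score(p, query), reverse=True)
```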
Now the hardware economics are shifting. Google has published TurboQuant, a vector compression technique that reduces memory requirements by 4x whilst keeping retrieval quality high. CEO Sundar Pichai has acknowledged severe supply constraints on memory and foundry capacity, but TurboQuant addresses exactly that bottleneck by making retrieval indexing “virtually free” and reducing memory load per vector dramatically.
If deployed, TurboQuant would let Google evaluate a much larger candidate set before final ranking without adding hardware cost. The 20–30 page window was never a design decision. It was a budget ceiling. When the ceiling lifts, the entire competitive surface changes.
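The arithmetic behind that claim is straightforward. The sketch below uses plain int8 scalar quantisation as a stand-in (TurboQuant's actual scheme is more sophisticated) and an assumed 768-dimensional embedding, purely to show where a 4x saving comes from and what it buys at a fixed memory budget.

```python
# Back-of-envelope illustration of why 4x vector compression widens the window.
# int8 scalar quantisation and the 768-dim width are stand-in assumptions,
# not TurboQuant itself; the point is the float32 -> int8 memory maths.
import numpy as np

dim = 768                                # assumed embedding width
vec = np.random.randn(dim).astype(np.float32)

# Quantise: map each float32 value to an int8 code plus one shared scale factor.
scale = np.abs(vec).max() / 127.0
q = np.round(vec / scale).astype(np.int8)

print(vec.nbytes)                        # 3072 bytes per vector at float32
print(q.nbytes)                          # 768 bytes per vector at int8: the 4x saving

# Same memory budget, roughly four times the candidates:
budget_bytes = 30 * vec.nbytes           # memory that previously held ~30 candidates
print(budget_bytes // q.nbytes)          # ~120 candidates fit once vectors are compressed
```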
Why widening the search pool is good news
A wider candidate set levels the playing field. Under the current constraint, strong content on smaller or newer sites often never reaches the deep learning ranking layer because it gets culled in early retrieval stages dominated by classical signals like domain authority and link equity. The top 20–30 slots tend to go to established players with robust backlink profiles, not necessarily the pages with the best answers.
When Google can afford to evaluate 100 or 200 candidates instead of 20, retrieval-ready content gets a fair hearing. Pages with clear, citable claims, strong entity associations and semantic coherence can enter the ranking window even without legacy domain authority. Sites that have invested in content quality and structured information rather than link-building arms races get a shot they didn’t have before. The moat around incumbent positions shrinks.
For SMEs, local businesses and specialist publishers without big backlink budgets, this matters. If your page is genuinely retrieval-friendly (meaning AI systems can extract, verify and cite it), you’re now competing on content merit in a larger pool rather than being filtered out before ranking even starts. The game shifts from “can I outrank these 20 entrenched sites” to “can I be one of the 100 or 200 pages Google considers worth evaluating”. That’s a much more achievable threshold for quality content.
In practical terms for consultancy clients: automotive retailers and estate agents with well-structured, citation-ready content (clean JSON-LD, strong NAP consistency, clear expertise signals) will have a better chance of appearing in AI-mediated results and wider ranking windows than they do now, where they are often squeezed out by aggregator sites with stronger link profiles.
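As a reference point, here is a minimal sketch of the sort of "clean JSON-LD" that phrase refers to: a schema.org AutoDealer record with consistent NAP details. The business details are placeholders, and the snippet is generated with Python purely for illustration.

```python
# Minimal example of a citation-ready structured data block.
# schema.org's AutoDealer and PostalAddress types are standard; every value
# below is a placeholder, not a real business.
import json

dealer = {
    "@context": "https://schema.org",
    "@type": "AutoDealer",
    "name": "Example Motors",                 # keep identical to the NAP used everywhere else
    "telephone": "+44 1234 567890",
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "1 Example Street",
        "addressLocality": "Exampletown",
        "postalCode": "EX1 1AA",
        "addressCountry": "GB",
    },
    "url": "https://www.example.com/",
}

# Emit the payload for a <script type="application/ld+json"> tag in the page template.
print(json.dumps(dealer, indent=2))
```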
The shift favours signal over legacy authority.