Google’s Gary Illyes Warns AI Agents Will Create Web Congestion
Google’s Gary Illyes suggests that AI-driven bots could overwhelm websites, while offering the counterintuitive view that crawling itself isn’t the primary resource drain.

A Google engineer has warned that AI agents and automated bots will soon flood the internet with traffic.
Gary Illyes, who works on Google’s Search Relations team, warned that “everyone and my grandmother is launching a crawler” on the latest episode of Google’s Search Off the Record podcast.
AI Agents Will Strain Websites
During his conversation with fellow Search Relations team member Martin Splitt, Illyes warned that AI agents and “AI shenanigans” will be significant sources of new web traffic.
Illyes said:
“The web is getting congested… It’s not something that the web cannot handle… the web is designed to be able to handle all that traffic even if it’s automatic.”
This surge occurs as businesses deploy AI tools for content creation, competitor research, market analysis, and data gathering. Each tool requires crawling websites to function, and with the rapid growth of AI adoption, this traffic is expected to increase.
How Google’s Crawler System Works
The podcast provides a detailed discussion of Google’s crawling setup. Rather than employing different crawlers for each product, Google has developed one unified system.
Google Search, AdSense, Gmail, and other products utilize the same crawler infrastructure. Each one identifies itself with a different user agent name, but all adhere to the same protocols for robots.txt and server health.
Illyes explained:
“You can fetch with it from the internet but you have to specify your own user agent string.”
This unified approach ensures that all Google crawlers adhere to the same protocols and scale back when websites encounter difficulties.
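On the crawler side, that user agent string is also what the robots.txt protocol keys on: before requesting a page, a well-behaved bot fetches robots.txt and checks the rules that apply to its own name. As a rough illustration, here is a short Python sketch using the standard library’s urllib.robotparser; the bot name “ExampleBot” and the example.com URLs are placeholders, not Google tooling.

from urllib.robotparser import RobotFileParser

ROBOTS_URL = "https://example.com/robots.txt"  # placeholder site
USER_AGENT = "ExampleBot"                      # hypothetical crawler name

parser = RobotFileParser(ROBOTS_URL)
parser.read()  # fetch and parse the robots.txt file

for url in ("https://example.com/", "https://example.com/private/report"):
    allowed = parser.can_fetch(USER_AGENT, url)
    print(f"{USER_AGENT} may fetch {url}: {allowed}")

A crawler that honors a disallow result here is the same kind that, per Illyes, is expected to back off when a site signals it is struggling.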
The Real Resource Hog? It’s Not Crawling
Illyes challenged conventional SEO wisdom with a potentially controversial claim: crawling doesn’t consume significant resources.
Illyes stated:
“It’s not crawling that is eating up the resources, it’s indexing and potentially serving or what you are doing with the data.”
He even joked he would “get yelled at on the internet” for saying this.
This perspective suggests that fetching pages uses minimal resources compared to processing and storing the data. For those concerned about crawl budget, this could change optimization priorities.
From Thousands to Trillions: The Web’s Growth
The Googlers provided historical context. In 1994, the World Wide Web Worm search engine indexed only 110,000 pages, while WebCrawler managed 2 million. Today, a single website can contain millions of pages on its own.
This rapid growth necessitated technological evolution. Crawlers progressed from the basic HTTP/1.1 protocol to modern HTTP/2 for faster connections, with HTTP/3 support on the horizon.
Google’s Efficiency Battle
Google spent last year trying to reduce its crawling footprint, acknowledging the burden on site owners. However, new challenges continue to arise.
Illyes explained the dilemma:
“You saved seven bytes from each request that you make and then this new product will add back eight.”
Every efficiency gain is offset by new AI products requiring more data. This is a cycle that shows no signs of stopping.
What Website Owners Should Do
The upcoming traffic surge necessitates action in several areas:
Infrastructure: Current hosting may not support the expected load. Assess server capacity, CDN options, and response times before the influx occurs.
Access Control: Review robots.txt rules to control which AI crawlers can access your site. Block unnecessary bots while allowing legitimate ones to function properly.
Database Performance: Illyes specifically pointed out “expensive database calls” as problematic. Optimize queries and implement caching to alleviate server strain.
Monitoring: Differentiate between legitimate crawlers, AI agents, and malicious bots through thorough log analysis and performance tracking (see the sketch after this list).
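For the monitoring point, a first pass can be as simple as tallying user agents in the server access log. The Python sketch below assumes a standard combined log format and an access.log file path; the bot names listed are common examples (Googlebot, GPTBot, CCBot) rather than a complete inventory.

import re
from collections import Counter

LOG_PATH = "access.log"  # placeholder; point this at your real access log

SEARCH_BOTS = ("googlebot", "bingbot")                       # established search crawlers
AI_BOTS = ("gptbot", "ccbot", "claudebot", "perplexitybot")  # common AI/data crawlers

# In the combined log format, the user agent is the last quoted field on the line.
UA_PATTERN = re.compile(r'"([^"]*)"\s*$')

counts = Counter()
with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
    for line in log:
        match = UA_PATTERN.search(line)
        if not match:
            continue
        ua = match.group(1).lower()
        if any(bot in ua for bot in SEARCH_BOTS):
            counts["search crawler"] += 1
        elif any(bot in ua for bot in AI_BOTS):
            counts["AI crawler"] += 1
        elif "bot" in ua or "crawler" in ua or "spider" in ua:
            counts["other/unknown bot"] += 1
        else:
            counts["likely human"] += 1

for category, hits in counts.most_common():
    print(f"{category}: {hits}")

Because user agent strings are easy to spoof, a sensible follow-up is verifying that traffic claiming to be a major crawler actually originates from that operator, for example via reverse DNS lookups, before deciding what to block.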
The Path Forward
Illyes pointed to Common Crawl as a potential model: it crawls once and shares the data publicly, reducing redundant traffic. Similar collaborative solutions may emerge as the web adapts.
While Illyes expressed confidence in the web’s ability to manage increased traffic, the message is clear: AI agents are arriving in massive numbers.
Websites that strengthen their infrastructure now will be better equipped to weather the storm. Those who wait may find themselves overwhelmed when the full force of the wave hits.
Featured Image: Collagery/Shutterstock