AI crawlers typically visit robots.txt first because it is the cheapest way to learn crawl permissions under the Robots Exclusion Protocol (REP) and, often, to discover sitemap URLs. After that, a bot usually fetches the sitemap, the homepage, or its seed URLs, with the exact order shaped by caching, user-triggered retrieval, and rate limits.
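This first step can be sketched with Python's standard `urllib.robotparser`, which implements REP matching and (since Python 3.8) exposes discovered sitemap URLs. The robots.txt contents and the `ExampleBot` user agent below are hypothetical, for illustration only:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt a crawler might fetch from a site root.
ROBOTS_TXT = """\
User-agent: ExampleBot
Disallow: /private/

User-agent: *
Allow: /

Sitemap: https://example.com/sitemap.xml
"""

rp = RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# Check crawl permissions for a specific user agent and URL.
print(rp.can_fetch("ExampleBot", "https://example.com/private/page"))  # False
print(rp.can_fetch("ExampleBot", "https://example.com/blog/post"))     # True

# Sitemap URLs discovered in robots.txt, the usual next fetch target.
print(rp.site_maps())  # ['https://example.com/sitemap.xml']
```

In a real crawler, `RobotFileParser.set_url(...)` plus `read()` would fetch the live file over HTTP; parsing a string here keeps the sketch self-contained.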