This is a text file webmasters create to instruct web robots (typically search engine robots) how to crawl pages on their website. In practice, robots.txt files indicate whether certain user agents (web-crawling software) can or cannot crawl parts of a website. These crawl instructions are given by “disallowing” or “allowing” access for certain (or all) user agents.
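
For example, a minimal robots.txt served at the root of a site (e.g. https://www.example.com/robots.txt) might look like the sketch below; the directory paths and sitemap URL are placeholder values, not recommendations:

    # Applies to every crawler
    User-agent: *
    Disallow: /admin/          # keep all crawlers out of /admin/
    Allow: /admin/public/      # ...except this public subfolder

    # Applies only to Google's main crawler
    User-agent: Googlebot
    Disallow: /tmp/

    Sitemap: https://www.example.com/sitemap.xml

A crawler that honors the file picks the User-agent group that best matches its own name and follows that group's Disallow and Allow rules. Keep in mind that robots.txt is advisory: well-behaved crawlers respect it, but it does not technically prevent access.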