A tool to analyze and validate robots.txt files for syntax errors, based on the de facto robots exclusion standard.
Robots.txt, or the robots exclusion protocol (REP), is a text file webmasters create to instruct search engine robots how to crawl and index pages on their website. It is not mandatory: having search engines visit your site and index your content is usually desirable, but there are often cases where you do not want parts of your online content indexed. The robots.txt file must be placed in the top-level directory of the web server so that search engines can find it.
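For illustration, a minimal robots.txt might look like the following; the domain and paths are hypothetical, and the file would be served from the site root (e.g. https://example.com/robots.txt):

```
User-agent: *
Disallow: /private/
Allow: /

User-agent: Googlebot
Disallow: /drafts/
```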
The tool accepts the following inputs (a sketch of how they might be checked programmatically follows below):
- Path to the robots.txt file
- Robots.txt file contents (takes priority over the path if both are provided)
- User agent (optional)
- URLs to test (optional)
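As a rough illustration of what such a check does, the sketch below uses Python's standard urllib.robotparser module to parse pasted robots.txt contents and test whether a given user agent may fetch a list of URLs. The file contents, user agent, and URLs are hypothetical placeholders mirroring the tool's form fields; this is not the tool's actual implementation.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical inputs mirroring the tool's form fields
robots_txt = """\
User-agent: *
Disallow: /private/
Allow: /
"""
user_agent = "Googlebot"                      # optional user agent to test as
urls_to_test = [                              # optional URLs to check
    "https://example.com/",
    "https://example.com/private/report.html",
]

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())         # parse pasted contents directly

# Alternatively, fetch by path instead of pasting contents:
# parser.set_url("https://example.com/robots.txt")
# parser.read()

for url in urls_to_test:
    allowed = parser.can_fetch(user_agent, url)
    print(f"{user_agent} {'may' if allowed else 'may NOT'} crawl {url}")
```

Passing the pasted contents to parse() reflects the priority rule above: when contents are supplied, they are used instead of fetching the file from the given path.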