Create a standard robots.txt file to guide search engines like Google and Bing. Control access to your folders, block sensitive paths, and define your sitemap location.
Rules for User-agent: * (All Bots)
Enter one path per line (e.g., /admin/)
Help bots find your content
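For example, a generated file might look like this (the paths and the yourdomain.com domain are placeholders, not recommendations):

    # Generated robots.txt
    User-agent: *
    Disallow: /admin/
    Disallow: /tmp/
    Sitemap: https://yourdomain.com/sitemap.xml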
A robots.txt file is a simple text file placed in the root directory of your website. It uses the Robots Exclusion Protocol (now standardized as RFC 9309) to communicate with web crawlers (also known as spiders or bots) such as Googlebot and Bingbot.
Specifies which crawler the rules that follow apply to. User-agent: * means the rules apply to ALL bots.
Tells the bot NOT to crawl a specific file or directory, e.g., Disallow: /admin/.
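Putting the two directives together, a minimal sketch (ExampleBot is a hypothetical crawler name, and /admin/ is just an illustrative path):

    # Rules every crawler should follow
    User-agent: *
    Disallow: /admin/

    # A stricter rule for one specific (hypothetical) crawler
    User-agent: ExampleBot
    Disallow: /

Note that Disallow: / blocks the entire site for that bot, while a Disallow: line with an empty path allows everything.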
Essential for Webmasters.
Large websites have a limited crawl budget: search engines will only fetch so many pages per visit. Using robots.txt efficiently ensures Google spends that budget crawling your products and articles, not your temporary files or internal search result pages.
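A short sketch of budget-focused rules (the /search/ and /tmp/ paths are examples; substitute whatever low-value paths exist on your site):

    User-agent: *
    Disallow: /search/   # internal search result pages
    Disallow: /tmp/      # temporary files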
Direct traffic effectively.
No. Malicious bots often ignore robots.txt files. To block attackers, you should use server-side rules (like .htaccess) or a firewall; robots.txt is a "polite request" protocol, not an enforcement mechanism.
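For example, a minimal .htaccess sketch that refuses requests from one bad actor (this assumes Apache with mod_rewrite enabled, and "BadBot" is a hypothetical user-agent string):

    RewriteEngine On
    # Match the offending user-agent, case-insensitively
    RewriteCond %{HTTP_USER_AGENT} BadBot [NC]
    # Return 403 Forbidden instead of serving the page
    RewriteRule .* - [F,L]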
Upload the generated file to the main folder of your website (the "root"). It must be accessible at yourdomain.com/robots.txt; crawlers look for it only at that exact location, so a robots.txt placed in a subdirectory is ignored.
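A quick way to confirm it is live (replace yourdomain.com with your actual domain):

    curl https://yourdomain.com/robots.txt

If the file's contents print back, crawlers can fetch it too.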