# https://www.robotstxt.org/robotstxt.html
# https://developers.google.com/search/docs/crawling-indexing/robots/robots_txt
#
# This robots.txt file controls crawling of URLs under
# https://blutechconsulting.com. All crawlers are allowed to crawl the entire
# site, and the sitemap location is declared below. If assets such as .css and
# .js files in an "includes" directory ever need to be blocked for most
# crawlers while staying fetchable by Googlebot (which needs them for
# rendering), rules like the commented example at the end of this file could
# be used.

User-agent: *
Allow: /

Sitemap: https://blutechconsulting.com/sitemap.xml
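
# Commented-out sketch (not active): how a hypothetical "includes" directory
# of .css/.js assets could be hidden from most crawlers while Googlebot is
# still allowed to fetch it for page rendering. The directory name is only an
# illustration, not an existing path on this site.
#
# User-agent: *
# Disallow: /includes/
#
# User-agent: Googlebot
# Allow: /includes/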