Robots.txt Disallow: / in Search Console
-
Two days ago I found out through Search Console that my website's robots.txt has changed to:
User-agent: *
Disallow: /
When I check the robots.txt on the website itself it looks fine - it only shows as blocked in Search Console (in the robots.txt Tester).
When I try to do a Fetch as Google on the homepage, I see it's blocked. Any idea why robots.txt would block my website? It was fine until the weekend.
- Before that, over the last 3 months, I saw I had blocked resources on the website and brought pages back with Fetch as Google.
Any ideas?
-
Hello Ran,
Just to clarify: in Search Console, when you go to Crawl -> robots.txt Tester and click 'See live robots.txt' in the middle right, does it not show the correct file?
It could be that Google hasn't recrawled the new robots.txt yet.
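If the live file really is clean, it can also help to confirm exactly what the server returns for robots.txt, since Google works from a cached copy that can lag behind a fix, and some servers vary their response by user-agent. A quick command-line check (www.example.com here is just a placeholder for your own domain):

curl -A "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" https://www.example.com/robots.txt

If that returns the file you expect but the Tester still shows "Disallow: /", the most likely explanation is that Google is still holding the older cached copy and will pick up the corrected file on its next robots.txt fetch.
-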
Related Questions
-
Drupal, http/https, canonicals and Google Search Console
I'm fairly new in an in-house role and am currently rooting around our Drupal website to improve it as a whole. Right now on my radar is our use of http/https, canonicals, and our use of Google Search Console. Initial issues noticed:
- We serve http and https versions of all our pages
- Our canonical tags just refer back to the URL they sit on (apparently a default Drupal thing, which is not much use)
- We don't actually have https properties added in Search Console/GA
I've spoken with our IT agency who migrated our old site to the current one, and they have recommended forcing all pages to https and setting canonicals to the https pages, which is fine in theory, but I don't think it's as simple as that, right? An old Moz post I found talked about running into issues with images/CSS/JavaScript referencing http - is there anything else to consider, especially from an SEO perspective? I'm assuming the appropriate certificates are in place, as the secure version of the site works perfectly well. And on the last point - am I safe to assume we have just never tracked any traffic for the secure version of the site? 😞 Thanks, John
Technical SEO | joberts
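On the Drupal question above, the agency's recommendation usually boils down to two pieces: a site-wide 301 from http to https at the server level, and canonical tags that point at the https URL of each page rather than echoing whatever URL was requested. A rough sketch, assuming an Apache host and using www.example.com as a placeholder; the exact Drupal/hosting setup will differ:

# .htaccess - force every request onto https with a 301 (Apache mod_rewrite)
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

<!-- in the <head> of each page: canonical pointing at the https version -->
<link rel="canonical" href="https://www.example.com/current-page/" />

Search Console properties are protocol-specific, so the https version would also need to be added there separately before any of its data shows up.
-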
Subdomain/subfolder question
Hi community, Let's say I have a men's/women's clothing website. Would it be better to do clothing.com/mens and clothing.com/womens OR mens.clothing.com and womens.clothing.com? I understand Moz's stance on blogs that it should be clothing.com/blog, but wanted to ask for this different circumstance. Thanks for your help!
Technical SEO | IceIcebaby
-
Adding multi-language sitemaps to robots.txt
I am working on a revamped multi-language site that has moved to Magento. Each language runs off the core coding so there are no sub-directories per language. The developer has created sitemaps which have been uploaded to their respective GWT accounts. They have placed the sitemaps in new directories such as:
/sitemap/uk/sitemap.xml
/sitemap/de/sitemap.xml
I want to add the sitemaps to the robots.txt but can't figure out how to do it. Also, should they have placed the sitemaps in a single location with the file identifying each language?
/sitemap/uk-sitemap.xml
/sitemap/de-sitemap.xml
What is the cleanest way of handling these sitemaps, and can/should I get them on robots.txt?
Technical SEO | MickEdwards
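On the sitemap question above: robots.txt supports a Sitemap: directive, and the value must be a full absolute URL, so each language sitemap can simply be listed on its own line. A minimal sketch, with www.example.com standing in for the real domain:

# robots.txt - one Sitemap line per file; these can sit anywhere in the file
Sitemap: https://www.example.com/sitemap/uk/sitemap.xml
Sitemap: https://www.example.com/sitemap/de/sitemap.xml

Listing them this way works alongside the submissions already made in the GWT accounts; whether the files live in per-language folders or as uk-sitemap.xml / de-sitemap.xml matters far less than the URLs in the directives being correct.
-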
Google is indexing blocked content in robots.txt
Hi, Google is indexing some URLs that I don't want indexed, and it is also indexing the same URLs over https. These URLs are blocked in the robots.txt file. I've tried to block these URLs through Google Webmaster Tools, but Google doesn't let me because the URLs are https. The robots.txt file is correct, so what can I do to keep this content from being indexed?
Technical SEO | elisainteractive
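One thing worth flagging on the question above: robots.txt controls crawling, not indexing, so a blocked URL can still end up in the index if other pages link to it. The usual way to keep a page out of the results is a noindex signal that Google is allowed to crawl and see, sketched below in both of its standard forms.
On the page itself:
<meta name="robots" content="noindex" />
Or as an HTTP response header:
X-Robots-Tag: noindex
For either to take effect, the URLs have to be crawlable, so the robots.txt block on them would need to be lifted while Google processes the noindex.
-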
I cannot find a way to implement the 2-link method shown in this post: http://searchengineland.com/the-definitive-guide-to-google-authorship-markup-123218
Did Google stop offering the 2-link method of verification for Authorship? See this post: http://searchengineland.com/the-definitive-guide-to-google-authorship-markup-123218 And see this: http://www.seomoz.org/blog/using-passive-link-building-to-build-links-with-no-budget In both articles the authors talk about how to set up Authorship snippets for posts on blogs where they have no bio page and no email verification, just by linking directly from the content to their Google+ profile and then linking from the Google+ profile page (in the 'Contributor to' section) to the blog's home page. But this does not work no matter how many ways I try it. Did Google stop offering this method?
Technical SEO | jeff.interactive
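For reference, the two links those articles describe are (1) an in-content or byline link to the author's Google+ profile marked up with rel="author", and (2) the blog's homepage listed in the 'Contributor to' section of that profile. The on-page half generally looked something like this (the profile URL and name are placeholders):

<a href="https://plus.google.com/112345678901234567890" rel="author">Jane Author</a>

Google later wound down Authorship display altogether, so even correctly implemented markup eventually stopped producing author snippets.
-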
Why is Google stripping/replacing my TITLE tag for the site with the BRAND Name only when looking at BRAND level search
When doing a search in Google (US proxy), Google is stripping and replacing my functional TITLE with the brand name only (say 'Nike'), but if you search a specific term like 'buy nike shoes' and see a top-10 listing for my site's homepage, the title works and shows correctly. I saw this a few years ago with another one of my company's domains, but didn't ask the question as it worked itself out. Thanks for any insight. NOTE: It's not damaging any results or rankings for the site, but when searching for the BRAND name of the company, as I explained, Google replaces an optimized title with the BRAND name, then shows the title naturally when a deeper search brings up the homepage and the TITLE looks fine. Very weird at best! Thanks, Rob
Technical SEO | RobMay
-
Using robots.txt to deal with duplicate content
I have 2 sites with duplicate content issues. One is a WordPress blog. The other is a store (Pinnacle Cart). I cannot edit the canonical tag on either site. In this case, should I use robots.txt to eliminate the duplicate content?
Technical SEO | bhsiao
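If robots.txt does end up being the tool here, the pattern is one Disallow rule per duplicate path. The paths below are purely hypothetical stand-ins (every WordPress and Pinnacle Cart install differs), and blocking crawling will not consolidate link signals the way a canonical or redirect would, so it is a blunt instrument:

User-agent: *
# hypothetical duplicate archive pages on the blog
Disallow: /tag/
# hypothetical sort/filter parameter duplicates in the store
Disallow: /*?sort=

Google supports the * wildcard in Disallow rules, which helps with parameter-driven duplicates.
-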
How does having an SSL impact search optimization?
My site has a forced redirect on the home page to https://, but that's only on the home page - other site pages are still reachable over plain http. How does this impact search engines crawling my site?
Technical SEO | AquariumDigital
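A quick way to see what crawlers encounter is to request the pages over http and read the response headers (www.example.com below is a placeholder):

curl -I http://www.example.com/
# a clean setup would answer with something like:
#   HTTP/1.1 301 Moved Permanently
#   Location: https://www.example.com/

If only the home page answers with that 301 and the inner pages still return 200 over plain http, search engines can keep crawling and indexing both protocol versions of those inner pages, which is where duplicate http/https URLs tend to come from.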