Temporarily suspend Googlebot without blocking users
-
We'll soon be launching a redesign, on a new platform, migrating millions of pages to new URLs.
How can I tell Google (and other crawlers) to temporarily (a day or two) ignore my site? We're hoping to buy ourselves a small bit of time to verify redirects and live functionality before allowing Google to crawl and index the new architecture.
GWT's recommendation is to serve a 503 for all pages - including robots.txt - but that also makes the site invisible to real visitors, resulting in significant business loss. Bad answer.
I've heard some recommendations to disallow all user agents in robots.txt. Any answer that puts the millions of pages we already have indexed at risk is also a bad answer.
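For reference, the blanket robots.txt block those recommendations describe looks like the fragment below. Note that `Disallow` only stops crawling; it does not, by itself, remove URLs that are already indexed, which is exactly the risk being raised here.

```text
# robots.txt - asks all compliant crawlers to stop crawling the entire site.
# This blocks crawling but does NOT reliably deindex pages already in the index.
User-agent: *
Disallow: /
```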
Thanks
-
So it seems like we've gone full circle.
The initial question was, "How can I tell Google (and other crawlers) to temporarily (a day or two) ignore my site? We're hoping to buy ourselves a small bit of time to verify redirects and live functionality before allowing Google to crawl and index the new architecture."
Sounds like the answer is, 'that's not possible'.
-
Putting a noindex/nofollow on an indexed URL will remove it from the SERPs, although some URLs will still show for a direct search (using the URL itself as a keyword), and even then they will appear as bare links without any title or description details.
Using a 301 redirect will remove the old page from the index, regardless of noindex/nofollow.
If you also use noindex/nofollow on the new URL, neither will show.
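For anyone following along, the tag being described here is a standard meta robots tag in the `<head>` of each page (the directive values are the standard ones; where it goes in your templates depends on your CMS):

```html
<!-- In the <head> of each page that should stay out of the index -->
<meta name="robots" content="noindex, nofollow">
```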
-
Thank you, Ruth!
Can I ask a clarifying question?
If I put a noindex/nofollow on the new URLs, wouldn't the result be the same as if I put noindex/nofollow on the indexed URLs? There is only one instance of each page, and all of the millions of indexed URLs will be redirecting to new URLs.
Here is my assumption: if I put noindex/nofollow on the new URLs, a search bot will crawl the old URL, follow the redirect to the new URL, detect the noindex/nofollow, and then drop the old, indexed URL from its index. Is that the wrong assumption?
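Whatever the bots end up doing, the redirect mapping itself is worth verifying mechanically before the cutover. Here is a minimal Python sketch; the URLs are hypothetical, and the `fetch` function is injected so the checker can be exercised without network access (in practice it could wrap `urllib.request` with automatic redirect-following disabled):

```python
def check_redirects(url_map, fetch):
    """Verify each old URL 301-redirects to its expected new URL.

    url_map: {old_url: expected_new_url}
    fetch:   callable(url) -> (status_code, location_header)
    Returns a list of (old_url, problem) tuples; empty means all redirects pass.
    """
    problems = []
    for old, expected in url_map.items():
        status, location = fetch(old)
        if status != 301:
            problems.append((old, f"expected 301, got {status}"))
        elif location != expected:
            problems.append((old, f"redirects to {location}, not {expected}"))
    return problems
```

Run against a sample of the millions of mappings before opening the site to crawlers; any non-301 or wrong destination shows up in the returned list.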
-
I would use robots.txt to block crawling as well - but just the new pages, not the old ones. (To be precise, robots.txt blocks crawling; it doesn't noindex pages by itself.) Then, when you're ready to be crawled, remove the robots.txt entry and use Fetch as Googlebot to get re-crawled. You may fall out of the index for a day or two but should quickly be re-indexed.
Another solution would be to use the meta robots tag to individually noindex each page (if there's a way to do that in your CMS; obviously adding them by hand wouldn't be scalable), and then remove the tags when you're ready. That may increase your chances of getting re-crawled and re-indexed sooner.
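If editing millions of pages through the CMS isn't practical, one alternative worth considering (a sketch assuming an Apache server with mod_headers enabled; adapt to your stack) is to send the equivalent directive as an HTTP response header, which can be added and removed in a single place:

```apache
# Apache sketch: send "noindex, nofollow" for every response
# while redirects are being verified. Delete this line (and reload
# the server) when you're ready to be crawled and indexed.
Header set X-Robots-Tag "noindex, nofollow"
```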
-
Thanks for the response, Mark.
It sounds as if you tried this on a few new pages.
I'm talking about millions of existing pages.
Would you really block your entire website in robots.txt? It seems like you'd run a huge risk of being dropped from the index entirely.
-
I recommend a meta robots noindex, nofollow tag.
That way people can still see the pages; they just aren't indexed in Google yet.
As we developed some new pages on one of our sites, we did this: we could still view the pages and send the people whose feedback we wanted to the content, but no one else knew the pages were there.