New server update + wrong robots.txt = lost SERP rankings
-
Over the weekend, we moved our store to a new server. Before the switch, we had a robots.txt file on the new server that disallowed crawling of its contents (we didn't want duplicate pages indexed from both the old and new servers).
When we finally made the switch, we somehow forgot to remove that robots.txt file, so the new pages weren't crawled or indexed. We quickly put our correct robots.txt in place and submitted a request for a re-crawl of the site.
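For reference, the difference between the blocking file and the fixed one is tiny. A minimal sketch of the two (our actual files aren't shown here, and the sitemap URL is a placeholder):

```
# Staging robots.txt that blocks all crawlers (the file we forgot to remove)
User-agent: *
Disallow: /

# Corrected production robots.txt: an empty Disallow permits everything,
# and a Sitemap line points crawlers at the XML sitemap
User-agent: *
Disallow:
Sitemap: https://www.example.com/sitemap.xml
```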
The problem is that many of our search rankings have changed. We were ranking #2 for some keywords, and now we're not showing up at all. Is there anything we can do? Google Webmaster Tools says the next crawl could take weeks! Any suggestions would be much appreciated.
-
Dr. Pete,
I just ran across one of your webinars yesterday, and you brought up some great ideas. Earned a few points in my book.
Too often, SEOs see changes in the rankings and react to counteract the change. Most of the time these bounces are actually a GOOD sign: it means Google saw your changes and is adjusting to them. If your changes were positive, you should see positive results. I have rarely seen a case where someone made a positive change and got a negative result from Google. Patience is a virtue.
-
Thanks, everyone, for the help! Fortunately we remedied the problem almost immediately, so it only took about a day to get our rankings back. I think the sitemap and the fixed robots.txt were the most important factors.
-
I agree: let Google re-index first, then re-evaluate the situation.
-
I hate to say it, but @inhouseninja is right - there's not a lot you can do, and overreacting could be very dangerous. In other words, don't make a ton of changes just to offset this - Google will re-index.
A few minor moves that are safe:
(1) Re-submit your XML sitemap (a quick way to do this is sketched after this list)
(2) Build a few new links (authoritative ones, especially)
(3) Hit social media with your new URLs
All 3 are at least nudges to re-index. They aren't magic bullets, but you need to get Google's attention.
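For (1), beyond re-submitting inside Webmaster Tools, you can also ping Google's sitemap endpoint directly. A rough sketch in Python, with a placeholder sitemap URL (this ping endpoint worked at the time of this thread; Google has since retired it in favor of Search Console submissions and a Sitemap line in robots.txt):

```python
import urllib.parse
import urllib.request

# Placeholder sitemap URL -- substitute your own.
SITEMAP_URL = "https://www.example.com/sitemap.xml"

# Google's (historical) sitemap ping endpoint: a plain GET request
# carrying the URL-encoded sitemap address.
ping_url = "https://www.google.com/ping?sitemap=" + urllib.parse.quote(
    SITEMAP_URL, safe=""
)

with urllib.request.urlopen(ping_url) as resp:
    # A 200 response only means the ping was received; it does not
    # guarantee the sitemap was fetched or the pages re-crawled.
    print(resp.status, resp.reason)
```

None of this forces a re-crawl; it just hands Googlebot a fresh pointer to follow.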
-
Remain calm. You should be just fine; it just takes time for Google to digest the new robots.txt. I would only be concerned if things hadn't changed in 3-4 weeks. Adopt a rule not to freak out at Google until you've given a problem 14 days to resolve. Sometimes Google moves things around, and that's natural.
If you want Google to crawl your site faster, build some links and do some social media. That will encourage Google to speed it up.
-
If this is all that happened, the next crawl should fix it. Just sit tight, and your rankings should bounce back up in a week or so.
-
That does not sound fun at all... So you just changed the server - a complete copy of the site?
My first question would be: other than the server, did anything else change? Copy or URLs?
My second question would be: is the old server still up and live on the internet?
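If you want to check that second question yourself, one rough way is to request robots.txt from each machine directly by IP, sending the site's Host header so virtual hosting resolves correctly. A minimal sketch in Python; the hostname and IPs below are placeholders, not the poster's real addresses:

```python
import urllib.request

SITE_HOST = "www.example.com"    # placeholder hostname
SERVER_IPS = [
    "203.0.113.10",              # placeholder: old server
    "203.0.113.20",              # placeholder: new server
]

for ip in SERVER_IPS:
    # Request robots.txt from a specific machine, bypassing DNS, while
    # still sending the Host header the virtual host expects.
    req = urllib.request.Request(
        f"http://{ip}/robots.txt", headers={"Host": SITE_HOST}
    )
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            print(ip, "->", resp.status)
            print(resp.read().decode("utf-8", errors="replace"))
    except OSError as exc:
        print(ip, "-> unreachable:", exc)
```

If the old server is still live and serving the old content, that's a duplicate-content risk worth closing out (redirect it or take it down).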