Development/Test Ecommerce Website Mistakenly Indexed
-
My question is - relatively speaking, how damaging to SEO is it to have BOTH your development/testing site and your live version indexed/crawled by Google and appearing in the SERPs?
We just launched about a month ago, and made a change to the robots.txt on the development site without noticing ... which led to it being indexed too. So now the ecommerce website is duplicated in Google ... each copy under different URLs of course (and on different servers, DNS, etc.).
We'll fix it right away ... and block crawlers from the development site. But again, my general question is: what is the damage to SEO ... if any ... created by this kind of mistake? My feeling is nothing significant.
-
No my friend, no! I'm saying we'll point the existing staging/testing environment to the production version and stop using it as staging, instead of closing it completely like I mentioned earlier. And we'll launch a fresh instance for the staging/testing use case.
This will help us transfer the majority of the link juice of the already indexed staging/testing instance.
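For illustration, a catch-all rule at the web-server level is usually the simplest way to set this up. Here's a minimal sketch, assuming the staging box runs nginx; the hostnames are placeholders for your real ones:

```nginx
# Catch-all for the retired staging hostname: permanently redirect
# every request to the same path on production, keeping the query string.
server {
    listen 80;
    listen 443 ssl;
    server_name staging.example.com;

    # ssl_certificate / ssl_certificate_key directives still belong here,
    # so HTTPS requests can complete the TLS handshake before redirecting.

    return 301 https://www.example.com$request_uri;
}
```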
-
Why would you want to 301 a staging/dev environment to a production site? Unless you plan on making live changes to the production server (not safe), you'd want to keep them separate. Especially for eCommerce it would be important to have different environments to test and QA before pushing a change live. Making any change that impacts a number of pages could damage your ability to generate revenue from the site. You don't take down the development/testing site, because that's your safe environment to test changes before pushing updates to production.
I'm not sure I follow your recommendation. Am I missing a critical point?
-
Hi Eric,
Well, that's a valid point: bots might have considered your staging instances the main website, and that could end up giving you nothing but a facepalm.
The solution you suggested is similar to the one I suggested, in that neither gets any benefit from the existing instance: we either remove it or put noindex everywhere.
My bad! I assumed your staging/testing instance(s) got indexed only recently and are not very powerful from a domain and page authority perspective. In fact, being a developer, I should have considered the worst case from the start.
Thanks for pointing out the worst case, Eric, i.e. when your staging/testing instances are decently old and you don't want to lose their SEO value while fixing this issue. Here's my proposed solution for it: don't remove the instance, and don't put a noindex everywhere either. The better solution is to establish a 301 redirect bridge from your staging/testing instance to your original website. In this case, ~90% of the link juice that your staging/testing instances have earned will be passed on. Make sure each and every URL of the staging/testing instance properly 301 redirects to the corresponding URL on the original instance.
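If you go this route, it's worth spot-checking that the bridge really returns 301s (not 302s) and maps paths one-to-one. A quick hypothetical sketch in Python; the hostnames and paths are placeholders:

```python
# Hypothetical check: confirm staging URLs 301-redirect to their
# production counterparts. Requires the third-party "requests" package.
import requests

STAGING = "https://staging.example.com"      # placeholder staging host
PRODUCTION = "https://www.example.com"       # placeholder production host
SAMPLE_PATHS = ["/", "/category/widgets", "/product/blue-widget"]

for path in SAMPLE_PATHS:
    # allow_redirects=False so we inspect the redirect itself, not the target page
    resp = requests.get(STAGING + path, allow_redirects=False, timeout=10)
    target = resp.headers.get("Location", "")
    ok = resp.status_code == 301 and target == PRODUCTION + path
    print(f"{path}: {resp.status_code} -> {target or '(none)'} {'OK' if ok else 'CHECK'}")
```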
Hope this helps!
-
It could hurt you in the long run (Google may decide the dev site is more relevant than your live site), but this is an easy fix. Noindex your dev site: just slap a site-wide noindex meta tag across all the pages, and when you're ready to move that code to the production site, remove that tag.
Disallowing the dev site in robots.txt will help, but that's a soft request. The best way to keep the dev site from being indexed is the noindex tag. Since it seems like you want to QA in a live environment, that approach prevents search engines from indexing the site while still allowing you to test in a production-like scenario.
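For reference, the tag itself is a one-liner in the head of every page on the dev environment:

```html
<!-- Dev environment only: ask search engines not to index this page -->
<meta name="robots" content="noindex">
```

If editing templates is inconvenient, the same directive can usually be sent as an `X-Robots-Tag: noindex` HTTP response header at the web-server level, which also covers non-HTML resources such as PDFs. Note that pages must remain crawlable for the noindex to be seen, so don't combine it with a robots.txt Disallow on the same URLs.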
-
Hey,
I recently faced the same issue when staging instances got indexed accidentally and we were exposed to a duplicate content penalty (well, that's not cool). After a decent bit of research, I took these steps and got rid of the issue:
- I removed my staging instances, i.e. staging1.mysite.com, staging2.mysite.com, and so on. Removing the instances outright helps you deindex already-indexed pages faster than just blocking the whole website in robots.txt.
- I relaunched the staging instances under slightly different names, like new-staging1.mysite.com, and disallowed bots on these instances from day zero to avoid this mess again (a sample robots.txt for this is below).
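The day-zero block is just a two-line robots.txt served at the root of each new staging host (the hostname here is a placeholder); keep in mind that Disallow only stops crawling, so pair it with HTTP authentication or a noindex header if you need a hard guarantee:

```
# robots.txt at https://new-staging1.mysite.com/robots.txt
User-agent: *
Disallow: /
```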
This helped me fix the issue quickly. Hope this helps!