I try to apply best duplicate content practices, but my rankings drop!
-
Hey,
An audit of a client's site revealed that due to their shopping cart, all their product pages were being duplicated.
http://www.domain.com.au/digital-inverter-generator-3300w/
and
http://www.domain.com.au/shop/digital-inverter-generator-3300w/
The easiest solution was to just block all /shop/ pages in robots.txt (submitted via Google Webmaster Tools; redirects were not an easy option).
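For reference, a robots.txt block of that kind would presumably look something like this (the exact file on the site may differ):

```
User-agent: *
Disallow: /shop/
```

Note this only stops compliant crawlers from fetching /shop/ URLs; it does not remove them from the index or pass on any link value.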
This was about 3 months ago, and in months 1 and 2 we undertook some great marketing (soft social bookmarking, updating the page content, Flickr profiles with product images, product manuals on SlideShare, etc.).
Rankings went up and so did traffic.
In month 3, the changes in robots.txt finally hit, and rankings have decreased quite steadily over the last 3 weeks.
I'm so tempted to take off the robots.txt restriction on the duplicate content... I know I shouldn't, but it was working so well without it.
Ideas, suggestions?
-
Agreed with Alan (deeper in the comments) - you may have cut off links to these pages or internal link-juice flow. It would be much better to either 301-redirect the "/shop" pages or use the canonical tag on those pages. In Apache, the 301 is going to be a lot easier - if "/shop/product" always goes to "/product" you can set up a rewrite rule in .htaccess and you don't even need to modify the site code (which site-wide canonical tags would require).
The minor loss from the 301s should be much less than the problems that may have been created with robots.txt. As Alan said, definitely re-point your internal links to the canonical (non-shop) version.
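A minimal sketch of that rewrite rule, assuming Apache with mod_rewrite enabled and that every /shop/X URL has a matching /X (adjust the pattern if your URL structure differs):

```apache
# .htaccess in the site root
RewriteEngine On

# 301-redirect any /shop/... URL to the same path without /shop/
RewriteRule ^shop/(.*)$ /$1 [R=301,L]
```

Because this lives in .htaccess, no site code needs to change, which is the advantage over site-wide canonical tags mentioned above.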
-
No, I mean internal links.
If you do a 301, your internal links pointing to the /shop/ URLs will still work, but they will lose a little link juice when they are 301-redirected. You should point them to the final destination: the non-shop URL.
-
No, it was a very new site. There were only 10 links, all to the root.
Thanks for your help.
-
Yes, I would 301 then.
Are there any links pointing to the /shop/ version, i.e. internal links? I would fix those as well: since 301s leak link juice, you should create links that go directly to the destination page where you can.
-
The difference between blocking something in robots.txt and a 301?
The duplicates are actively created.
When the products are added to the cart plugin, they automatically create the /shop/product page. These pages were horrible for SEO, and as they were automatically created they could not be edited easily (the plugin developers clearly had no SEO understanding).
My client's developer created a new WordPress post for every product and added a shortcode calling the product from the plugin. This created the duplicate. As these are WordPress posts, the SEO was far more adaptable.
-
I don't understand the difference; is this the reason behind the duplicates?
-
No, it's Apache.
Would you guys say it's best to just 301-rewrite all /shop/product URLs to /product, then unblock them in robots.txt?
-
I work with Microsoft technologies; I don't work with CMSs like WordPress.
On IIS I would use an outbound URL rewrite rule to insert the meta tag. You can do this without touching the website code.
Are you by any chance hosting on IIS, or are you on Apache?
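An untested sketch of that approach, using the IIS URL Rewrite module's outbound rules in web.config (rule and precondition names here are made up, and the pattern assumes HTML responses with a closing head tag):

```xml
<system.webServer>
  <rewrite>
    <outboundRules>
      <!-- Hypothetical rule: inject a noindex meta tag into /shop/ pages -->
      <rule name="NoindexShopPages" preCondition="IsHtml">
        <match filterByTags="None" pattern="&lt;/head&gt;" />
        <conditions>
          <add input="{URL}" pattern="^/shop/" />
        </conditions>
        <action type="Rewrite"
                value="&lt;meta name=&quot;robots&quot; content=&quot;noindex, follow&quot; /&gt;&lt;/head&gt;" />
      </rule>
      <preConditions>
        <preCondition name="IsHtml">
          <add input="{RESPONSE_CONTENT_TYPE}" pattern="^text/html" />
        </preCondition>
      </preConditions>
    </outboundRules>
  </rewrite>
</system.webServer>
```

The idea is that the server rewrites the outgoing response, appending the meta tag just before the closing head tag on /shop/ pages, so the CMS itself is never touched.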
-
The shopping cart WP plugin creates the /shop/product pages automatically. I have very little control over them.
Instead, the developer has created post pages and inserted the product shortcodes (this gives the /product effect). I have far more control over these pages, and as such they are far better for SEO.
Do you know of a way I can noindex/follow all /shop pages, the way I blocked them in robots.txt?
-
I have been telling others not to do what you have done; it is better to use "noindex, follow" tags instead.
Link juice flows into pages blocked by robots.txt through the links pointing at them, never to be seen again. If you use the noindex, follow meta tag, you allow the link juice to flow both in and out.
The best idea is not to have the duplicates at all; after that, you should use a canonical tag, and if that is not possible, use the noindex, follow tag.
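To answer the robots.txt question directly: robots.txt can only block crawling, not indexing, so there is no robots.txt directive for this. The noindex has to go in the page itself (or in an X-Robots-Tag response header). The tag would sit in the head of each duplicate /shop/ page:

```html
<!-- In the <head> of each duplicate /shop/ page -->
<meta name="robots" content="noindex, follow" />
```

Crucially, the page must remain crawlable (i.e. not blocked in robots.txt) for search engines to ever see this tag.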