Is It Best to Fix Duplicate Content Issues on a Blog If URLs Are Set to "No-Index"?
-
Greetings Moz Community:
I purchased a SEMrush subscription recently and used it to run a site audit.
The audit detected 168 duplicate content issues, mostly relating to blog post tags. I suspect these issues may be due to canonical tags not being set up correctly.
My developer claims that since these blog URLs are set to "no-index," these issues do not need to be corrected. My instinct would be to avoid any risk of duplicate content by setting up canonicalization correctly. In addition, even if these pages are set to "no-index," they are still passing PageRank. Furthermore, I don't know why a reputable company like SEMrush would flag these as errors if in fact they are not errors.
So my question is: do we need to do anything with the error pages if they are already set to "no-index"? Incidentally, the site URL is www.nyc-officespace-leader.com. I am attaching a copy of the SEMrush audit.
Thanks, Alan
-
Thanks for cleaning that up, Dennis. That is great advice.
-
I sometimes encounter that with my clients. The basic thing to do is to add a canonical even though they are already noindexed, especially for themes that use certain pages within a page. It sounds crazy, but some themes actually do this, so you can't remove the duplicate page; noindexing it and then adding a canonical is good enough.
But since you mentioned these are just tags, simply noindexing them is fine. (I'm assuming these are just basic WordPress tags.)
As for your pagination question, use a canonical that links to a URL where all the posts are shown. That's the basic rule for that situation, and it's covered in Google's guidelines on pagination.
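To make both suggestions concrete: something like the following in the theme's functions.php would emit the two tags. This is a minimal sketch, assuming a stock WordPress setup, and assuming /blog/ is the hypothetical view-all URL where all the posts are shown (swap in the real one):

// A sketch, not a drop-in solution.
add_action( 'wp_head', function () {
    if ( is_tag() ) {
        // Keep tag archives out of the index, but let crawlers follow links.
        echo '<meta name="robots" content="noindex, follow">' . "\n";
    }
    if ( is_home() && is_paged() ) {
        // Point /blog/page/2, /blog/page/3, etc. at the assumed view-all URL.
        echo '<link rel="canonical" href="' . esc_url( home_url( '/blog/' ) ) . '">' . "\n";
    }
} );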
-
Hi Reserve:
Thanks for your response.
So Google is able to view this content because of the links that go to and from it? In other words, I am not protected by the no-index tag?
I am very unfamiliar with the strange tags generated by WordPress. Do you think tags such as the following can be removed without any detrimental effect? And if the URLs for these tags are removed, should redirects be added?
http://www.nyc-officespace-leader.com/blog/tag/boutique-space
http://www.nyc-officespace-leader.com/blog/tag/meatpacking-district
http://www.nyc-officespace-leader.com/blog/tag/restaurant-space
http://www.nyc-officespace-leader.com/blog/tag/retail-space
http://www.nyc-officespace-leader.com/blog/tag/store-space
http://www.nyc-officespace-leader.com/blog/tag/the-plaza-district
http://www.nyc-officespace-leader.com/blog/tag/times-square
http://www.nyc-officespace-leader.com/blog/tag/chelsea
http://www.nyc-officespace-leader.com/blog/tag/upper-east-side
http://www.nyc-officespace-leader.com/blog/tag/upper-west-side
Also, should canonical tags be added to blog URLs even if they are set to no-index? For example:
http://www.nyc-officespace-leader.com/blog/page/2
http://www.nyc-officespace-leader.com/blog/page/3
http://www.nyc-officespace-leader.com/blog/page/4
Thanks, Alan
-
I would remove them, to be safe. Google sees them regardless of the "no-index," and I think that the cleaner you can get your data, the better off you will be in the long run. While there may be no harm at this time, things always change. I know one thing for sure: you don't want a duplicate content issue.
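On Alan's redirect question: if the tag pages are removed, a 301 to a sensible destination beats leaving 404s behind. Here is a minimal sketch, assuming a standard WordPress install; the /blog/ target is an assumption, and redirecting each tag to its most relevant page would be better where one exists:

// A sketch: 301-redirect retired tag archives instead of serving 404s.
add_action( 'template_redirect', function () {
    if ( is_tag() ) {
        // /blog/ is an assumed catch-all destination, not from the thread.
        wp_safe_redirect( home_url( '/blog/' ), 301 );
        exit;
    }
} );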
Related Questions
-
My company wants to set up some blogs - what's best practice in getting started from scratch?
My company wants to set up two or three blogs (on previously unused domains), the idea being to disseminate good content that gets picked up in SERPs and acts as a lead generator, shows us to be authorities in our market, creates awareness of the brand (or of the individual employee who's doing the blogging), etc. From scratch, what are all the boxes that should be ticked to make this work from the outset? What are the must-haves? With all the ideals in place, how long could it realistically take to make this work? What are some pitfalls to look out for? Any advice in general will be appreciated. Thanks, M
Intermediate & Advanced SEO | Martin_S
-
Duplicate Content... Really?
Hi all, My site is www.actronics.eu. Moz reports virtually every product page as duplicate content, flagged as HIGH PRIORITY! I know why: Moz classes a page as duplicate if it is more than 95% similar in content/code. There's very little I can do about this, as although our products are different, the content is very similar, differing by little more than a few part numbers and the vehicle make/model. Here's an example:
http://www.actronics.eu/en/shop/audi-a4-8d-b5-1994-2000-abs-ecu-en/bosch-5-3
http://www.actronics.eu/en/shop/bmw-3-series-e36-1990-1998-abs-ecu-en/ate-34-51
Now, multiply this by ~2,000 products and 7 different languages and you'll see we have a big duplicate content issue (according to Moz's Crawl Diagnostics report). I say "according to Moz" because I do not know whether this is actually an issue for Google; 90% of our product pages rank, albeit some much better than others. So what is the solution? We're not trying to deceive Google in any way, so it would seem unfair to be hit with a duplicate content penalty; this is a legitimate dilemma where our products differ by as little as a part number. One ugly solution would be to remove the header / sidebar / footer on our product pages, as I've demonstrated here: http://woodberry.me.uk/test-page2-minimal-v2.html. This removes a lot of page bloat (code) and would bring the page difference down to 80% duplicate. (This is the tool I'm using for checking: http://www.webconfs.com/similar-page-checker.php) Other, "prettier" solutions would be greatly appreciated. I look forward to hearing your thoughts. Thanks,
Woody 🙂
Intermediate & Advanced SEO | seowoody
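As a hedged aside on that >95% figure: you can roughly approximate the check the webconfs tool performs with a few lines of PHP. This is only a sketch - similar_text() is slow on full pages, it assumes allow_url_fopen is enabled, and neither Moz nor webconfs publishes its exact algorithm - using the two example product URLs from the question:

// Crude page-similarity check; illustrative only.
$a = file_get_contents( 'http://www.actronics.eu/en/shop/audi-a4-8d-b5-1994-2000-abs-ecu-en/bosch-5-3' );
$b = file_get_contents( 'http://www.actronics.eu/en/shop/bmw-3-series-e36-1990-1998-abs-ecu-en/ate-34-51' );
similar_text( $a, $b, $percent );
// Mirror the >95% cutoff Moz is said to use.
if ( $percent > 95 ) {
    echo "Pages are {$percent}% similar - likely to be flagged as duplicates.\n";
}
-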
"Category" word in URLs of blog is it SEO Friendly URL ??
Hello respected community members, I saw many times that "Category" word comes in URL of blog. So my que is that is this negative for SEO or Positive. & if we don't wanna to come CATEGORY in URL how can we remove while URL Optimization ?
Intermediate & Advanced SEO | | sourabhrana390 -
Sudden increase in number of indexed URLs. How can I know what URLs these are?
We saw a spike in the total number of indexed URLs (from 17,000 to 165,000). What would be the most efficient way to find out what the newly indexed URLs are?
Intermediate & Advanced SEO | nicole.healthline
-
Google indexing issue?
Hey guys, After a lot of hard work, we finally fixed the problem on our site that didn't seem to show meta descriptions in Google, as well as "noindex, follow" on tags. Here's my question: in our source code, I am seeing meta descriptions on both pages and posts, as well as "noindex, follow" on tag pages; however, Google is still showing the old results, and tags are also still showing in Google search after about 36 hours. Is it just a matter of time now, or is something else wrong?
Intermediate & Advanced SEO | ttb
-
Duplicate Content / Indexing Question
I have a real estate WordPress site that uses an IDX provider to add real estate listings to my site. A new page is created as a new property comes to market, and the page is deleted when the property is sold. I like the functionality of the service, but it creates a significant number of 404s, and I'm also concerned about duplicate content, because anyone else using the same service here in Las Vegas will have thousands of the exact same property pages that I do. Any thoughts on this? And is there a way I can have the search engines index only the core 20 pages of my site and ignore future property pages? Your advice is greatly appreciated. See this link for an example: http://www.mylvcondosales.com/mandarin-las-vegas/
Intermediate & Advanced SEO | AnthonyLasVegas
-
Penalised for duplicate content, time to fix?
Ok, I accept this one is my fault, but I'm wondering about timescales to fix... I have a website, and I put an affiliate store on it, using merchant datafeeds in a bid to get revenue from the site. This was all good; however, I forgot to put noindex on the datafeed/duplicate content pages, and over a period of a couple of weeks the traffic to the site died. I have since nofollowed or removed the products, but some 3 months later my site still will not rank for the keywords it was ranking for previously. It will not even rank if I type in the site's name (Bright Tights). I have searched for the name using bright tights, "bright tights" and brighttights, but none of them return the site anywhere. I am guessing that I have been hit with a drop-X-places penalty by Google for the duplicate content. What is the easiest way around this? I have had no warning about bad links or the such. Is it worth battling on trying to get the domain back, or should I write off the domain, buy a new one and start again, minus the duplicate content? The goal of having the duplicate content store on the site was to be able to rank the store's category pages, which had unique content on them, so I could foresee no problems with that. Like Amazon et al., the categories would have lists of products (amongst other content) and you would click through to the individual product description - the duplicate page. Thanks for reading
Intermediate & Advanced SEO | Grumpy_Carl
-
"Duplicate" Page Titles and Content
Hi All, This is a rather lengthy one, so please bear with me! SEOmoz has recently crawled 10,000 webpages from my site, FrenchEntree, and has returned 8,000 duplicate page content errors. The main reason I have so many is the directories I have on site. The site is broken down into two levels of hierarchy: "Weblets" and "Articles". A weblet is a landing page, and articles are created within these weblets. Weblets can hold any number of articles - 0 to 1,000,000 (in theory) - and an article must be assigned to a weblet in order for it to work. Here's roughly how it looks in URL form: http://www.mysite.com/[weblet]/[articleID]/
Now, our directory results pages are weblets with standard content in the left and right hand columns, but the information in the middle column is pulled in from our directory database following a user query. This happens by adding the query string to the end of the URL. We have 3 main directory databases, but perhaps around 100 weblets promoting various 'canned' queries that users may want to navigate straight into. However, any one of the 100 directory-promoting weblets could return any query from the parent directory database with the correct query string. The problem with this method (as pointed out by the 8,000 errors) is that each possible permutation of search is considered to be its own URL, and therefore its own page.
The example I will use is the first alphabetically, "Activity Holidays in France": http://www.frenchentree.com/activity-holidays-france/ - this link shows you a results weblet without the query at the end, and therefore only displays the left and right hand columns as populated. http://www.frenchentree.com/activity-holidays-france/home.asp?CategoryFilter= - this link shows you the same weblet with an 'open' query on the end, i.e. display all results from this database; listings are displayed in the middle. There are around 500 different URL permutations for this weblet alone when you take into account the various categories and cities a user may want to search in.
What I'd like to do is prevent SEOmoz (and therefore search engines) from counting each individual query permutation as a unique page, without harming the visibility the directory results receive in SERPs. We often appear in the top 5 for quite competitive keywords and we'd like it to stay that way. I also wouldn't want the search engine results to display only (and therefore direct the user through to) an empty weblet because of some sort of robot exclusion or canonical classification. Does anyone have advice on how best to remove the "duplication" problem whilst keeping the search visibility? All advice welcome. Thanks Matt
Intermediate & Advanced SEO | Horizon