WordPress duplicate pages
-
I am using WordPress
and I'm getting a duplicate-content crawler error for the following two pages:
http://edustars.yourstory.in/tag/edupristine/
http://edustars.yourstory.in/tag/education-startups/
These two are tags that take you to the same page.
All the other tags/categories that take you to the same page or have the same title are also throwing errors. How do I fix this?
-
Thank You
-
Hi Bhanu,
Domisol is correct. NoFollow is talked about more frequently, but I'm referring to what is called NoIndex. Using NoIndex tells Google not to show that page in the search results. In contrast, NoFollow tells Google not to pass value through all or some of the links on a page.
I also agree with Domisol that NoFollow should not be used along with NoIndex in this situation. Allowing the link value to flow freely throughout the site and selectively choosing to remove the tag pages is a better solution. Doing this will help index more pages with more obscure tags, too.
To NoIndex your tag pages, go to the Indexation page under the SEO options. Under Indexation Rules, select "Subpages of Archives and taxonomies," "Tag Archives," and "Date-based Archives." If you're not doing anything special with the Author pages or only have one author, go ahead and select that, too. I would prefer to see you add unique content to the Category page before you NoIndex it, but that one could arguably go both ways.
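For reference, once a tag archive is set to NoIndex (with link value still flowing), the plugin outputs a robots meta tag in the page's head along these lines. This is an illustrative fragment; the exact attribute formatting can vary by plugin version:

```html
<!-- Emitted in the <head> of a noindexed tag archive page. -->
<!-- "noindex" keeps the page out of search results; "follow" still lets link value pass. -->
<meta name="robots" content="noindex,follow" />
```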
-
Hi Bhanu,
We are not talking about nofollow here. We are talking about noindex.
So you should set all of WordPress's archive pages to noindex, meaning categories, author pages, and tags: basically, all the pages that are collections of other pages (or posts).
Setting those archive pages to nofollow would be a mistake, because you'd lose the link-juice propagation to all the pages of your site.
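If you weren't using an SEO plugin, the same noindex-but-follow behavior could be sketched in the theme's functions.php. This is a hypothetical snippet, not something from the thread, and it assumes a standard theme that calls wp_head():

```php
// Hypothetical sketch: print a robots meta tag on archive pages so they are
// kept out of the index while their links are still followed.
// Add to the active theme's functions.php.
add_action( 'wp_head', function () {
    // is_category(), is_tag(), is_author(), and is_date() are core WordPress
    // conditional tags for the archive types mentioned above.
    if ( is_category() || is_tag() || is_author() || is_date() ) {
        echo '<meta name="robots" content="noindex,follow" />' . "\n";
    }
} );
```

In practice a plugin like Yoast is the safer route, since it also handles edge cases such as paginated archives and canonical URLs.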
Hope it helps,
DoMiSol
-
Yes, I am using Yoast's SEO plugin.
Can you please suggest what else should be set to nofollow other than tags?
-
I agree with Kane. Also, SEOmoz has a good guide on this: http://www.seomoz.org/blog/setup-wordpress-for-seo-success. Check it out and you will have a clear idea of what to do when you're stuck in this situation.
-
You're getting the error because, in essence, those pages are identical.
You should set your tag pages to noindex. It looks like you already have Yoast's SEO plugin installed; go into the settings and you should find it as an option.
That may or may not make the SEOMoz campaign errors go away, but it fixes the issue from an SEO perspective.