How would you handle 12,000 "tag" pages on Wordpress site?
-
We have a Wordpress site where /tag/ pages were not set to "noindex" and they are driving 25% of site's traffic (roughly 100,000 visits year to date). We can't simply "noindex" them all now, or we'll lose a massive amount of traffic. We can't possibly write unique descriptions for all of them. We can't just do nothing or a Panda update will come by and ding us for duplicate content one day (surprised it hasn't already). What would you do?
-
Yep, already implemented. Good point though.
-
Definitely. I start with the 30 day view, then go to YTD, then push the start date back to 1/1/2011. That's my 3 step process every time I'm investigating a situation.
I've seen at least 20 of our sites decline in traffic in the past few months due to the April & June Panda updates. The dates of decline in Webmaster Tools (Traffic > Search Queries) line up perfectly with the various recent Panda updates.
Fixing /tag/ issues is one thing...but we have a monumental task of rewriting massive amounts of product descriptions next. We also have a fair amount of "no-indexing" or canonicalizing to do with our syndicated content. We'll be better for it in the end. I only wish I knew about these situations much sooner.
As I tell everyone, protect your unique content with all you've got...and keep duplicate content nowhere near your site. It's just too risky.
-
Additionally, make sure your posts have rel=canonical.
-
Are you looking at your analytics as far back as early 2011?
I've come across people who were hit on a known Panda update day but weren't aware of it... as strange as it may sound.
-
Thank you both...and, we're thinking alike. I recently went through our 60+ Wordpress sites addressing the issue of /tag/ pages that weren't set to "noindex", and also ensuring they weren't in the sitemap via our Sitemap plugin.
For the sites that had hundreds or thousands of /tag/ pages but very little traffic in Google Analytics (Search > Organic, with Landing Page as the primary dimension), I just went ahead and set them all to "noindex".
For sites where the /tag/ pages were driving a fair amount of traffic (10% of site total or more), I had our editors write unique descriptions for the top 50-100 (like we do with category pages) and then we set the rest to "noindex,follow" via the meta robots tag.
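That triage can be sketched in a few lines. This is just an illustration, assuming you've already exported tag-page landing traffic as (URL, visits) pairs; the function name, the 10% share threshold, and the top-50 cutoff mirror the numbers above but are tunable:

```python
def triage_tag_pages(tag_pages, site_visits, rewrite_count=50, share_threshold=0.10):
    """Decide which /tag/ pages get unique descriptions and which get noindexed.

    tag_pages: list of (url, visits) tuples from an organic landing-page report.
    site_visits: total organic visits for the whole site over the same period.
    Returns (urls_to_rewrite, urls_to_noindex).
    """
    tag_visits = sum(v for _, v in tag_pages)
    # If tag pages drive only a sliver of site traffic, noindex them all.
    if site_visits == 0 or tag_visits / site_visits < share_threshold:
        return [], [url for url, _ in tag_pages]
    # Otherwise: write unique descriptions for the top pages, noindex the rest.
    ranked = sorted(tag_pages, key=lambda p: p[1], reverse=True)
    rewrite = [url for url, _ in ranked[:rewrite_count]]
    noindex = [url for url, _ in ranked[rewrite_count:]]
    return rewrite, noindex
```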
For this one site...I just haven't found an easy solution that didn't leave an uneasy feeling in my stomach. It's tough to give up 25% of your traffic in hopes that Google will get it right and rank your real content higher in place of these /tag/ pages.
Uh oh...I just checked Analytics and our organic traffic started creeping down around July 13th. When I look at just the /tag/ pages in the organic landing pages section, I see that their traffic dropped by 50-60%. Something bad is happening. I am setting them to "noindex" immediately.
Definitely can't wait to read your post. I'll be writing my own on www.kernmedia.com in the near future as well.
-
Looking forward to that post, Dan.
-
Hi
I'm actually going to be addressing this exact question on a post for Moz in the coming weeks - so keep an eye out for that.
But in short, here's what I do:
Analytics
- run a landing-page report filtered to tag pages, over the last three months
- apply an advanced segment to see Google-only traffic
- dump the report into a CSV
Webmaster Tools
- view an impressions/clicks report by top pages (not keywords) - and zoom the date range out as far as you can
- filter for web only (not images)
- dump the report into a CSV
VLookup in Excel
Using a VLookup in Excel, combine the two reports by matching rows on URL (you'll end up discarding some non-tag pages from WMT). The end result is a master spreadsheet with the following columns:
- URL
- Impressions
- clicks
- avg position
- visits
- pages/visit
- avg visit duration
- % new visits
- bounce rate
(These are all the default report metrics. I actually prefer a custom landing page report in analytics, but this works fine.)
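The VLookup step above can also be done in a few lines of Python instead of Excel. A sketch, assuming both exports share a URL column (the header names here are illustrative - use whatever your exports actually contain):

```python
import csv

def merge_reports(wmt_csv_path, ga_csv_path, url_field="URL"):
    """Join a Webmaster Tools top-pages export with a GA landing-page export on URL.

    WMT rows with no matching GA landing page are discarded -- the same
    pruning of non-tag pages you'd do by hand after a VLookup.
    """
    with open(ga_csv_path, newline="") as f:
        ga_rows = {row[url_field]: row for row in csv.DictReader(f)}
    merged = []
    with open(wmt_csv_path, newline="") as f:
        for row in csv.DictReader(f):
            ga = ga_rows.get(row[url_field])
            if ga is None:
                continue  # URL only in WMT: drop it, as in the manual process
            combined = dict(row)
            combined.update(ga)  # URL, impressions, clicks + visits, bounce, etc.
            merged.append(combined)
    return merged
```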
Analyze
Then you do your sorting, filtering, etc. to decide how valuable the tag traffic has been. In general, you're looking for an overwhelming reason to keep those pages in the index: they might get visits, but what's the on-site behavior like? Maybe they get visits, but only from a small handful of tag pages?
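As a rough sketch of that filtering (the thresholds below are made up for illustration - tune them to your own site), you might flag tag pages whose numbers suggest they add little:

```python
def low_value_tags(rows, min_visits=10, max_bounce=0.85):
    """Flag tag pages that get almost no visits or show poor on-site behavior.

    rows: merged report rows as dicts with 'URL', 'Visits', 'Bounce Rate' keys.
    Returns the URLs that look like safe noindex candidates.
    """
    flagged = []
    for row in rows:
        visits = float(row["Visits"])
        bounce = float(row["Bounce Rate"])
        if visits < min_visits or bounce > max_bounce:
            flagged.append(row["URL"])
    return flagged
```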
In the post I do, I'll cover more about how to analyze this report etc.
As Klarke put so well, the actual posts should rank in their place - and visitors tend to have better results when they land on the posts themselves.
Remove
If you decide to remove, do so carefully. Do it on a weekend or during another low-traffic period. If you use Yoast, simply select the option to noindex tag archives.
Also, remember to exclude tags from your XML sitemap.
Then watch Webmaster Tools and monitor the pages' removal from the index.
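It's also worth spot-checking that the tag actually shows up in the rendered HTML after the change goes live. A small stdlib checker - this assumes you've already fetched the page source as a string, however you like:

```python
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collect the content of every <meta name="robots"> tag in a page."""

    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name", "").lower() == "robots":
            self.directives.append(attrs.get("content", "").lower())

def is_noindexed(html):
    """True if any robots meta tag on the page contains a noindex directive."""
    parser = RobotsMetaParser()
    parser.feed(html)
    return any("noindex" in d for d in parser.directives)
```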
--- I did this process on a site with 9,000 tag pages in the index, and the results were very good.
-Dan
-
I would "noindex,follow" them. Don't block them with robots.txt.
With that many pages, you're certainly running the risk of being hit by Panda. Those tag pages shouldn't be ranking; instead, the individual posts should be in those positions. If I were you, I would take the chance and do the noindex, with the expectation that Google will appropriately rank the posts in their place.
I'd say those are better odds than losing 50-80% of traffic in a Panda update.
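The robots.txt distinction matters: a Disallow rule stops Google from crawling the page at all, so it never sees the noindex directive (and the URL can linger in the index). What you want in the <head> of each tag archive is just this one tag - which is exactly what a plugin setting like Yoast's "noindex tag archives" emits for you:

```html
<!-- lets Google crawl the page and follow its links, but keeps it out of the index -->
<meta name="robots" content="noindex,follow">
```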