Huge number of indexed pages with no content
-
Hi,
We have accidentally had Google index lots of our pages with no useful content on them at all.
The site in question is a directory site with tags and cities. Some cities have suppliers for almost all the tags, but there are lots of cities where we have suppliers for only a handful of tags.
The problem occurred when we created a page for each city, listing the tags as links.
Unfortunately, our programmer listed all the tags, not only the ones where we have businesses offering their services, but every single one!
We have 3,142 cities and 542 tags, which works out to roughly 1.7 million potential pages. I guess you can imagine the problem this caused!
Now I know that Google might simply ignore these empty pages and not crawl them again, but when I check a city with only 40 providers (city site:domain), I still see 1,050 pages indexed. (Yes, we have some issues between the 550 and the 1,050 as well, but first things first.)
These pages might not be crawled again, but they will be clicked, and the bounces and the overall user experience will be terrible.
My idea is to use meta noindex on all of these empty pages, and perhaps also to 301 redirect each empty category page directly to the main page of the given city.
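For illustration, this is the tag I have in mind for the empty pages (just a sketch; the example path is made up and is not our real URL structure):

<!-- on an empty city/tag page, e.g. /example-city/example-tag/ (hypothetical path) -->
<meta name="robots" content="noindex, follow">

I would use "noindex, follow" rather than "noindex, nofollow", so the links on these pages are still followed.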
Can this work the way I imagine? Is there any better solution to cut this nightmare short?
Thank you in advance.
Andras
-
Thank you again, John. I will fix this based on our discussion.
-
I think noindex is slightly superfluous, as the 301 will take care of it, and it will also point people to a proper result and give Google a redirect to follow. (A URL that returns a 301 never serves its HTML, so Google would never see a meta noindex on it anyway.)
However, SEOMoz's robots information page suggests:
"In most cases, meta robots with parameters "noindex, follow" should be employed as a way to restrict crawling or indexation."
So maybe consider that...
As for robots.txt, you can check out SEOMoz's robots information page, which has information on wildcards you could use. I THINK something like http://domain.com/*/tags would work?
Not quite sure on that last bit though...
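Something like this is what I mean by a wildcard rule (a rough sketch only; it assumes the tag pages sit under a common path segment, which may not match your actual URLs):

User-agent: *
Disallow: /*/tags/

One caveat: robots.txt only blocks crawling, not indexing, and Google has to be able to recrawl a page to see a noindex tag, so blocking and noindexing the same URLs can work against each other.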
-
Thank you for your reply, Josh.
I will then use the 301, but should I also use the noindex tag to get these pages removed from the index?
Does it reinforce my intention, or does it add nothing extra to the process? Perhaps they should not be used together at all, as they are basically meant for different tasks.
(Unfortunately, robots.txt is not really a solution with our URL structure: since all the cities have at least a couple of valid tags, I can't specify a single path to be excluded from crawling, and I would rather not add 2,000+ cities individually.
As for GWT, URL removal is also not an option at this scale: I have 100,000+ no-value pages to be removed, and with the limit of 500 per month that would take over 16 years.)
-
I would agree. Just set up a 301 redirect so that users don't bounce and actually get directed to something remotely useful, even just a listing of all the tags around the site or a home page (and even if you do the below, the redirect ensures users who stumble on these pages are still happy).
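If the site runs on Apache, the redirect could be something like this in .htaccess (purely a sketch; the path pattern is made up, since I don't know your real URL structure):

# 301 an empty city/tag page up to its city page (hypothetical paths)
RewriteEngine On
RewriteRule ^([^/]+)/example-empty-tag/?$ /$1/ [R=301,L]

In practice you would generate rules like that, or handle the redirect in application code, only for the city/tag combinations that have no suppliers.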
You could also use a robots.txt file to indicate which pages you don't want crawled, and finally you may also use Google's Webmaster Tools to manually remove particular pages!
A combo of all of those will work a treat!
Related Questions
-
Duplicate content, although page has "noindex"
Hello, I had an issue with some pages being listed as duplicate content in my weekly Moz report. I've since discussed it with my web dev team and we decided to stop the pages from being crawled. The web dev team added this coding to the pages: <meta name='robots' content='max-image-preview:large, noindex dofollow' />, but the Moz report is still reporting the pages as duplicate content. Note from the developer: "So as far as I can see we've added robots to prevent the issue, but maybe there is some subtle change that's needed here. You could check in Google Search Console to see how it's seeing this content, or you could ask Moz why they are still reporting this and see if we've missed something?" Any help much appreciated!
Technical SEO | rj_dale
-
Site Crawl -> Duplicate Page Content -> Same pages showing up as duplicates that are not
These, for example:
https://im.tapclicks.com/signup.php/?utm_campaign=july15&utm_medium=organic&utm_source=blog
https://im.tapclicks.com/signup.php?_ga=1.145821812.1573134750.1440742418
https://im.tapclicks.com/signup.php?utm_source=tapclicks&utm_medium=blog&utm_campaign=brightpod-article
https://im.tapclicks.com/signup.php?utm_source=tapclicks&utm_medium=marketplace&utm_campaign=homepage
https://im.tapclicks.com/signup.php?utm_source=blog&utm_campaign=first-3-must-watch-videos
https://im.tapclicks.com/signup.php?_ga=1.159789566.2132270851.1418408142
https://im.tapclicks.com/signup.php/?utm_source=vocus&utm_medium=PR&utm_campaign=52release
Any suggestions/directions for fixing, or should I just disregard this "High Priority" Moz issue? Thank you!
Technical SEO | writezach
-
Can Googlebot crawl the content on this page?
Hi all, I've read the posts in Google about Ajax and JavaScript (https://support.google.com/webmasters/answer/174992?hl=en) and also this post: http://moz.com/ugc/can-google-really-access-content-in-javascript-really. I am trying to evaluate whether the content on this page, http://www.vwarcher.com/CustomerReviews, is crawlable by Googlebot. It appears not to be. I perused the sitemap and don't see any ugly Ajax URLs included, as Google suggests doing. Also, the page is definitely indexed, but it appears the content is only indexed via its original sources (Yahoo!, Citysearch, Google+, etc.). I understand why they are using this dynamic content: it looks nice to an end user and requires little to no maintenance. But is it providing them any SEO benefit? It appears to me that it would be far better to take these reviews and simply build them into the HTML. Thoughts?
Technical SEO | danatanseo
-
Why are some pages now duplicate content?
It is probably a silly question, but all of a sudden the following pages of one of my clients are reported as duplicate content. I cannot understand why; they weren't before...
http://www.ciaoitalia.nl/product/pizza-originale/mediterranea-halal
http://www.ciaoitalia.nl/product/pizza-originale/gyros-halal
http://www.ciaoitalia.nl/product/pizza-originale/döner-halal
http://www.ciaoitalia.nl/product/pizza-originale/vegetariana
http://www.ciaoitalia.nl/product/pizza-originale/seizoen-pizza-estate
http://www.ciaoitalia.nl/product/pizza-originale/contadina
http://www.ciaoitalia.nl/product/pizza-originale/4-stagioni
http://www.ciaoitalia.nl/product/pizza-originale/shoarma
Thanks for any help in the right direction 🙂
Technical SEO | MarketingEnergy
-
Can you noindex a page, but still index an image on that page?
If a blog is centered around visual images, and we have specific pages with high-quality content that we plan to index and drive our traffic to, but we have many pages with just our images... what is the best way to go about getting these images indexed? We want to noindex all the pages with just images because they are thin content. Can you noindex,follow a page, but still index the images on that page? Please explain how to go about this concept.
Technical SEO | WebServiceConsulting.com
-
How to prevent duplicate content issues and indexing of a CDN subdomain?
Hello! I wish to use a CDN server (MaxCDN) to optimize my page loading time. I have to use a custom CDN subdomain to use these services. If I add a subdomain, then my blog has two URLs (http://www.example.com and http://cdn.example.com) for the same content. I have more than 450 blog posts, so I think it will cause duplicate content issues. In this situation, what is the best method (rel=canonical or no-indexing) to prevent duplicate content issues and prevent indexing of the subdomain, while still getting the optimum service from the CDN? Thanks!
Technical SEO | Godad
-
Tags causing Duplicate page content?
I was looking through the 'Duplicate Page Content' and 'Too Many On-Page Links' errors, and they all seem to be linked to the 'Tags' on my blog pages. Is this really a problem, and if so, how should I be using tags properly to get the best SEO rewards?
Technical SEO | zapprabbit
-
Is there any value to a home page URL adding the /index.html ?
For proper SEO, which version would you prefer?
A. www.abccompany.com
B. www.abccompany.com/index.html
Is there any value or difference with either home page URL?
Technical SEO | theideapeople