Best way to handle indexed pages you don't want indexed
-
We've had a lot of pages indexed by Google which we didn't want indexed. They relate to an AJAX category filter module that works fine for front-end customers, but under the bonnet Google has been following all of the links.
I've put a rule in the robots.txt file to stop Google from crawling any dynamic pages (anything with a ?) and also any AJAX pages, but the pages are still indexed on Google.
At the moment there are over 5,000 indexed pages which I don't want in there, and I'm worried it's causing issues with my rankings.
Would a redirect rule work, or could someone offer any advice?
-
Gavin, since you have added the noindex to the pages, the best option is to let Google crawl those pages, see the noindex, and drop them. The other option is to keep everything as is and request removal of these parameter pages via your Google Webmaster Console.
Option 1: you never know how long it will take.
Option 2: this should happen relatively fast.
I would therefore suggest keeping everything as is and putting in a removal request.
-
Right... We think we've been able to get the noindex code into the dodgy pages. The only way we could think of doing it without breaking the user interface was to put this rule into the PHP:
if (!empty($_SERVER['HTTP_X_REQUESTED_WITH']) && strtolower($_SERVER['HTTP_X_REQUESTED_WITH']) == 'xmlhttprequest')
{
    // genuine AJAX request from the front-end filter: serve the normal response
    // normal code
}
else
{
    // anything else (e.g. Googlebot requesting the URL directly) gets a bare page
    // carrying a noindex meta tag and a 404 message - roughly the following:
    echo '<html>';
    echo '<head>';
    echo '<meta name="robots" content="noindex">';
    echo '</head>';
    echo '<body>';
    echo '404';
    echo '</body>';
    echo '</html>';
}

It's rendering fine for us on the front end, if anyone would like to test... I'm just hopeful it will work for Google?
http://www.outdoormegastore.co.uk/cycling/cycling-clothing/protective-clothing.html?ajax=1
One thing I am not sure about is how Google is going to revisit the said pages. I have put various rules in the robots.txt file, as well as URL parameter handling in Webmaster Tools, to prevent any future pages like these from being crawled... Would those rules need to be removed?
-
The AJAX URLs are used by the site, though, right (for visitors)? If you 404 them, you may be breaking the functionality and not just impacting Google.
Another problem is that, if these pages are no longer crawlable and you add a page-level directive (whether it's a 404, 301, canonical, NOINDEX, etc.), Google won't process those new instructions, so they could get stuck in the index. If that's the case, it may actually be more effective to block the "ajax=" parameter with parameter handling in Google Webmaster Tools (there's a similar option in Bing).
If you know the path is cut off and this isn't a recurring problem, that could be the fastest short-term solution. You do need to monitor it, though, as the pages can re-enter the index later.
-
Gavin, that's a more generic response. In this scenario, unless you can make a 404 happen, it won't work and therefore isn't applicable. Noindex and/or the canonical tag are the choices, and I would try to get those in place if possible.
-
Thanks for all of the replies... My best option seems to be the meta noindex rule, but the pages that are getting indexed are just one long AJAX string with no access to the header area. I hope I have already 'prevented' Google from following the links in the future by adding the rules to robots.txt, but I'm now desperate to clean up (cure) the existing ones.
My next thought would be to put a rule in .htaccess (or in the PHP) so that anything with ajax in the URL returns a 404, something like the sketch below?
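Purely as a sketch of the idea (written as PHP rather than an .htaccess rule, and assuming the ajax parameter is a reliable marker for these URLs):

// Hypothetical sketch, placed early in the page before any output is sent:
// return a 410 for crawler hits on the ajax variants, while leaving genuine
// XMLHttpRequest calls from the front-end filter untouched.
$isXhr = !empty($_SERVER['HTTP_X_REQUESTED_WITH'])
    && strtolower($_SERVER['HTTP_X_REQUESTED_WITH']) == 'xmlhttprequest';

if (isset($_GET['ajax']) && !$isXhr) {
    header('HTTP/1.1 410 Gone');      // a 404 would also satisfy Google's removal guideline
    header('X-Robots-Tag: noindex');  // belt and braces
    exit;
}

That way real visitors using the filter would never see the 410; only a crawler fetching the URL outside of an AJAX context would.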
I'm worried that this may have even worse side effects on rankings, but it's based on this article that Google publishes: https://support.google.com/webmasters/bin/answer.py?hl=en&answer=59819
"To remove a page or image, you must do one of the following:
- Make sure the content is no longer live on the web. Requests for the page must return an HTTP 404 (not found) or 410 (gone) status code."
What would your thoughts be on this?
-
Definitely review George's comment as you need to figure out why they're being crawled. As Andrea said, any solution takes time, I'm sorry to say. Robots.txt is not a good solution for getting pages removed that are already indexed, especially in bulk. It's better at prevention than cure.
META NOINDEX can be effective, or you could rel=canonical these pages to the appropriate non-AJAX URL - not sure exactly how the structure is set up. Those are probably the two fastest and most powerful approaches. Google parameter handling (in Webmaster Tools) is another option, but it's a bit unpredictable whether they honor it and how quickly.
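If the canonical route fits, a rough sketch of it (assuming the clean page is simply the same URL with the query string stripped, which may not hold for every filter combination) would be something like:

// Hypothetical sketch: point each ajax variant at its clean, non-AJAX URL.
$scheme    = (!empty($_SERVER['HTTPS']) && $_SERVER['HTTPS'] != 'off') ? 'https' : 'http';
$cleanPath = strtok($_SERVER['REQUEST_URI'], '?');   // drop ?ajax=1 and any filter parameters
$canonical = $scheme . '://' . $_SERVER['HTTP_HOST'] . $cleanPath;

if (isset($_GET['ajax'])) {
    echo '<link rel="canonical" href="' . htmlspecialchars($canonical) . '" />';
}

The main thing is that the canonical target is a real, indexable page; if the filtered URLs don't map cleanly onto one, META NOINDEX is the safer choice of the two.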
As far as I recall, you can only do mass removal if everything is in a folder; there's no way to bulk-remove pages unless they're all structurally under one root URL.
-
I'm not sure if you're aware or not, but I think I know why Google is indexing these pages.
Right now, you are outputting URLs into the source code of your page as arguments to a JavaScript function call, with the full, crawlable URL passed in as a ready-made string.
I believe this is because your page (and this function call) is programmatically created. Instead of outputting the whole URL to the page, you could output only what needs to be there.
For example, output only the individual pieces (category, subcategory, product, store and so on) rather than the assembled URL.
Then change the signature of the JavaScript function so that it accepts this new input and builds the URL from your inputs:
function initSlider(price, low, high, category, subcategory, product, store, ajax /* , ... */) {
// build URL
var URL = 'http://www.outdoormegastore.co.uk/' + category + '/' + subcategory + '/' + product + '.html?_' + store + '&' + ajax;
// continue...
}
Right now, because that URL is being outputted to the page, I think Google sees it as a URL it should follow and index. If you build this URL with the function in an external JavaScript file, I don't think it will be indexed.
Your developer(s) should know what I'm talking about.
Hope this helps!
-
If they are already indexed, it's going to take time for Google to recrawl, read the tag and get them to fall out, so patience will be key. It's not a quick thing to undo.
If the pages are all in one location, you can add a robots.txt disallow for that folder and then use the removal tool in Webmaster Tools to request that the entire folder be dropped, but again, the indexing has already happened, so you are going to have to wait for all those pages to fall out.
-
Thanks for the quick reply! I'm desperate to get these removed as soon as possible now. I've got Webmaster Tools access, but requesting over 5,000 pages to be removed one by one will take too long. You can't do page removal in bulk, can you?
I'm going to work on the noindex option.
-
OMG, that does not look good. I completely understand. The best way in my opinion would be to add a noindex meta tag to these pages and let Google crawl them. Once Google recrawls them and sees the noindex, that should take care of the problem. However, be careful: you want to make sure that the noindex tag does not appear on your real pages, just the AJAX ones.
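A minimal way to keep that tag scoped to the AJAX URLs only, assuming the ajax parameter is what distinguishes them, would be something along these lines:

if (isset($_GET['ajax'])) {
    // AJAX-only URL: tell Google not to index this variant
    echo '<meta name="robots" content="noindex">';
}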
Another option might be to consider the canonical tag, but technically these pages are not duplicates; they just should not exist. Are you verified and using the Google Webmaster Console? If so, see if you can get some of these pages excluded via the URL removal tool. In my opinion, though, adding the noindex tag is the best way.