Removing Dynamic "noindex" URLs from the Index
-
Six months ago my client's site was overhauled, and the user-generated search pages had an index tag on them. I switched that to noindex, but didn't catch it fast enough to avoid hundreds of pages getting indexed in Google.
It's been months since switching to the noindex tag and the pages are still indexed. What would you recommend? Google crawls my site daily - but never the pages that I want removed from the index.
I am trying to avoid submitting hundreds of these dynamic URLs to the removal tool in Webmaster Tools. Suggestions?
-
Hooray! Usually, I just give my advice and then run away, so it's always nice to hear I was actually right about something. Seriously, glad you got it sorted out.
-
Just a follow-up to your suggestion.
I created sitemaps for the pages I want removed using the Google Sheets importXML function, which saved a lot of time.
It took a couple of weeks, but all of the pages, and similar pages, have successfully been removed from the index - even the similar pages I hadn't yet added to the sitemap (importXML limits the results to 100).
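For anyone trying the same thing, the formula is roughly of this form (the search query and XPath here are illustrative placeholders only, and Google may throttle or block automated requests):
=IMPORTXML("https://www.google.com/search?q=site:example.com+inurl:search&num=100", "//h3/a/@href")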
Your suggestion worked!
-
I can't 404 dynamic search pages.
-
There's a mix of search pages and old mobile pages.
For the search pages, I've been testing having the canonical point to the default search page. I've seen a slight drop in these pages - but I guess I just have to be more patient.
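In other words, each search-result page gets a canonical tag roughly like this in its <head> (the URL is just an example):
<link rel="canonical" href="http://www.example.com/search" />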
For the other pages, the crawl path is no longer there, like you were mentioning. I like the idea of setting up the XML sitemap - I never even thought of making a bad/indexed-page sitemap. I'll give that a shot! Thankfully this will be a quick job with the importXML function in Google Sheets. Great tip - hopefully it'll work.
-
Is there a crawl path to them currently? One issue I see a lot is that a bunch of pages get indexed, the path is found and cut off, NOINDEX (canonical, 301, etc.) is added, but then the pages never get re-crawled. Since they don't get recrawled, the page-level directive never gets honored.
If there's a URL parameter involved, you could use parameter-handling in GWT - it's not a perfect solution, but it sometimes seems to work without a re-crawl.
The other option would be to create a new XML sitemap with all of the bad/indexed URLs. This may push Google to re-crawl those pages and finally see the tags, so they get deindexed. It's a bit safer than re-opening the crawl paths.
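A stripped-down sitemap of the bad URLs might look something like this (the URLs are placeholders for the actual indexed search pages):
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
<url><loc>http://www.example.com/search?q=old-query-1</loc></url>
<url><loc>http://www.example.com/search?q=old-query-2</loc></url>
</urlset>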
If they are being crawled and Google is just ignoring the NOINDEX for some reason, I'd try to 301 or canonical those pages to a primary search page, if that's feasible (probably canonical, since you don't want to 301 actual visitors away from their results). Sometimes, if a signal isn't working for that long, you just have to shake Google and try a different signal. Even following their exact recommendations, it rarely works as planned at large scale.
-
Don't use GWMT's removal tool for URLs that simply shouldn't be in the index (unless they expose sensitive information). Best practice is to exclude them in robots.txt and also to ensure that the pages either 404 or have a noindex,noarchive tag.
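The tag itself is the standard robots meta tag in the page's <head>:
<meta name="robots" content="noindex,noarchive" />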
-
Change the site structure and let the pages 404; Google will deindex them if they are not being linked to.
-
You could try adding the pages you want to remove to your robots.txt file. Since you're not linking to them, and it's very unlikely that Googlebot will index those pages naturally now, this might be a better way of telling it which pages to explicitly not index.
I'm not really sure how quickly this will trigger Google to remove those pages from the index - but they do reference robots.txt on the actual "Remove URLs" page of WMT: "Use **robots.txt** to specify how search engines should crawl your site, or request **removal** of URLs from Google's search results ..."
For that technique, you'd want to add a user-agent line followed by an entry like this for each of the pages you want to remove:
User-agent: *
Disallow: /oldpage1toremove.php
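Since these are dynamic search URLs, you could also use a wildcard pattern rather than listing every page individually - Googlebot supports * in robots.txt. Something along these lines (the parameter is just a placeholder for whatever your search URLs actually contain):
Disallow: /*?search=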
That should work. If it doesn't, then I would probably just submit the requests through the "Remove URLs" tool.
Related Questions
-
Rel="prev" / "next"
Hi guys, The tech department implemented rel="prev" and rel="next" on this website a long time ago.
We also added a self-referencing canonical tag to each page (pointing to its 'own' URL). We're talking about the following situation: https://bit.ly/2H3HpRD. However, we still see a lot of the paginated pages in the SERP.
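So on, say, page 2 of a series, the head looks roughly like this (URLs simplified to examples):
<link rel="prev" href="https://www.example.com/category?page=1" />
<link rel="next" href="https://www.example.com/category?page=3" />
<link rel="canonical" href="https://www.example.com/category?page=2" />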
Is this just a case of rel="prev" and "next" being directives to Google?
And in this specific case, is Google deciding not only to show the 1st page in the SERP, but to still show most of the paginated pages as well? Please let me know what you think. Regards,
Tom
-
'Nofollow' footer links from another site, are they 'bad' links?
Hi everyone,
One of my sites has about 1,000 'nofollow' links from the footer of another of my sites. Are these in any way hurtful? Any help appreciated.
-
What to do when all products are one-of-a-kind WYSIWYG and URLs are continuously changing. Lots of 404s
Hey guys, I'm working on a website with WYSIWYG one-of-a-kind products, and the URLs are continuously changing. There are a lot of duplicate page titles (56 currently), but that number is always changing too. Let me give you a little background on the website. The site sells different types of live coral, so there may be anywhere from 20 - 150 corals of the same species. Each coral is a unique size, color, etc. When a coral gets sold, the site owner trashes the product, creating a new 404. Sometimes the URL gets indexed, other times it doesn't, since the corals get sold within hours/days. I was thinking of optimizing each product for a keyword and re-using the URL by having the client update the picture and price, but that still leaves a lot more products than keywords. Here is an example of the corals with the same title: http://austinaquafarms.com/product-category/acans/ Thanks for the help guys. I'm not really sure what to do.
-
What is a "Bad Link" in Google's eyes? Low DA?
Hi there, I'm going through my link profile and I noticed I have a few links that are from <10 DA sites. One has a DA of 6. Should I remove these? Aside from any referral traffic I receive from these links (I know there is none), are these links hurting me?
What should I look out for in a site I may guest post on? Thanks!
Travis
-
How to do a 301 redirect for URLs with this structure?
In an effort to clean up my URLs, I'm trying to shorten them by using a 301 redirect in my .htaccess file. How would I set up a rule to grab all URLs with a specific structure and send them to a new, shorter URL? Examples:
http://www.yakangler.com/articles/reviews/other-reviews/item/article-title
http://www.yakangler.com/reviews/article-title
So in the example above, dynamically redirect all URLs with /articles/reviews/other-reviews/item/ in them to /reviews/, so that:
http://www.yakangler.com/articles/reviews/boat-reviews/item/1550-review-nucanoe-frontier
http://www.yakangler.com/articles/reviews/other-reviews/item/1551-review-spyderco-salt
http://www.yakangler.com/articles/reviews/fishing-gear-reviews/item/1524-slayer-inc-sinister-swim-tail
would become:
http://www.yakangler.com/reviews/1550-review-nucanoe-frontier
http://www.yakangler.com/reviews/1551-review-spyderco-salt
http://www.yakangler.com/reviews/1524-slayer-inc-sinister-swim-tail
with one 301 redirect rule in my .htaccess file.
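My rough, untested guess is something along these lines (assuming mod_rewrite is enabled, with a pattern covering any review sub-category), but I'd like to confirm the right approach:
RewriteEngine On
# Redirect /articles/reviews/<anything>/item/<slug> to /reviews/<slug> with a 301
RewriteRule ^articles/reviews/[^/]+/item/([^/]+)/?$ /reviews/$1 [R=301,L]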
-
What's the best .NET blog solution?
I asked our developers to implement a WordPress blog on our site and they feel that the technology stack that is required to support WP will interfere with a number of different .NET production applications on that server. I can't justify another server just because of a blog either. They want me to find a .NET blog solution. The only thing that looks decent out there is dotnetblogengine.net. Has anyone had any experience with this tool or any others like it? Thanks, Alex
-
Rewriting dynamic URLs to static
We're currently working on an SEO project for http://www.gear-zone.co.uk/. After a crawl of their site, tons of duplicate content issues came up. We think this is largely down to the use of their brand filtering system, which works like this: by clicking on a brand, the site generates a URL with the brand keywords in it. For example:
http://www.gear-zone.co.uk/3-season-synthetic-cid77.html
filtered by the brand Mammut becomes:
http://www.gear-zone.co.uk/3-season-synthetic-Mammut-cid77.html?filter_brand=48
This was done by a previous SEO agency in order to prevent duplicate content. We suspect it has made the issue worse, though, as removing the dynamic string from the end of the URL displays the same content as the unfiltered page. For example:
http://www.gear-zone.co.uk/3-season-synthetic-Mammut-cid77.html
shows the same content as:
http://www.gear-zone.co.uk/3-season-synthetic-cid77.html
Now, if we're right in thinking that Google is unlikely to crawl the dynamic filter, this would seem to be the root of the duplicate issue. If this is the case, would rewriting the dynamic URLs to static on the server side be the best fix? It's a Windows Server/ASP site. I hope that's clear! It's a pretty tricky issue and it would be good to know your thoughts. Thanks!
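For illustration, we were picturing something roughly like this single-brand rule in web.config (inside <system.webServer>, using the IIS URL Rewrite module) - just a sketch, since a real implementation would presumably need a rewrite map covering every brand/ID pair:
<rewrite>
  <rules>
    <rule name="Mammut brand filter" stopProcessing="true">
      <!-- Serve the brand-filtered view for the clean brand URL instead of the unfiltered page -->
      <match url="^3-season-synthetic-Mammut-cid77\.html$" />
      <action type="Rewrite" url="3-season-synthetic-cid77.html?filter_brand=48" appendQueryString="false" />
    </rule>
  </rules>
</rewrite>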