Is there a bulk way to remove over 600 old, nonexistent pages from the Google search results?
-
When I search site:alexanders.co.nz on Google, it still shows over 900 results.
More than 600 of those pages no longer exist, and the 404/410 errors aren't doing the job.
The only way I can think of is to do it manually in Search Console with the "Remove URLs" tool, but that's going to take ages.
Any idea how I can take down all those zombie pages from the search results?
-
Just here to add a little to Will's almost complete answer:
A 'site:' search often shows results that won't be displayed in regular Google searches, and it doesn't represent, entirely or precisely, the pages that are actually indexed. I'd suggest:
1- If those pages are already serving 404 or 410, wait a little. Google will stop showing them in search results, and eventually they won't appear in a site: search either. You can check whether those URLs are being shown in searches through Search Console.
2- There is a script made by a webmaster that helps you use the GSC URL removal tool on a big list of URLs. Please use it carefully, and try it first on a low-stakes GSC property.
Hope it helps.
Best of luck.
Gaston -
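Gaston's first point, confirming that the dead URLs really do serve 404 or 410 before waiting on Google, can be bulk-checked with a short script. A minimal sketch in Python; the sample URLs and statuses are made up, and in practice you would feed in your own list:

```python
import urllib.request
import urllib.error
from collections import Counter

def classify(status: int) -> str:
    """Bucket an HTTP status code by what it signals to Google."""
    if status in (404, 410):
        return "gone"       # Google will eventually drop these
    if 300 <= status < 400:
        return "redirect"
    if status == 200:
        return "alive"      # still serving content, so no reason to de-index
    return "other"

def check(url: str) -> int:
    """Fetch a URL's status code (note: urlopen follows redirects)."""
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status
    except urllib.error.HTTPError as e:
        return e.code

def summarize(pairs):
    """pairs: iterable of (url, status); returns a Counter of buckets."""
    return Counter(classify(status) for _, status in pairs)

# Offline example with placeholder URLs and statuses; live, you would
# build the pairs with [(u, check(u)) for u in your_url_list].
sample = [
    ("https://alexanders.co.nz/old-page-1", 404),
    ("https://alexanders.co.nz/old-page-2", 410),
    ("https://alexanders.co.nz/still-here", 200),
]
print(summarize(sample))  # Counter({'gone': 2, 'alive': 1})
```

Anything that comes back "alive" is a page you still need to take down on the server before Google will ever drop it.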
What is the business issue this is causing? Are you seeing these 404 / 410 pages appearing in actual searches?
If it's just that they remain technically indexed, I'd be tempted not to be too worried about it - they will drop out eventually.
Unfortunately, most of the ways to get pages (re-)indexed are only appropriate for real pages that you want to remain in the index (e.g. include them in a new sitemap file and submit that), or they work on individual pages, which has the same downside as removing them via Search Console one by one.
You can remove whole folders at a time via Search Console, if that would speed things up - do the removed pages group neatly into folders?
Otherwise, I would probably consider prioritising the list (using data about which are getting visits or visibility in search) and removing as many as you can be bothered to work through.
Hope that helps.
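If folder-level removal is an option, a quick way to find out is to group the dead URLs by their first path segment and see whether a few folders cover most of them. A rough sketch, with invented URLs standing in for the real list:

```python
from collections import Counter
from urllib.parse import urlparse

def folder_counts(urls):
    """Count dead URLs per top-level folder ('/' for root-level pages)."""
    counts = Counter()
    for url in urls:
        segments = [s for s in urlparse(url).path.split("/") if s]
        key = "/" + segments[0] + "/" if len(segments) > 1 else "/"
        counts[key] += 1
    return counts

# Placeholder URLs for illustration only.
dead = [
    "https://alexanders.co.nz/news/2009/old-story",
    "https://alexanders.co.nz/news/2010/older-story",
    "https://alexanders.co.nz/products/discontinued",
    "https://alexanders.co.nz/lonely-page",
]
print(folder_counts(dead).most_common())
# [('/news/', 2), ('/products/', 1), ('/', 1)]
```

If one or two prefixes dominate the counts, a couple of folder removals in Search Console replace hundreds of one-by-one requests.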
-
Hi, thanks for that. The problem is that those pages are really old and generate zero traffic, so we set up a 404 error page a long time ago, but Google isn't removing them: with no traffic there is no crawling, and without a few crawls Google never learns that those pages don't exist anymore. They are literally zombie pages! Any ideas?
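One trick that is sometimes suggested for exactly this no-crawl problem: temporarily submit a sitemap in Search Console that lists only the dead URLs, so Googlebot fetches them, sees the 404/410, and drops them sooner; then delete that sitemap. It's a nudge, not a guarantee. A minimal sketch that generates such a sitemap (the URL is a placeholder):

```python
from xml.sax.saxutils import escape

def dead_url_sitemap(urls):
    """Build a minimal XML sitemap listing only the dead URLs."""
    lines = ['<?xml version="1.0" encoding="UTF-8"?>',
             '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">']
    for url in urls:
        # escape() handles &, < and > so query-string URLs stay valid XML
        lines.append("  <url><loc>%s</loc></url>" % escape(url))
    lines.append("</urlset>")
    return "\n".join(lines)

xml = dead_url_sitemap(["https://alexanders.co.nz/old-page?id=1&x=2"])
print(xml)
```

Write the result to a file, upload it, submit it in Search Console, and remove it once the URLs have dropped out.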
-
What about creating a load of 301 redirects from the nonexistent URLs to still-active ones, and/or updating your 404 page to better inform users what happened to the "missing" pages? Either way, Google will stop indexing them after a short while.
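If you take the 301 route for 600+ URLs, it's easier to generate the server rules from an old-to-new mapping than to write them by hand. A sketch that emits Apache-style `Redirect 301` lines; the paths are invented, and the output format would need adapting for other servers:

```python
def redirect_rules(mapping):
    """mapping: old path -> new path; returns 'Redirect 301' config lines."""
    return ["Redirect 301 %s %s" % (old, new)
            for old, new in sorted(mapping.items())]

# Hypothetical old -> new path mapping.
rules = redirect_rules({
    "/old-services": "/services",
    "/2009/press-release": "/about/news",
})
print("\n".join(rules))
# Redirect 301 /2009/press-release /about/news
# Redirect 301 /old-services /services
```

Only redirect where a genuinely equivalent page exists; blanket-redirecting hundreds of dead URLs to the homepage tends to be treated like a soft 404 anyway.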
Related Questions
-
Google webcache of product page redirects back to product page
Hi all– I've legitimately never seen this before, in any circumstance. I just went to check the google webcache of a product page on our site (was just grabbing the last indexation date) and was immediately redirected away from google's cached version BACK to the site's standard product page. I ran a status check on the product page itself and it was 200, then ran a status check on the webcache version and sure enough, it registered as redirected. It looks like this is happening for ALL indexed product pages across the site (several thousand), and though organic traffic has not been affected it is starting to worry me a little bit. Has anyone ever encountered this situation before? Why would a google webcache possibly have any reason to redirect? Is there anything to be done on our side? Thanks as always for the help and opinions, y'all!
Intermediate & Advanced SEO | | TukTown1 -
Google Search Console Crawl Errors?
We are using Google Search Console to monitor crawl errors. It seems Google is listing errors that are not actual errors. For instance, it shows this as "Not found": https://tapgoods.com/products/tapgoods__8_ft_plastic_tables_11_available The page does not exist, but we cannot find any pages linking to it. It has a tab that shows Linked From, but if I look at the source of those pages, the link is not there. In this case, it is showing the front page (listed twice, both for http and https). Also, one of the pages it shows as linking to the non-existent page above is itself a non-existent page. We marked all the errors as fixed last week and then this week they came up again; 2/3 are the same pages we marked as fixed last week. Is this an issue with Google Search Console? Are we being penalized for a non-existent issue?
Intermediate & Advanced SEO | | TapGoods0 -
We possibly have internal links that link to 404 pages. What is the most efficient way to check our site's internal links?
We possibly have internal links on our site that point to 404 pages as well as links that point to old pages. I need to tidy this up as efficiently as possible and would like some advice on the best way to go about this.
Intermediate & Advanced SEO | | andyheath0 -
HTML5: Changing 'section' content to be 'main' for better SEO relevance?
We received an HTML5 recommendation that we should change on-page text copy contained in 'section' to be listed in 'main' instead, because this is supposedly better for SEO. We're questioning the need to ask developers to spend time on this purely for a perceived SEO benefit. Sure, maybe content in 'footer' may be seen as less relevant, but calling out 'section' as having less relevance than 'main'? Yes, it's true that engines evaluate where on-page content is located, but this level of granular focus seems unnecessary. That being said, more than happy to be corrected if there is actually a benefit. On a side note, 'main' isn't supported by older versions of IE and could cause browser incompatibilities (http://caniuse.com/#feat=html5semantic). Would love to hear others' feedback about this - thanks! 🙂
Intermediate & Advanced SEO | | mirabile0 -
Why Would This Old Page Be Penalized?
Here's an old page on a trustworthy domain with no apparent negative SEO activity according to OSE and ahrefs: http://www.gptours.com/Monaco-Grand-Prix They went from page 1 to page 13 for "monaco grand prix" within about 4 weeks. Week 2 we pulled out all the duplicate content in the history section. When rank slipped further, we put it back. Yet it's still moving down, while other pages on the website are holding strong. Next steps will be to add some schema.org/Event microformats, but beyond that, do you have any ideas?
Intermediate & Advanced SEO | | stevewiideman0 -
Best practice for removing indexed internal search pages from Google?
Hi Mozzers I know that it’s best practice to block Google from indexing internal search pages, but what’s best practice when “the damage is done”? I have a project where a substantial part of our visitors and income lands on an internal search page, because Google has indexed them (about 3 %). I would like to block Google from indexing the search pages via the meta noindex,follow tag because: Google Guidelines: “Use robots.txt to prevent crawling of search results pages or other auto-generated pages that don't add much value for users coming from search engines.” http://support.google.com/webmasters/bin/answer.py?hl=en&answer=35769 Bad user experience The search pages are (probably) stealing rankings from our real landing pages Webmaster Notification: “Googlebot found an extremely high number of URLs on your site” with links to our internal search results I want to use the meta tag to keep the link juice flowing. Do you recommend using the robots.txt instead? If yes, why? Should we just go dark on the internal search pages, or how shall we proceed with blocking them? I’m looking forward to your answer! Edit: Google have currently indexed several million of our internal search pages.
Intermediate & Advanced SEO | | HrThomsen0 -
Why will google not index my pages?
About 6 weeks ago we moved a subcategory out to become a main category, using all the same content. We also removed hundreds of old products and replaced them with new variation listings to remove duplicate content issues. The problem is Google will not index 12 critical pages, and our rankings have slumped for the keywords in those categories. What can I do to entice Google to index these pages?
Intermediate & Advanced SEO | | Towelsrus0 -
Google sees redirect when there isn't any?
I've posted a question previously regarding the very strange changes in our search positions here: http://www.seomoz.org/q/different-pages-ranking-for-search-terms-often-irrelevant New strange thing I've noticed - and a very disturbing one - it seems like Google has somehow glued two pages together. Or, in other words, it looks like Google sees a 301 redirect from one page to another. This has actually happened to several pages; I'll illustrate it with our Flash templates page. URL: http://www.templatemonster.com/flash-templates.php It has been #3 for 'Flash templates' in Google. Reasons why it looks like a redirect:
Reason #1: Now this http://www.templatemonster.com/logo-templates.php page is ranking instead of http://www.templatemonster.com/flash-templates.php Also, http://www.templatemonster.com/flash-templates.php is not in the index. That is what would typically happen if you had a 301 from the Flash templates page to the logo templates page.
Reason #2: If you search for cache:http://www.templatemonster.com/flash-templates.php Google will give the cached version of http://www.templatemonster.com/logo-templates.php!!! If you search for info:www.templatemonster.com/flash-templates.php you again get info on http://www.templatemonster.com/logo-templates.php instead!
Reason #3: In Google Webmaster Tools, when I look at the external links to http://www.templatemonster.com/logo-templates.php I see all the links from different sites which actually point to http://www.templatemonster.com/flash-templates.php listed as "Via this intermediate link: http://www.templatemonster.com/flash-templates.php". As I understand it, Google shows this "via intermediate link" when there's a redirect? That way, Google currently thinks that all the external links we have for Flash templates actually point to Logo templates. The point is we NEVER had any kind of redirect from http://www.templatemonster.com/flash-templates.php to http://www.templatemonster.com/logo-templates.php I've seen several similar situations on Google Help forums but they were never resolved. So, I wonder if anybody can explain how that could have happened, and what can be done to solve that problem?
Intermediate & Advanced SEO | | templatemonster0