De-indexing pagination
-
I have a custom-made blog with boatloads of undesirable URLs in Google's index, like this:
.com/resources?start=150
.com/resources?start=160
.com/resources?start=170

I've identified these as a source of duplicate title tags, and I've had my programmer set a noindex tag to go automatically on all of these undesirable URLs.
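The tag example didn't survive in the question, but it is presumably the standard robots meta directive, something like `<meta name="robots" content="noindex">` in the page `<head>`. Before asking Google to recrawl, it's worth confirming the tag is actually being served on those URLs; here is a minimal sketch of such a check using only Python's standard library (the sample markup is hypothetical):

```python
from html.parser import HTMLParser

class RobotsMetaFinder(HTMLParser):
    """Collect the content values of any <meta name="robots"> tags."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name", "").lower() == "robots":
            self.directives.append(a.get("content", "").lower())

def is_noindexed(html: str) -> bool:
    """Return True if the page carries a robots meta tag containing noindex."""
    finder = RobotsMetaFinder()
    finder.feed(html)
    return any("noindex" in d for d in finder.directives)

page = '<html><head><meta name="robots" content="noindex, follow"></head></html>'
print(is_noindexed(page))  # → True
```

Running this over each ?start= URL's response body (fetched however you like) would confirm whether the programmer's change actually reached every paginated page.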
However, a site: search in Google shows the URLs are still indexed, even though the tag went up a few weeks ago.
How do I get Google to remove these URLs from the index? I'm aware that Google Search Console has an answer here: https://support.google.com/webmasters/topic/4598466?authuser=1&authuser=1&rd=1 but it says that blocking with meta tags should work.
Do I just get Google to crawl the URLs again so it sees the tag and then deindexes them? Or is there another way I'm missing?
-
Adding a meta noindex tag can mean it takes a few weeks for a page to fall out of the index. These pages probably aren't doing you much harm, so simply waiting for them to drop out is probably fine (although I would update the tag content to "noindex, follow" to help Google crawl through to the other noindexed pages). If you really want them out of the index faster, you have two options in Google Search Console: the "Remove URLs" function under Google Index, which will temporarily remove them from the index while Google registers the noindex tags, or the Fetch + Render tool followed by Submit URLs, which will prompt Google to come back, recrawl your pages, and find the noindex tag.
-
You could also use the URL parameter settings in Google Search Console and Bing Webmaster Tools, provided all ?start= URLs can be treated the same way by Google.
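The parameter-settings route works because every ?start= variant is just a paginated view of the same listing. As a hedged illustration (example.com is a placeholder for the real domain), the mapping you'd be describing to Google amounts to stripping the pagination parameter so every variant collapses to one URL:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def strip_param(url: str, param: str = "start") -> str:
    """Drop a pagination parameter so every variant maps to one canonical URL."""
    parts = urlsplit(url)
    # Keep every query pair except the pagination parameter.
    query = [(k, v) for k, v in parse_qsl(parts.query) if k != param]
    return urlunsplit(parts._replace(query=urlencode(query)))

print(strip_param("https://example.com/resources?start=150"))
# → https://example.com/resources
```

The same function applied to ?start=160 or ?start=170 yields the identical canonical URL, which is exactly the "treat these the same way" condition the parameter settings require.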
Related Questions
-
How to solve JavaScript paginated content for SEO
On our blog listings page, we limit the number of blogs that can be seen on the page to 10. However, all of the blogs are loaded in the HTML of the page, and page links are added to the bottom. Example page: https://tulanehealthcare.com/about/newsroom/ When a user clicks to the next page, it simply filters the content on the same page for the next group of postings and displays those to the user. Nothing in the HTML or URL changes; this is all done via JavaScript. So the question is: does Google consider this hidden content, because all listings are in the HTML but only a handful are shown on the page at once? Or is Googlebot smart enough to know that the content is being filtered by JavaScript pagination? If this is indeed a problem, we have two possible solutions: not building the HTML for the next pages until the user clicks 'next', or adding parameters to the URL to show the content has changed. Any other solutions that would be better for SEO?
Intermediate & Advanced SEO | MJTrevens1
-
Pagination & View all option on Ecommerce site
Hi, I want to add a "view all" option to our category pages, as recommended by Google, and to make it easier for the customer. I'm not sure how long this will take our developer, but is this an action worth doing from an SEO point of view? Will it add value? For me it has value, but in terms of SEO actions, should I prioritise it? I have issues with our pagination and think this would be the quickest way to solve them: http://www.key.co.uk/en/key/workbenches Any help appreciated 🙂
Intermediate & Advanced SEO | BeckyKey0
-
Site deindexed after HTTPS migration + possible penalty due to spammy links
Hi all, we've recently migrated a site from HTTP to HTTPS and saw the majority of pages drop out of the index: https://www.relate.org.uk/ One of the most extreme deindexation problems I've ever seen, but there doesn't appear to be anything obvious on-page that is causing the issue. (Unless I've missed something, please tell me if I have!) I had initially discounted any off-page issues due to the lack of a manual action in Search Console; however, after looking into their link profile I spotted 100 spammy porn .xyz sites all linking (see example image). There didn't appear to be any historic disavow files uploaded in the non-HTTPS Search Console accounts. Any on-page suggestions, or should we just play the waiting game with the new disavow file?
Intermediate & Advanced SEO | CTI_Digital0
-
Pagination with rel="next" and rel="prev"
Hi guys, just wondering: can anyone recommend any tools or good ways to check whether rel="next" and rel="prev" attributes have been implemented properly across a large ecommerce site? Cheers.
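I'm not aware of one canonical tool, but the core check is easy to script yourself: fetch each paginated URL, pull out the rel="next" and rel="prev" link tags, and verify the chain is consistent. A minimal standard-library sketch of the extraction step (the URLs below are placeholders):

```python
from html.parser import HTMLParser

class RelLinkFinder(HTMLParser):
    """Collect href values of <link rel="next"> and <link rel="prev"> tags."""
    def __init__(self):
        super().__init__()
        self.links = {}

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and a.get("rel") in ("next", "prev"):
            self.links[a["rel"]] = a.get("href")

page = ('<head>'
        '<link rel="prev" href="https://example.com/shop?page=1">'
        '<link rel="next" href="https://example.com/shop?page=3">'
        '</head>')

finder = RelLinkFinder()
finder.feed(page)
print(finder.links["next"])  # → https://example.com/shop?page=3
```

Run over a full crawl, you'd assert that page N's rel="next" points at page N+1 and that page N+1's rel="prev" points back, flagging any page where the chain breaks.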
Intermediate & Advanced SEO | jayoliverwright0
-
"No index" page still shows in search results and paginated pages shows page 2 in results
I have "noindex, follow" on some pages, which I set 2 weeks ago. Today I see one of these pages showing in Google's search results. I am also using rel="next"/"prev" on paginated pages, yet page 2 of a string of pages showed up in results before page 1. What could be the issue?
Intermediate & Advanced SEO | khi50
-
How should I handle pagination on an e-commerce site?
I am looking at one of our category pages, and it has 25 additional pages for a total of 26. The URL for the first page looks good, but the next one ends with ?SearchText=768&SearchType=Category, and all additional pages have the same URL. My first concern was duplicate content, but after looking, no pages after the first are even indexed. What is the best way to handle this?
Intermediate & Advanced SEO | EcommerceSite0
-
Best Practices for Pagination on E-commerce Site
One of my e-commerce clients has a script enabled on their category pages that allows more products to automatically be displayed as you scroll down. They use this instead of page 1, 2, and a view all. I'm trying to decide if I want to insist that they change back to the traditional method of multiple pages with a view all button, and then implement rel="next", rel="prev", etc. I think the current auto method is disorienting for the user, but I can't figure out if it's the same for the spiders. Does anyone have any experience with this, or thoughts? Thanks!
Intermediate & Advanced SEO | smallbox0
-
Should we deindex duplicate pages?
I work on an education website. We offer programs that run up to 6 times per year. At the moment we have a webpage for each instance of a program, but that's causing duplicate content issues. We're reworking the pages so the majority of the content will be on one page, but we'll still have to keep the application details as separate pages. 90% of the time, the application details are going to be nearly identical, so I'm worried these pages will still be seen as duplicate content. My question is: should we deindex these pages? We don't particularly want people landing on our application page without seeing the other details of the program anyway. But is there a problem with deindexing such a large chunk of your site that I'm not thinking of? Thanks, everyone!
Intermediate & Advanced SEO | UWPCE0