Problems with too many indexed pages
-
A client of ours has not been able to rank very well for the last few years. They are a big brand in our country, with more than 100 offline stores and plenty of inbound links.
Our main issue is that they have too many indexed pages. Before we started, they had around 750,000 pages in the Google index. After a bit of work we got it down to 400,000-450,000. During our latest push we used the robots meta tag with "noindex, nofollow" on all pages we wanted out of the index, along with a canonical tag pointing to the correct URL - nothing was done in robots.txt to block crawlers from the pages we wanted out.
Our aim is to get it down to roughly 5,000 pages; they just passed 5,000 products plus 100 categories.
I added this about 10 days ago, but nothing has happened yet. Is there anything I can do to speed up the process of getting all these pages out of the index?
The site is vita.no if you want to have a look!
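If it helps, here is roughly how we verify that a given page carries the intended directives - a quick sketch using only Python's standard library (the example markup is made up):

```python
from html.parser import HTMLParser

class HeadTagChecker(HTMLParser):
    """Collects the robots meta directive and canonical href from a page's HTML."""
    def __init__(self):
        super().__init__()
        self.robots = None
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name", "").lower() == "robots":
            self.robots = a.get("content", "").lower()
        if tag == "link" and a.get("rel", "").lower() == "canonical":
            self.canonical = a.get("href")

def check_page(html):
    parser = HeadTagChecker()
    parser.feed(html)
    return parser.robots, parser.canonical

# Example: a filtered category page that should be kept out of the index
html = '''<head>
  <meta name="robots" content="noindex, nofollow">
  <link rel="canonical" href="http://www.example.com/sub/">
</head>'''
print(check_page(html))  # ('noindex, nofollow', 'http://www.example.com/sub/')
```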
-
Great! Please let us know how it goes so we can all learn more about it.
Thanks!
-
Thanks for that! What you are saying makes sense, so I'm going to go ahead and give it a try.
-
"Google: Do Not No Index Pages With Rel Canonical Tags"
https://www.seroundtable.com/noindex-canonical-google-18274.html
This is still being debated and I'm not saying it is "definitely" your problem. But if you're trying to figure out why those noindexed pages aren't coming out of the index, this could be one thing to look into.
John Mueller (see screenshot below) is a Webmaster Trends Analyst for Google.
Good luck.
-
Isn't the whole point of using canonical to tell Google which page is the original?
So if you have a category at shop.com/sub..
Using filters and/or pagination you then get:
shop.com/sub?p=1
shop.com/sub?color=blue.. and so on! Both of those pages need a canonical tag, and we don't want either of them indexed, so by using both canonical and noindex we tell Google: "don't index this page (noindex); here is the original version of it (canonical)".
Or did I misunderstand something?
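For example, under our current setup a paginated page carries both directives at once (the URLs are illustrative):

```html
<!-- shop.com/sub?p=1 : keep out of the index, but point to the original -->
<meta name="robots" content="noindex, nofollow">
<link rel="canonical" href="http://shop.com/sub">
```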
-
Hello Inevo,
Most of the time when this happens it's just because Google hasn't gotten around to recrawling the pages and updating their index after seeing the new robots meta tag. It can take several months for this to happen on a large site. Submit an XML sitemap and/or create an HTML sitemap that makes it easy for them to get to these pages if you need it to go faster.
I had a look and noticed some conflicting instructions that Google could be having a problem with.
The paginated version (e.g. http://www.vita.no/duft?p=2) of the page has a rel canonical tag pointing to the first page (e.g. http://www.vita.no/duft/). Yet it also has a noindex tag while the canonical page has an index tag, and each page has its own unique title (Side 2 ... Side 3 | ...). I would remove the rel canonical tag on the paginated pages, since they probably don't have any PageRank worth passing to the canonical page. That way it is even clearer to Google that the canonical page is to be indexed and the others are not - instead of saying they are the same page. The same is true of filter pages: http://www.vita.no/gavesett/herre/filter/price-400-/ .
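To make that concrete, the head of each page type would end up roughly like this (a sketch of the setup described above, not a definitive template):

```html
<!-- http://www.vita.no/duft/ : the canonical page, keep indexed -->
<meta name="robots" content="index, follow">
<link rel="canonical" href="http://www.vita.no/duft/">

<!-- http://www.vita.no/duft?p=2 : noindex only, no canonical back to page 1 -->
<meta name="robots" content="noindex, follow">
```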
I don't know if that has anything to do with your issue of index bloat, but it's worth a try. I did find some paginated pages in the index.
There also appears to be about 520 blog tag pages indexed. I typically set those to be noindex,follow.
Also remove all paginated pages and any other page that you don't want indexed from your XML sitemaps if you haven't already.
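Pruning a large sitemap can be scripted. Here is a rough sketch using Python's standard library, assuming the standard sitemap namespace and that the unwanted URLs are recognizable by pattern:

```python
import xml.etree.ElementTree as ET

NS = "http://www.sitemaps.org/schemas/sitemap/0.9"

def prune_sitemap(xml_text, is_unwanted):
    """Remove <url> entries whose <loc> matches the unwanted predicate."""
    ET.register_namespace("", NS)
    root = ET.fromstring(xml_text)
    for url in list(root):
        loc = url.find(f"{{{NS}}}loc").text
        if is_unwanted(loc):
            root.remove(url)
    return ET.tostring(root, encoding="unicode")

def is_unwanted(loc):
    # paginated (?p=) and filter URLs should not be in the sitemap
    return "?p=" in loc or "/filter/" in loc

sitemap = f'''<urlset xmlns="{NS}">
  <url><loc>http://www.vita.no/duft/</loc></url>
  <url><loc>http://www.vita.no/duft?p=2</loc></url>
  <url><loc>http://www.vita.no/gavesett/herre/filter/price-400-/</loc></url>
</urlset>'''
print(prune_sitemap(sitemap, is_unwanted))  # the ?p= and /filter/ entries are gone
```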
At least for the filter pages, since /filter/ is its own directory, you can use the URL removal tool in GWT. It does have a directory-level removal feature. Of course there are only 75 of these indexed at this moment.
-
My advice would be to upload a fresh sitemap to Google Webmaster Tools. I'm not sure about the timing, but I will second Donna: it will take time for the pages to drop out of the Google index.
There is one hack that I used for a single page on my website, but I'm not sure it will scale to 1,000+ pages.
I removed a page from my website using Google's temporary removal request. It kicked the page out of the index for 90 days, and in the meantime I added the URL to the robots.txt file, so it was gone quickly and never returned to the Google listings.
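As a sanity check, Python's standard library can simulate whether a crawler obeying robots.txt is blocked from a given URL (the path here is just an example):

```python
from urllib import robotparser

# robots.txt rules like the ones described above (the path is illustrative)
rules = """User-agent: *
Disallow: /removed-page/
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

print(rp.can_fetch("*", "http://example.com/removed-page/"))  # False
print(rp.can_fetch("*", "http://example.com/other-page/"))    # True
```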
Hope this helps.
-
Hi lnevo,
I had a similar situation last year and am not aware of a faster way to get pages deindexed. You're feeding WMT an updated sitemap, right?
It took 8 months for the excess pages to get dropped off my client's site. I'll be listening to hear if anyone knows a faster way.