URL Parameter Handling In GWT to Treat Overindexation - how aggressive?
-
Hi,
My client recently launched a new site and their index went from about 20K to about 80K pages, which is severe over-indexation.
I believe this was caused by parameter handling: some category pages now return 700 results for "site:domain.com/category1", and apart from the top result they are all parameterized URLs being indexed.
My question is: how aggressive should I be in blocking these parameters in Google Webmaster Tools? Currently, everything is set to 'Let Googlebot decide'.
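Before changing any GWT settings, it can help to quantify which parameters are actually generating the duplicates. A minimal sketch (hypothetical URLs, e.g. from a crawler export or server log) that tallies query parameters:

```python
from urllib.parse import urlparse, parse_qs
from collections import Counter

def count_parameters(urls):
    """Tally how often each query parameter appears across a list of URLs."""
    counts = Counter()
    for url in urls:
        query = urlparse(url).query
        for param in parse_qs(query):
            counts[param] += 1
    return counts

# Example: URLs exported from a crawl of the category pages
urls = [
    "http://domain.com/category1",
    "http://domain.com/category1?sort=price",
    "http://domain.com/category1?sort=price&page=2",
    "http://domain.com/category1?page=3",
]
print(count_parameters(urls))  # Counter({'sort': 2, 'page': 2})
```

The parameters with the highest counts are the ones worth addressing first, whatever method you choose.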
-
Hey There
I would use a robots meta noindex on them (except the top page, of course) and use rel="prev"/rel="next" to show they are paginated.
I'd prefer that to using WMT. Also, the WMT crawl settings will stop the crawling but won't remove the pages from the index. Plus, WMT only covers Google, not other engines like Bing. Not that Bing matters much, but it's always better to have a universal solution.
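For reference, the combination described above might look like this in the head of a paginated parameter page (hypothetical URLs and parameter names):

```html
<!-- On page 2 of a paginated category, e.g. /category1?page=2 -->
<!-- noindex keeps it out of the index; follow lets bots reach the products -->
<meta name="robots" content="noindex, follow">

<!-- Declare its position in the paginated series -->
<link rel="prev" href="http://domain.com/category1?page=1">
<link rel="next" href="http://domain.com/category1?page=3">
```

The top page of the series would carry neither the noindex nor a rel="prev", since it is the page you want ranking.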
-Dan
-
Hello Search Guys,
Here is some food for thought taken from: http://www.quora.com/Does-Google-limit-the-number-of-pages-it-indexes-for-a-particular-site
Summary:
"Google says they crawl the web in "roughly decreasing PageRank order" and thus, pages that have not achieved widespread link popularity, particularly on large, deep sites, may not be crawled or indexed."
"Indexation
There is no limit to the number of pages Google may index (meaning available to be served in search results) for a site. But just because your site is crawled doesn't mean it will be indexed.
Crawl
The ability, speed and depth with which Google crawls your site and retrieves pages can depend on a number of factors: PageRank, XML sitemaps, robots.txt, site architecture, status codes and speed."
"For a zero-backlink domain with 80,000+ pages, in conjunction with rel=canonical and an XML sitemap (you do submit a sitemap, don't you?), after submitting the domain to Google for a crawl, a little less than 10K pages remained in the index. A few crawls later this was reduced to a mere 250 (very good job on Google's side).
This leads me to believe the indexation cap for a newer site with low to zero PageRank/authority is around 10K."
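Since the quote leans on XML sitemaps, here is a minimal fragment of the kind being referred to (hypothetical URLs and dates) - list only the canonical URLs you want indexed, never the parameterized variants:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://domain.com/category1</loc>
    <lastmod>2014-01-15</lastmod>
  </url>
  <url>
    <loc>http://domain.com/category2</loc>
    <lastmod>2014-01-15</lastmod>
  </url>
</urlset>
```

A clean sitemap gives Google a clear signal about which of the 80K indexed URLs you consider canonical.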
Another interesting article: http://searchenginewatch.com/article/2062851/Google-Upping-101K-Page-Index-Limit
Hope this helps. The simple answer is to be as aggressive as necessary: limit crawling to the pages you actually need, removing the unneeded URLs and leaving only the needed ones.
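If you do decide to block the parameterized URLs at crawl time, a robots.txt pattern rule is one option (hypothetical parameter names shown). One caveat: this prevents crawling but does not remove already-indexed URLs, and it cannot be combined with a meta noindex approach, since noindex only works on pages Google is allowed to crawl.

```text
# Block parameterized category URLs (robots.txt)
User-agent: *
Disallow: /category1/?
Disallow: /*?sort=
Disallow: /*?page=
```

Pick one mechanism per URL set: robots.txt to stop crawling, or meta noindex to remove from the index, not both on the same pages.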