Removing a massive number of noindex, follow pages that are not crawled
-
Hi,
We have stackable filters on some of our pages (i.e., ?filter1=a&filter2=b, etc.). Those stacked-filter pages are set to "noindex, follow". They were created to facilitate the indexing of the items listed on them.
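For reference, each of those filter pages carries the standard robots meta tag in its head:

```html
<meta name="robots" content="noindex, follow">
```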
After analysing the logs, we know that search engines do not crawl those stacked-filter pages.
Would blocking those pages (by loading their links via AJAX, for example) help our crawl rate or not? In other words, does removing links that are already not crawled help the crawl rate of the rest of our pages?
My assumption here is that search engines see those links but discard them because the pages are too deep in our architecture, and that by removing the links we would help search engines focus on the rest of our pages. We don't want to waste our efforts removing those links if there will be no impact.
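To illustrate what we mean by "loading their link in AJAX", here is a minimal sketch (the endpoint and response shape are hypothetical): the filter links would be absent from the server-rendered HTML and only injected client-side after load.

```html
<div id="filter-links"></div>
<script>
  // Fetch the stacked-filter links after the page loads and inject
  // them client-side, so they never appear in the raw HTML response.
  // NOTE: the endpoint and response shape here are hypothetical.
  fetch('/api/filter-links?page=' + encodeURIComponent(location.pathname))
    .then(function (res) { return res.json(); })
    .then(function (links) {
      var container = document.getElementById('filter-links');
      links.forEach(function (link) {
        var a = document.createElement('a');
        a.href = link.url;          // e.g. ?filter1=a&filter2=b
        a.textContent = link.label; // e.g. "Red"
        container.appendChild(a);
      });
    });
</script>
```

(One caveat we are aware of: Googlebot can render JavaScript, so this hides the links from the initial HTML fetch but not necessarily from the rendered page.)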
Thanks
-
Personally, I don't agree with setting internal filter URLs to nofollow. I set noindex, as you have done, and add the filter parameters in Search Console > Crawl > URL Parameters.
For the option "Which URLs with this parameter should Googlebot crawl?" you can set "No URLs" (if the filters are uniform throughout the site).
"No URLs: Googlebot won't crawl any URLs containing this parameter. This is useful if your site uses many parameters to filter content. For example, telling Googlebot not to crawl URLs with less significant parameters such as
pricefrom
andpriceto
(likehttp://www.examples.com/search?category=shoe&brand=nike&color=red&size=5&pricefrom=10&priceto=1000
) can prevent the unnecessary crawling of content already available from a page without those parameters (likehttp://www.examples.com/search?category=shoe&brand=nike&color=red&size=5)"
-
noindex means that crawlers can still visit the page (using crawl budget). To prevent crawlers from accessing those pages, you would need to link to them with rel="nofollow" and block them via robots.txt.
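A minimal sketch of that combination, assuming your filter parameters are literally named filter1, filter2, etc. (substitute your real parameter names; Googlebot honours * wildcards in robots.txt, but not every crawler does):

```
# robots.txt — stop crawlers from fetching stacked-filter URLs
User-agent: *
Disallow: /*?*filter1=
Disallow: /*?*filter2=
```

And the internal links pointing at those URLs would carry the nofollow attribute:

```html
<a href="/shoes?filter1=red&filter2=nike" rel="nofollow">Red Nike shoes</a>
```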
Overall, if those pages aren't being crawled currently, then they aren't consuming your crawl budget, since they aren't being visited. However, if you build more authority for your website, your crawl budget will grow, so crawlers might start visiting those pages again.