Crawl budget
-
I am a believer in this concept: showing Google fewer pages will increase the importance of the ones that remain.
Here is my question: I manage a website with millions of pages and high organic traffic (though lower than before).
I do believe that too many pages are being crawled. There are pages that I do not need Google to crawl or follow, but noindex,follow does not save on crawl budget, and deleting those pages is not possible. If I disallow them in robots.txt, I lose the internal links on those pages that help my important pages. Any advice would be appreciated.
-
Forgive me, I just wrote a longer reply but it was lost when I hit post. Still, I think this information will help you get much better acquainted with crawl budget.
One thing that may not look like part of your domain, but should not be treated as an external link, is a content delivery network (CDN) URL that has not been rewritten to a traditional hostname such as cdn.example.com. Raw CDN URLs can look like:
- https://example.cachefly.net/images/i-love-cats.jpg
- https://example.global.prod.fastly.net/images/i-love-cats-more.png
URLs like the ones shown above should be considered part of your domain and treated as such, even though they look external; these are just two examples of the literally thousands of CDN hostname variants.
They are often left unrewritten because putting your own hostname on an HTTPS-encrypted content delivery network can be very expensive.
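To make that concrete, here is a minimal Python sketch, with a made-up domain and asset list, that flags asset URLs served from a raw CDN hostname rather than from your own hostname:

```python
from urllib.parse import urlparse

# Hypothetical values for illustration only.
OWN_DOMAIN = "example.com"
asset_urls = [
    "https://example.cachefly.net/images/i-love-cats.jpg",
    "https://example.global.prod.fastly.net/images/i-love-cats-more.png",
    "https://cdn.example.com/images/rewritten.jpg",
]

for url in asset_urls:
    host = urlparse(url).hostname or ""
    # A host counts as "ours" if it is the domain itself or a subdomain of it.
    is_own = host == OWN_DOMAIN or host.endswith("." + OWN_DOMAIN)
    label = "own hostname" if is_own else "raw CDN hostname (treat as part of your domain anyway)"
    print(f"{url} -> {label}")
```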
Crawl budget information
- http://www.stateofdigital.com/guide-crawling-indexing-ranking/
- https://www.deepcrawl.com/case-studies/elephate-fixing-deep-indexation-issues/
- https://builtvisible.com/log-file-analysis/ (a short log-analysis sketch follows this list)
- https://www.deepcrawl.com/knowledge/best-practice/the-seo-ruler/
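Since log file analysis is one of the resources above, here is a minimal Python sketch of the idea; the log path and the combined log format are assumptions, so adjust both to your server. It counts Googlebot requests per top-level URL section, which is a quick way to see where crawl budget is actually being spent:

```python
import re
from collections import Counter
from urllib.parse import urlparse

# Assumed path and Apache/Nginx combined log format; adjust for your server.
LOG_PATH = "access.log"
LINE_RE = re.compile(r'"(?:GET|HEAD) (?P<path>\S+) HTTP/[^"]*".*?"(?P<agent>[^"]*)"$')

hits_per_section = Counter()
with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
    for line in log:
        match = LINE_RE.search(line)
        if not match or "Googlebot" not in match.group("agent"):
            continue
        path = urlparse(match.group("path")).path
        # Group by first path segment, e.g. /products/123 -> /products
        section = "/" + path.strip("/").split("/")[0] if path.strip("/") else "/"
        hits_per_section[section] += 1

for section, hits in hits_per_section.most_common(20):
    print(f"{section}\t{hits}")
```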
Tools for Crawl budget
I hope this is more helpful,
Thomas
-
Nope, having more or fewer external links will have no effect on the crawl budget spent on your site. Your crawl budget is only spent on yourdomain.com. I just meant that Google's crawler will follow external links, but not until it has spent its crawl budget on your site.
Google isn't going to give you a metric showing your crawl budget, but you can estimate it by going to Google Search Console > Crawl > Crawl Stats. That'll show you how many pages Googlebot has crawled per day.
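As a rough back-of-the-envelope sketch (all figures here are made up; substitute your own Crawl Stats numbers), you can turn pages-crawled-per-day into an estimated full-recrawl cycle:

```python
# Hypothetical figures for illustration; substitute your own from Crawl Stats.
total_pages = 3_000_000          # pages you actually want indexed
avg_crawled_per_day = 45_000     # "Pages crawled per day" average from Crawl Stats
wasted_share = 0.40              # rough share of crawls hitting pages you don't care about

useful_crawls_per_day = avg_crawled_per_day * (1 - wasted_share)
full_recrawl_days = total_pages / useful_crawls_per_day

print(f"Useful crawls per day: {useful_crawls_per_day:,.0f}")
print(f"Estimated days for a full recrawl of important pages: {full_recrawl_days:,.1f}")
```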
-
I see, thank you.
I'm guessing if we have these external links across the site, it'll cause more harm than good for us if they use up some crawl budget on every page?
Is there any way to find out what the crawl budget is?
-
To be clear, Googlebot will find those external links and leave your site, but not until it has used up its crawl budget on your site.
-
Nope.
-
I'd like to find out if this crawl budget applies to external sites - we link to sister companies in the footer.
Will Google bot find these external links and leave our site to go and crawl these external sites?
Thanks!
-
You can also reduce crawl budget issues using Google's URL parameter tool; a small sketch of the parameter-stripping idea follows the links below.
https://www.google.com/webmasters/tools/crawl-url-parameters
http://www.blindfiveyearold.com/crawl-optimization
http://searchenginewatch.com/sew/news/2064349/crawl-index-rank-repeat-a-tactical-seo-framework
http://searchengineland.com/how-i-think-crawl-budget-works-sort-of-59768
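To illustrate the parameter-handling idea, here is a minimal Python sketch; the list of tracking parameters is an assumption, so extend it for your own analytics setup. It normalizes a URL by dropping tracking parameters, the same normalization you would want reflected in canonicals and internal links:

```python
from urllib.parse import urlparse, urlencode, urlunparse, parse_qsl

# Assumed list of tracking parameters; extend it for your own setup.
TRACKING_PARAMS = {"cm_mmc", "utm_source", "utm_medium", "utm_campaign",
                   "utm_term", "utm_content", "gclid"}

def strip_tracking_params(url: str) -> str:
    """Return the URL with known tracking parameters removed."""
    parts = urlparse(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
            if k not in TRACKING_PARAMS]
    return urlunparse(parts._replace(query=urlencode(kept)))

print(strip_tracking_params(
    "https://www.example.com/category/dresses/clothes-dresses.jsp?cm_mmc=Email--Promo&color=red"
))
# -> https://www.example.com/category/dresses/clothes-dresses.jsp?color=red
```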
-
What's the background of your conclusion? What's the logic?
I am asking because my understanding of crawl budget is that if you waste it you risk 1) slowing down recrawl frequency on a per-page basis and 2) Google's crawler giving up on the site before it has crawled all of your pages.
Are you splitting your sitemap into sub-sitemaps? That's a good way to spot groups/categories of pages being ignored by Google's crawler.
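A minimal Python sketch of the sub-sitemap idea, with made-up categories, URLs, and file names: one sitemap per category plus a sitemap index, so you can compare submitted vs. indexed counts per group in Search Console:

```python
from xml.sax.saxutils import escape

# Hypothetical categories and URLs; replace with your own data source.
urls_by_category = {
    "countries": ["https://www.example.com/spain/", "https://www.example.com/italy/"],
    "cities": ["https://www.example.com/spain/madrid/", "https://www.example.com/italy/rome/"],
}

sitemap_files = []
for category, urls in urls_by_category.items():
    filename = f"sitemap-{category}.xml"
    entries = "\n".join(f"  <url><loc>{escape(u)}</loc></url>" for u in urls)
    with open(filename, "w", encoding="utf-8") as f:
        f.write('<?xml version="1.0" encoding="UTF-8"?>\n'
                '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
                f"{entries}\n</urlset>\n")
    sitemap_files.append(filename)

# Sitemap index pointing at each per-category sitemap.
index_entries = "\n".join(
    f"  <sitemap><loc>https://www.example.com/{name}</loc></sitemap>" for name in sitemap_files
)
with open("sitemap-index.xml", "w", encoding="utf-8") as f:
    f.write('<?xml version="1.0" encoding="UTF-8"?>\n'
            '<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
            f"{index_entries}\n</sitemapindex>\n")
```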
-
If you want to test this, identify a section of your site that you can disallow via robots.txt and then measure the corresponding changes. Proceed section by section based on the results. There are so many variables at play that I don't think you'll get an answer anywhere near as precise as testing your specific situation.
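Before rolling out such a disallow, it can help to sanity-check which URLs the proposed rule would actually block. Here is a minimal sketch using Python's standard library; the rule and sample URLs are hypothetical, and note that this parser only handles simple path prefixes, not Google's wildcard extensions:

```python
from urllib.robotparser import RobotFileParser

# Proposed robots.txt rules for a hypothetical section to disallow.
proposed_rules = [
    "User-agent: *",
    "Disallow: /archive/",
]

rp = RobotFileParser()
rp.parse(proposed_rules)

sample_urls = [
    "https://www.example.com/archive/2010/old-post/",
    "https://www.example.com/products/widget-123/",
]

for url in sample_urls:
    allowed = rp.can_fetch("Googlebot", url)
    print(f"{'ALLOW' if allowed else 'BLOCK'} {url}")
```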
Related Questions
-
Client has an inexplicable jump in crawled pages being reported in Google Search Console
Recently a client of mine noticed an inexplicable jump in crawled pages being reported in Google Search Console. We researched the following culprits and found nothing:
- Rel=canonical tags are in place
- No SSL/non-SSL duplication
- We used a tool to extrapolate search query page data from Google Search Insights; nothing unusual
- No dynamic pages being created on the website
- All necessary landing pages are in the XML sitemap
Could this be a glitch in GSC? We are wondering what the heck is going on.
-
Best way to link to 1000 city landing pages from index page in a way that google follows/crawls these links (without building country pages)?
Currently we have direct links to the top 100 country and city landing pages on our index page of the root domain.
I would like to add on the index page, for each country, a "more cities" link which then dynamically loads (without reloading the page and without redirecting to another page) a list of links to all cities in that country.
I do not want to dilute the "link juice" flowing to my top 100 country and city landing pages from the index page.
I would still like Google to be able to crawl and follow these links to cities that I load dynamically later. In this particular case a typical site hierarchy of country pages with links to all cities is not an option. Any recommendations on how best to implement this?
-
Crawl rate drop
Hi guys, I have a crawl rate drop in Webmaster Tools and can't figure out why. In the last month I removed a lot of duplicate pages that we don't need anymore; there were at least 1.5 million of them. Could this be the cause?
-
URL Parameter Being Improperly Crawled & Indexed by Google
Hi All, We just discovered that Google is indexing a subset of our URLs embedded with our analytics tracking parameter. For the search "dresses" we are appearing in position 11 (page 2, rank 1) with the following URL: www.anthropologie.com/anthro/category/dresses/clothes-dresses.jsp?cm_mmc=Email--Anthro_12--070612_Dress_Anthro-_-shop You'll note that "cm_mmc=Email" is appended. This is causing our analytics (CoreMetrics) to misattribute this traffic and revenue to Email rather than SEO. A few questions: 1) Why is this happening? This is an email parameter from June 2012 and we don't have an email-specific landing page embedded with this parameter. Somehow Google found and indexed this page with these tracking parameters. Has anyone else seen something similar happening?
2) What is the recommended method of “politely” telling Google to index the version without the tracking parameters? Some thoughts on this:
a. Implement a self-referencing canonical on the page.
- This is done, but we have some technical issues with the canonical due to our ecommerce platform (ATG). Even though page source code looks correct, Googlebot is seeing the canonical with a JSession ID.
b. Resubmit both URLs via the WMT Fetch feature, hoping that Google recognizes the canonical.
- We did this, but given the canonical issue it won’t be effective until we can fix it.
c. URL handling change in WMT
- We made this change, but it didn’t seem to fix the problem
d. 301 or noindex the version with the email tracking parameters
- This seems drastic and I'm concerned that we'd lose ranking on this very strategic keyword. Thoughts? Thanks in advance, Kevin
-
Best way to fix 404 crawl errors caused by Private blog posts in WordPress?
Going over the Moz crawl error report and WMT's crawl errors for a new client site, I found 44 high-priority crawl errors (404 Not Found). Those 44 blog pages were set to Private mode (WordPress theme), which is causing the 404 issue.
I was reviewing the blog content for those 44 pages to see why those 2010 blog posts were set to private mode. I noticed that all 44 posts were pretty much copied from other external blog posts, so I'm thinking the previous agency set those pages to private mode to avoid getting hit for duplicate content issues. All blog posts published after 2011 looked like unique, non-scraped content. So my question is: what is the best way to fix the issue caused by these 44 pages?
A. Remove those 44 blog posts that used verbatim scraped content from other external blogs.
B. Update the content on each of those 44 blog posts, then set them to Public mode instead of Private.
C. ? (open to recommendations)
I didn't find any external links pointing to any of those 44 blog pages, so I was considering removing them, but I'm not sure whether that would affect the site in any way. Open to recommendations before making a decision...
Thanks
-
Google Crawl Rate and Cached version - not updated yet :(
Hi, I've noticed that Google is not recognizing/crawling the latest changes on pages on my site; the last update shown when viewing the cached version in Google's results is over 2 months ago. So, do I use Fetch as Googlebot to force an update? Or do I remove the page's cached version via GWT's Remove URLs tool? Thanks, B
-
How to Disallow Specific Folders and Sub Folders for Crawling?
Today I checked the indexing of my website in Google and found a very interesting result (see the "Google Search Result" link). I am aware of the robots.txt file and could disallow the images folder to solve this issue, but that may block my images from appearing in Google Image Search. So, how can I fix this issue?
-
How to prevent Google from crawling our product filter?
Hi All, We have a crawler problem on one of our sites, www.sneakerskoopjeonline.nl. On this site, visitors can specify criteria to filter the available products. These filters are passed as HTTP GET arguments, and the number of possible filter URLs is virtually limitless. In order to prevent duplicate content, or an insane number of pages in the search indices, our software automatically adds noindex, nofollow and noarchive directives to these filter result pages. However, we're unable to get crawlers (Google in particular) to ignore these URLs. We've already changed the on-page filter HTML to JavaScript, hoping this would cause the crawler to ignore it, but it seems that Googlebot executes the JavaScript and crawls the generated URLs anyway. What can we do to prevent Google from crawling all the filter options? Thanks in advance for the help. Kind regards, Gerwin