Increase in pages crawled per day
-
What does it mean when GWT abruptly jumps from 15k to 30k pages crawled per day?
I am used to seeing spikes: a 10k average, with 50k pages crawled a couple of times per month.
But in this case, 10 days ago it moved from 15k to 30k per day and it's staying there. I know it's a good sign: the crawler is crawling more pages per day, so it's picking up changes more often. But I have no idea why it's doing it. What good signals usually drive the Google crawler to increase the number of pages crawled per day?
Anyone know?
-
Nice find, Ryan.
-
Agreed. Especially since Google's own Gary Illyes responded to the following with:
How long is the delay between making it mobile friendly and it being reflected in the search results?
Illyes says, “As soon as we discover it is mobile friendly, on a URL by URL basis, it will be updated.”
Sounds like when you went responsive they double-checked each URL to confirm. From: http://www.thesempost.com/googles-gary-illyes-qa-upcoming-mobile-ranking-signal-change/. Cheers!
-
I usually analyze backlinks with both GWT and Ahrefs, and Ahrefs doesn't show any abnormally high-DA backlink either.
Agreed, the responsive change is the most probable candidate. I have a couple of other websites I want to turn responsive before April 21st; that's an opportunity to test whether that is the reason.
-
Ah, the responsive change could be a big part of it. You're probably getting crawls from the mobile crawler. GWT wouldn't be the best source for recency on backlinks; I'd actually look for spikes via referrers in Analytics, since GWT isn't always timely when reporting links. Still, the responsive redesign looks like a likely candidate for this, especially with Google's looming April 21st deadline.
-
Two things I forgot to mention:
- Something like 2 weeks ago we made the website responsive. Could it be that the Google mobile crawler is increasing the number of crawled pages? I have to analyze the logs to see if the requests are coming from the Google mobile crawler.
- The total number of indexed pages didn't change, which makes me wonder whether a rise in the number of pages crawled per day is all that relevant.
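To check whether the extra crawls are coming from the mobile crawler, the server logs can be split by user agent. A minimal sketch, assuming combined-format access logs; the log lines below are made up, and while the Googlebot user-agent substrings match what Google publishes, a real check should also verify the requester via reverse DNS, since user agents can be spoofed:

```python
# Hypothetical access-log lines; real ones would be read from the log file.
LOG_LINES = [
    '66.249.66.1 - - [01/Apr/2015:03:12:01 +0000] "GET /page-1 HTTP/1.1" 200 512 "-" '
    '"Mozilla/5.0 (iPhone; CPU iPhone OS 8_3 like Mac OS X) AppleWebKit/600.1.4 '
    '(KHTML, like Gecko) Version/8.0 Mobile/12F70 Safari/600.1.4 '
    '(compatible; Googlebot/2.1; +http://www.google.com/bot.html)"',
    '66.249.66.2 - - [01/Apr/2015:03:12:02 +0000] "GET /page-2 HTTP/1.1" 200 512 "-" '
    '"Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"',
]

def count_crawler_hits(lines):
    """Split Googlebot hits into mobile (smartphone) vs desktop by user agent."""
    counts = {"mobile": 0, "desktop": 0}
    for line in lines:
        if "Googlebot" not in line:
            continue  # not a Google crawler request
        if "iPhone" in line or "Mobile" in line:
            counts["mobile"] += 1
        else:
            counts["desktop"] += 1
    return counts

print(count_crawler_hits(LOG_LINES))  # {'mobile': 1, 'desktop': 1}
```

If the mobile count jumped around the time of the responsive launch, that would support the theory.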
-
Hi Ryan,
- GWT (Search Traffic -> Search Queries) shows a drop of 6% in impressions for brand-based searches (Google Trends shows a similar pattern).
- GWT is not showing any recent backlink with an abnormally high DA.
- We actually had a couple of unusually high traffic spikes from Facebook thanks to a couple of particularly successful posts, but we are talking about spikes of just 5k visits, and they both started after the rise in pages crawled per day.
If you have any other ideas they're more than welcome; I wish I could understand the source of that change so I could replicate it on other websites.
-
I am not sure I understand what you mean. That website has a total of 35k pages submitted through the sitemap to GWT, of which only 8k are indexed. The total number of indexed pages has always been slowly increasing over time; it moved from 6k to 8k in the last couple of months, slowly, with no spikes.
That's not the total number of pages served by the site, since dynamic search results pages amount to around 150k total pages. We purposely do not submit all of them in the sitemap, and GWT shows 70k pages as the total number of indexed pages.
I analyzed Google crawler activity through server logs in the past; it does pick a set of (apparently) random pages every night and crawls them. I actually never analyzed what percentage of those pages are in the sitemap.
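That percentage would be a quick cross-check of two URL sets. A rough sketch with hypothetical data (in practice the crawled set would come from the night's server logs and the other set from the sitemap XML):

```python
def sitemap_coverage(crawled_urls, sitemap_urls):
    """Return the fraction of distinct crawled URLs that appear in the sitemap."""
    crawled = set(crawled_urls)
    if not crawled:
        return 0.0
    return len(crawled & set(sitemap_urls)) / len(crawled)

# Made-up example: one of four crawled URLs is a dynamic page not in the sitemap.
crawled = ["/a", "/b", "/search?q=widgets", "/c"]
sitemap = ["/a", "/b", "/c", "/d"]
print(sitemap_coverage(crawled, sitemap))  # 0.75
```

A low fraction would suggest the extra crawl budget is going to the dynamic pages rather than the 35k submitted ones.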
The internal link structure was built on purpose to favor the ranking of pages we considered more important.
The point is we didn't change anything in the website structure recently. User-generated content has been lowering the duplicate page count, slowly, over time, without any recent spike. We have a PR campaign that is adding backlinks at an average rate of around 3 links per week, and we didn't have any high-DA backlinks appear in the last few weeks.
So I am wondering what made the Google crawler start crawling many more pages per day.
-
Yes, I updated the parameters just before you posted.
-
When you say URL variables, do you mean query string variables like ?key=value?
That is really good advice. You can check in your GWT. If you let Google crawl and it runs into a loop, it will not index that section of your site; it would be too costly for them.
-
I would also check that you have not got a spike of URL parameters becoming available. I recently had a similar issue, and although I had these set up in GWT, the crawler was actively wasting its time on them. Once I added them to robots.txt, the crawl level went back to 'normal'.
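Before deploying rules like that, they can be sanity-checked with Python's stdlib robots.txt parser. One caveat: the stdlib parser only matches plain path prefixes, not the `*` wildcard patterns Google supports for blocking arbitrary query strings, so this hypothetical example blocks a whole path rather than a specific parameter:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt blocking a parameter-heavy section of the site.
RULES = """\
User-agent: *
Disallow: /search
"""

parser = RobotFileParser()
parser.parse(RULES.splitlines())

# The rule matches path prefixes, query string included.
print(parser.can_fetch("Googlebot", "http://example.com/search?q=widgets"))  # False
print(parser.can_fetch("Googlebot", "http://example.com/about"))             # True
```

For true per-parameter blocking (e.g. `Disallow: /*?sessionid=`), Google's own robots.txt tester in GWT is the safer check, since it implements the wildcard extensions.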
-
There could be several factors... Maybe your brand-based search is prompting Google to capture more of your site. Maybe you got a link from a very high-authority site that prompts higher crawl volumes. Queries that prompt freshness related to your site could also spur Google on. It is a lot of guesswork, but it can be whittled down some by a close look at Analytics and perhaps tomorrow's OSE update (Fresh Web Explorer might provide some clues in the meantime). At least you're moving in the right direction. Cheers!
-
There are two variables in play and you are picking up on one.
If there are 1,000 pages on your website, then Google may index all 1,000 if it is aware of all the pages. As you indicated, it is also Google's decision how many of your pages to index.
The second factor, which is most likely the case in your situation, is that Google only has two ways to discover your pages. One is a sitemap submitted in GWT listing all of your known pages; Google would then have the choice to index all 1,000, since it would be aware of their existence. The other is following links, and it sounds like your website is relying on those. If you have 1,000 pages and a home page with one link leading to an about-us page, then Google is only aware of two pages on your entire website. Your website has to have an internal link structure that Google can crawl.
Imagine your website like a tree root structure. For Google to reach and index every page, it has to have clear, defined, and easy access. A website whose home page links to page A, which links to page B, which links to page C, which links to page D, which links to 500 pages, can easily lose those 500 pages if there is an obstruction anywhere along the chain leading to page D, because Google can't crawl to page D to see the pages it links to.
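That chain can be made concrete with a breadth-first walk over the internal-link graph: a page's "depth" is the number of clicks from the home page, and any page unreachable from home is invisible to the crawler. A toy sketch with hypothetical page names:

```python
from collections import deque

# Toy internal-link graph mirroring the chain described above:
# home -> A -> B -> C -> D -> 500 deep pages (three shown here).
LINKS = {
    "home": ["A"],
    "A": ["B"],
    "B": ["C"],
    "C": ["D"],
    "D": ["deep-1", "deep-2", "deep-3"],
}

def crawl_depths(start, links):
    """Breadth-first walk: clicks from `start` to each reachable page."""
    depths = {start: 0}
    queue = deque([start])
    while queue:
        page = queue.popleft()
        for nxt in links.get(page, []):
            if nxt not in depths:
                depths[nxt] = depths[page] + 1
                queue.append(nxt)
    return depths

print(crawl_depths("home", LINKS)["deep-1"])  # 5
```

Remove the "C" -> "D" link and every deep page drops out of the result entirely, which is exactly how a single obstruction orphans 500 pages.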