Crawl rate dropped to zero
-
Hello, I recently moved my site on GoDaddy from cPanel hosting to Managed WordPress. I bought this transfer directly from GoDaddy customer service. In the process they accidentally changed my domain from www to non-www. I changed it back after the migration, but since then the site's crawl rate in Search Console has fallen to zero and has not risen at all.
Beyond that, the website doesn't show any other errors. I can ask Google to fetch my pages manually and it works as before; only the crawl rate seems to have dropped permanently. GoDaddy customer service also claims they don't see any errors, but I think they caused this during the migration when the URL changed, since the timing matches perfectly. Also, when they accidentally removed the www, the crawl rate of my site's non-www version went up, but it fell back to zero when I changed it back to the www version. Now the crawl rate of both the www and non-www versions is zero. How do I get it to rise again? Customer service also said the problem may be related to the FTP data of Search Console, but they were not able to help any further. Would someone here be able to help me with this in any way, please?
-
Hello, answers to the questions in bold:
- At this rate, how long would it take Google to crawl all of your pages, (maybe it feels 10-15 is fast enough)? **Over 50 days. I still cannot believe it would be just a coincidence that the crawl rate dropped so suddenly only because Google suddenly decided my pages shouldn't be crawled that often. After all, the amount of new content, the quality of new links, and all the other factors keep improving on my site, and before the drop the crawl rate increased steadily. It has to be some technical issue?**
- Has the average response time increased? If so, maybe Google feels it's overloading the server & backing off. **No, it has actually gone down a little bit (not much, though).**
-
Interesting. I have 2 more thoughts:
- At this rate, how long would it take Google to crawl all of your pages, (maybe it feels 10-15 is fast enough)?
- Has the average response time increased? If so, maybe Google feels it's overloading the server & backing off.
-
The crawl rate is still extremely slow, averaging 10-15 pages per day, except when I send pages to be crawled manually; then it crawls those pages. Before the drop, the crawl rate was never under 200 per day and was usually over 1000. Is there anything more I can do? As far as I can see it has no effect on my rankings or anything else, but I would still like to get this fixed. It has to have something to do with the fact that I changed my hosting to GoDaddy Managed WordPress hosting, but they have no clue what could be causing this. The robots.txt change seemed to have no effect, or only a very minimal effect.
-
Not that I'm aware of, unfortunately. Patience is an important skill when dealing with Google
-
Thanks! I will try that. I see that Search Console shows crawl rates with a few days' delay; is there somewhere I could check whether it's working instantly?
-
I thought of one other possibility: Your sitemap.xml is probably auto-generated, so this shouldn't be a problem, but check to make sure that the URLs in the sitemap.xml have the www.
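If you want to spot-check that without reading the file by hand, here's a minimal sketch; the sitemap URL is a placeholder (use your own), and it simply lists any `<loc>` entries that don't start with the www host:

```python
# Minimal sketch: flag any sitemap <loc> entries that don't use the www host.
# The sitemap URL below is a placeholder - substitute your real sitemap address.
import urllib.request
import xml.etree.ElementTree as ET

SITEMAP_URL = "https://www.example.com/sitemap.xml"  # placeholder

with urllib.request.urlopen(SITEMAP_URL) as resp:
    root = ET.fromstring(resp.read())

# Collect every <loc> value; this works for both a regular sitemap (<url><loc>)
# and a sitemap index (<sitemap><loc>).
ns = "{http://www.sitemaps.org/schemas/sitemap/0.9}"
locs = [el.text.strip() for el in root.iter(ns + "loc") if el.text]

missing_www = [u for u in locs if not u.startswith(("http://www.", "https://www."))]
print(f"{len(locs)} URLs checked, {len(missing_www)} without www")
for u in missing_www[:20]:
    print("  " + u)
```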
Other than that I'm out of ideas - I would wait a few days to see what happens, but maybe someone else with more experience watching Google will have seen this before. If it does resolve, I'd like to know what worked.
-
I'm not convinced that robots.txt is causing your problem, but it can't hurt to change it back. In fact, while looking for instructions on how to change it I came across this blog post by Joost de Valk, (aka Yoast), that pretty much says you should remove everything that's currently in your robots.txt, and his arguments hold up for both rules:
- Blocking wp-content/plugins will stop Google from loading JS and/or CSS resources that it might need to render the page properly.
- Blocking wp-admin is redundant: if wp-admin is linked to, it can still be found anyway, and the important pages already send an X-Robots-Tag HTTP header that says not to index them.
If you're using Yoast SEO, here are instructions on how to change the robots.txt file.
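If you want to verify what those rules actually block before changing anything, here's a minimal sketch using Python's built-in robots.txt parser. The domain and the plugin asset path are placeholders (grab a real CSS or JS URL from your page source), and note that Python's parser only approximates Google's own matching rules:

```python
# Minimal sketch: test whether Googlebot may fetch a few representative URLs
# under the current robots.txt. The domain and asset paths are placeholders.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser("https://www.example.com/robots.txt")  # placeholder domain
rp.read()

test_urls = [
    "https://www.example.com/",
    "https://www.example.com/wp-content/plugins/some-plugin/css/style.css",  # placeholder asset
    "https://www.example.com/wp-admin/admin-ajax.php",
]

for url in test_urls:
    verdict = "allowed" if rp.can_fetch("Googlebot", url) else "BLOCKED"
    print(f"{verdict:8} {url}")
```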
-
Hi, one more thing. Are you 100% sure that the robots.txt file has nothing to do with this? It changed at the same time the problems started to occur. It used to be:
User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php
But now it is:
User-agent: *
Crawl-delay: 1
Disallow: /wp-content/plugins/
Disallow: /wp-admin/
At the same time, "blocked resources" notifications started to appear in Search Console:
Blocked Resources > Rendering without certain resources can impair the indexing of your web pages. Status: 3/19/16 - 152 pages with blocked resources.
This has to have something to do with it right?
-
Thank you for your answer; my answers are in bold below.
- Do you see any crawl errors in the Google Search Console? **Nothing new since the crawl rate dropped, just some old soft 404 errors and old not-found errors.**
- If you search for your site on Google, what do you see, (does your snippet look normal)? **Yes, everything looks perfectly normal, just like before the crawl rate dropped.**
- How many pages does Google say it has indexed? Is it possible it's indexed everything and is taking a break, (does it even do that?) **I don't think this is possible, since the crawl rate dropped almost instantly from an average of 400 to zero after the site migration.**
One theory is: When you moved to the non-www version of the site, Google started getting 301s redirecting it from www to non-www, and now that you've gone back to www it's getting 301s redirecting it from non-www to www, so it's got a circular redirect. **If this is the problem, how should I start getting it fixed?**
Here's what I would do to try to kick-start indexing, if you haven't already:
- Make sure you have the "Preferred Domain" set to the www version of your site in both the www and non-www versions of your site in Google Search Console. **Yes, that is how it has been the whole time.**
- In the Search Console for the www-version of your site, re-submit your sitemap. **Done.**
- In the Search Console for the www-version of your site, do a Fetch as Google on your homepage, and maybe a couple of other pages, and when the Fetch is done use the option to submit those pages for indexing, (there's a monthly limit on how much of this you can do). **I have done this many times since I noticed the problem; Fetch as Google works normally without any issues.**
Is there anything more I can do? If I want to hire someone to fix this, are there any recommendations? I am not a tech guy, so this is quite a difficult task for me.
-
I don't know why this is happening, but this is what I would check:
- Do you see any crawl errors in the Google Search Console?
- If you search for your site on Google, what do you see, (does your snippet look normal)?
- How many pages does Google say it has indexed? Is it possible it's indexed everything and is taking a break, (does it even do that?)
One theory is: When you moved to the non-www version of the site, Google started getting 301s redirecting it from www to non-www, and now that you've gone back to www it's getting 301s redirecting it from non-www to www, so it's got a circular redirect.
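One quick way to check whether that's happening is to follow the redirect chains yourself. Here's a minimal sketch using the third-party requests library; example.com is a placeholder, and a healthy setup shows a single 301 from the non-www URL to the www URL, with a plain 200 on the www URL:

```python
# Minimal sketch: print the redirect chain for the www and non-www homepages.
# example.com is a placeholder - use your own domain.
import requests

for start in ("http://example.com/", "http://www.example.com/"):
    print(start)
    try:
        resp = requests.get(start, allow_redirects=True, timeout=10)
    except requests.TooManyRedirects:
        print("  redirect loop detected - this would stop a crawler cold")
        continue
    for hop in resp.history:                      # each intermediate redirect
        print(f"  {hop.status_code} -> {hop.headers.get('Location')}")
    print(f"  {resp.status_code} at {resp.url}")  # final response
```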
Here's what I would do to try to kick-start indexing, if you haven't already:
- Make sure you have the "Preferred Domain" set to the www version of your site in both the www and non-www versions of your site in Google Search Console.
- In the Search Console for the www-version of your site, re-submit your sitemap, (you can also ping Google with the sitemap URL directly - see the sketch after this list).
- In the Search Console for the www-version of your site, do a Fetch as Google on your homepage, and maybe a couple of other pages, and when the Fetch is done use the option to submit those pages for indexing, (there's a monthly limit on how much of this you can do).
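If you'd rather not wait on the Search Console UI, Google also accepts a simple sitemap ping over HTTP. A minimal sketch, assuming your sitemap lives at the placeholder URL below (use the same address you submitted in Search Console):

```python
# Minimal sketch: ping Google with the sitemap URL to request a re-read.
# The sitemap address is a placeholder - use the URL submitted in Search Console.
import urllib.parse
import urllib.request

sitemap = "https://www.example.com/sitemap_index.xml"  # placeholder
ping = "https://www.google.com/ping?sitemap=" + urllib.parse.quote(sitemap, safe="")

with urllib.request.urlopen(ping) as resp:
    print(resp.status, resp.reason)  # 200 means Google accepted the ping
```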
Good luck!
-
That's not so horrible - it just says not to crawl the plugins directory or the admin, and to delay a second between requests. You probably don't want your plugins or admin directories being indexed, and according to this old forum post Google ignores the crawl-delay directive, so the robots.txt isn't the problem.
-
Hi, my robots.txt file looks like this:
User-agent: *
Crawl-delay: 1
Disallow: /wp-content/plugins/
Disallow: /wp-admin/
This is not how it's supposed to look, right? Could this cause the problem?