Crawl rate dropped to zero
-
Hello, I recently moved my site at GoDaddy from cPanel hosting to Managed WordPress. I bought this transfer directly from GoDaddy customer service. In the process they accidentally changed my domain from www to non-www. I changed it back after the migration, but as a result the site's crawl rate shown in Search Console fell to zero and has not risen at all since then.
Beyond this, the site does not show any other errors: I can ask Google to manually fetch my pages and it works as before; only the crawl rate seems to have dropped permanently. GoDaddy customer service also claims they do not see any errors, but I still think they somehow caused this during the migration when the URL changed, since the timing matches perfectly. Also, when they accidentally removed the www, the crawl rate of my site's non-www version went up, but it fell back to zero when I changed it back to the www version. Now the crawl rate of both the www and non-www versions is zero. How do I get it to rise again? Customer service also said the problem may be related to the FTP data in Search Console, but they were not able to help any further. Would someone here be able to help me with this in any way, please?
-
Hello, answers to the bolded questions:
- At this rate, how long would it take Google to crawl all of your pages, (maybe it feels 10-15 is fast enough)? Over 50 days. I still cannot believe it would be just a coincidence that the crawl rate dropped so suddenly only because Google suddenly decided my pages should not be crawled that often. After all, the amount of new content, the quality of new links, and all the other factors keep improving on my site, and before the drop the crawl rate increased steadily. It has to be some technical issue?
- Has the average response time increased? If so, maybe Google feels it's overloading the server & backing off. No, it has actually gone down a little bit (not much, though).
-
Interesting. I have 2 more thoughts:
- At this rate, how long would it take Google to crawl all of your pages, (maybe it feels 10-15 is fast enough)?
- Has the average response time increased? If so, maybe Google feels it's overloading the server & backing off.
-
Crawl rate is still extremely slow, averaging 10-15 pages per day, except when I submit pages to be crawled manually; then it crawls those pages. Before the drop the crawl rate was never under 200 per day and was usually over 1,000. Anything more I can do? As far as I can see it has had no effect on my rankings or anything else, but I would still like this fixed. It has to be something to do with the fact that I changed my hosting to GoDaddy Managed WordPress hosting, but they have no clue what could cause this. Changing the robots.txt file seemed to have no effect, or a very minimal one.
-
Not that I'm aware of, unfortunately. Patience is an important skill when dealing with Google.
-
Thanks! I will try that. I see that Search Console shows crawl rates with a few days' delay; is there somewhere I could check them instantly?
-
I thought of one other possibility: Your sitemap.xml is probably auto-generated, so this shouldn't be a problem, but check to make sure that the URLs in the sitemap.xml have the www.
Other than that I'm out of ideas - I would wait a few days to see what happens, but maybe someone else with more experience watching Google will have seen this before. If it does resolve, I'd like to know what worked.
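If you'd rather check the sitemap mechanically than by eye, here's a rough sketch using only Python's standard library; the sample sitemap and the example.com domain are placeholders for illustration:

```python
# Rough check that every URL in a sitemap uses the www host.
# "example.com" and the sample XML below are placeholders for illustration.
import xml.etree.ElementTree as ET

SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def non_www_urls(sitemap_xml):
    """Return all <loc> entries whose host is missing the www prefix."""
    root = ET.fromstring(sitemap_xml)
    locs = [el.text.strip() for el in root.iter(SITEMAP_NS + "loc")]
    return [u for u in locs if not u.startswith(("http://www.", "https://www."))]

sample = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://www.example.com/</loc></url>
  <url><loc>https://example.com/about/</loc></url>
</urlset>"""

print(non_www_urls(sample))  # → ['https://example.com/about/']
```

Running it against your real sitemap.xml contents would list any non-www URLs that slipped in during the migration.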
-
I'm not convinced that robots.txt is causing your problem, but it can't hurt to change it back. In fact, while looking for instructions on how to change it I came across this blog post by Joost de Valk, (aka Yoast), that pretty much says you should remove everything that's currently in your robots.txt - and his reasoning holds for each line:
- Blocking wp-content/plugins will stop Google from loading JS and/or CSS resources that it might need to render the page properly.
- Blocking wp-admin is redundant, because if wp-admin is linked anywhere it can still be found, and the important pages already send an X-Robots HTTP header that says not to index them.
If you're using Yoast SEO, here are instructions on how to change the robots.txt file.
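Following that advice, the cleaned-up file ends up nearly empty. A minimal sketch of what it might look like (the domain in the Sitemap line is a placeholder, and that line itself is optional):

```text
User-agent: *
Disallow:

Sitemap: https://www.example.com/sitemap.xml
```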
-
Hi, one more thing. Are you 100% sure that the robots.txt file has nothing to do with this? It changed at the same time the problems started to occur. It used to be:
User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php
But now it is:
User-agent: *
Crawl-delay: 1
Disallow: /wp-content/plugins/
Disallow: /wp-admin/
At the same time, "blocked resources" notifications started to appear in Search Console.
Blocked Resources > Rendering without certain resources can impair the indexing of your web pages. Learn more. Status: 3/19/16, 152 pages with blocked resources.
This has to have something to do with it right?
-
Thank you for your answer; my answers are bolded below:
- Do you see any crawl errors in the Google Search Console? **Nothing new after the crawl rate dropped, just some old soft 404 errors and old not-found errors.**
- If you search for your site on Google, what do you see, (does your snippet look normal)? Yes, everything looks perfectly normal, just like before the crawl rate dropped.
- How many pages does Google say it has indexed? Is it possible it's indexed everything and is taking a break, (does it even do that?) I don't think this is possible, since the crawl rate dropped almost instantly from an average of 400 to zero after the site migration.
One theory is: When you moved to the non-www version of the site, Google started getting 301s redirecting it from www to non-www, and now that you've gone back to www it's getting 301s redirecting it from non-www to www, so it's got a circular redirect. If this is the problem, how should I start getting it fixed?
Here's what I would do to try to kick-start indexing, if you haven't already:
- Make sure you have the "Preferred Domain" set to the www version of your site in both the www and non-www versions of your site in Google Search Console. Yes, that is how it has been the whole time.
- In the Search Console for the www-version of your site, re-submit your sitemap. Done
- In the Search Console for the www-version of your site, do a Fetch as Google on your homepage, and maybe a couple of other pages, and when the Fetch is done use the option to submit those pages for indexing, (there's a monthly limit on how much of this you can do). I have done this many times since I noticed the problem; Fetch as Google works normally without any issues.
Is there anything more I can do? If I want to hire someone to fix this, are there any recommendations? I am not a tech guy, so this is quite a difficult task for me.
-
I don't know why this is happening, but this is what I would check:
- Do you see any crawl errors in the Google Search Console?
- If you search for your site on Google, what do you see, (does your snippet look normal)?
- How many pages does Google say it has indexed? Is it possible it's indexed everything and is taking a break, (does it even do that?)
One theory is: When you moved to the non-www version of the site, Google started getting 301s redirecting it from www to non-www, and now that you've gone back to www it's getting 301s redirecting it from non-www to www, so it's got a circular redirect.
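To make the circular-redirect idea concrete, here's a toy sketch (pure Python, hypothetical URLs) of how a crawler following 301s would end up back where it started:

```python
# Toy model of the circular-redirect theory: follow a map of 301 targets
# until we hit a final page or revisit a URL. All URLs here are made up.
def follow_redirects(start, redirects, max_hops=10):
    """Follow redirects from `start`; return (last_url, looped)."""
    seen = {start}
    url = start
    for _ in range(max_hops):
        nxt = redirects.get(url)
        if nxt is None:       # no redirect configured: final destination
            return url, False
        if nxt in seen:       # back to a URL we already visited: a loop
            return nxt, True
        seen.add(nxt)
        url = nxt
    return url, True          # too many hops, treat as a loop

# The situation described above: www 301s to non-www, which 301s back.
loop = {
    "http://www.example.com/": "http://example.com/",
    "http://example.com/": "http://www.example.com/",
}
print(follow_redirects("http://www.example.com/", loop))  # → ('http://www.example.com/', True)
```

In practice you could check the live headers yourself, e.g. `curl -I http://example.com/` and `curl -I http://www.example.com/`, and confirm each 301's Location header points where you expect rather than back at the other host.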
Here's what I would do to try to kick-start indexing, if you haven't already:
- Make sure you have the "Preferred Domain" set to the www version of your site in both the www and non-www versions of your site in Google Search Console.
- In the Search Console for the www-version of your site, re-submit your sitemap.
- In the Search Console for the www-version of your site, do a Fetch as Google on your homepage, and maybe a couple of other pages, and when the Fetch is done use the option to submit those pages for indexing, (there's a monthly limit on how much of this you can do).
Good luck!
-
That's not so horrible - it just says not to crawl the plugins directory or the admin, and to delay a second between requests. You probably don't want your plugins or admin directories being indexed, and according to this old forum post Google ignores the crawl-delay directive, so the robots.txt isn't the problem.
-
Hi, my robots.txt file looks like this:
User-agent: *
Crawl-delay: 1
Disallow: /wp-content/plugins/
Disallow: /wp-admin/
This is not how it's supposed to look, right? Could this cause the problem?