Crawl started on the 18th of December and still hasn't completed!
-
As the title states I started a crawl over 2 weeks ago now and it still hasn't completed. Does anyone know why this could be?
Thanks
-
Thanks, I'll give that a go!
-
Hi Laurance,
Thanks for writing in, and sorry that your crawl never completed. I'm afraid the issue is that all crawlers are being blocked in the site's robots.txt file, so we aren't able to fully access the site to complete a crawl. If you were to update the robots.txt file to allow our crawler to access the site, we would then be able to get the crawl completed for you. You can use the user-agent rogerbot to grant our crawler access specifically, or you can remove the Disallow: / directive from the robots.txt file to allow us access. Once you've done that, let me know and I will have our engineers complete the crawl for you.
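If it helps, here's a quick way to sanity-check what a given robots.txt allows, using Python's standard-library robot parser. The rules below are a hypothetical example of the kind of file described above (block everything by default, let rogerbot in), not your actual file:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: allow Moz's crawler (rogerbot),
# block all other crawlers.
rules = """\
User-agent: rogerbot
Allow: /

User-agent: *
Disallow: /
""".splitlines()

parser = RobotFileParser()
parser.parse(rules)

print(parser.can_fetch("rogerbot", "https://example.com/page"))  # True
print(parser.can_fetch("OtherBot", "https://example.com/page"))  # False
```

Pointing the checker at your live file (via `parser.set_url(".../robots.txt")` and `parser.read()`) will tell you whether rogerbot can get in before you ask for a re-crawl.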
I look forward to hearing back soon.
-Chiaryn
-
Hi Laurence,
What do you mean the crawl still hasn't been completed? Do you mean not all your sites are indexed?
I suggest checking your Google Webmaster Tools to see how your site is doing. Once you log in, click on your website, then click on the Health tab on the left-hand side. You can see all your crawl stats and errors there. You can also use the "Fetch as Google" tool to ask Google to crawl your pages again.
Another thing: if you have a lot of pages and they are not interlinked correctly, it may take weeks for Google to crawl all of them. There is no set time limit on how long it takes. Maybe submit a sitemap to speed up the process of finding all your pages?
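If you don't already have a sitemap, a minimal one is easy to generate. Here's a sketch in Python (the page URLs are placeholders; substitute your site's real pages):

```python
from xml.sax.saxutils import escape

# Placeholder URLs; substitute your site's real pages.
pages = [
    "https://www.example.com/",
    "https://www.example.com/about",
    "https://www.example.com/blog/",
]

# One <url> entry per page, XML-escaped in case a URL
# contains characters like & that need encoding.
entries = "\n".join(
    f"  <url><loc>{escape(url)}</loc></url>" for url in pages
)
sitemap = (
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
    f"{entries}\n"
    "</urlset>"
)
print(sitemap)
```

Save the output as sitemap.xml at the site root and submit it in Webmaster Tools' Sitemaps section so Google can discover all your pages directly.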
Related Questions
-
Warnings, Notices, and Errors- don't know how to correct these
I have been watching my Notices, Warnings, and Errors increase since I added a blog to our WordPress site. Is this affecting our SEO? We now have the following:
- 2 4XX errors. One is for a page whose title and nav we changed in mid-March, and one is for a page we removed. The nav on the site is working as far as I can see. This seems like a cache issue, but who knows?
- 20 warnings for “missing meta description tag”. These are all blog archive and author pages. Some have resulted from pagination and are “Part 2, Part 3, Part 4”, etc. Others are the first page for authors. And there is one called “new page” that I can’t locate in our Pages admin and have no idea what it is.
- 5 warnings for “title element too long”. These are also archive pages that carry the blog name, so they are pages whose titles I can’t control through the admin, plus the “Part 2” pages and so on.
- 71 notices for “Rel Canonical”. The rel canonicals are all being generated automatically and are for pages of all sorts: some are content pages within the site, a bunch are blog posts, and the rest are archive pages for dates, blog categories, and pagination.
- 6 are 301s. These are split between blog pagination, author pages, and a couple of site content pages (contact and portfolio). Can’t imagine why these are here.
- 8 meta-robots nofollow. These are blog articles, but only some of the posts. Don’t know why we are generating this for some and not all. And half of them are for the exact same pages, so there are really only 4 originals on this list; the others are dupes.
- 8 blocked by meta-robots. These are also for the same 4 blog posts, duplicated twice each.

We use All in One SEO. There is an option to use noindex for archives and categories, which I do not have enabled, and also an option to autogenerate descriptions, which I do not have enabled. I wasn’t concerned about these at first, but I read these (below) questions yesterday, and think I’d better do something as these are mounting up.
I’m wondering if I should be asking our team for some code changes, but I'm not sure what exactly would be best. http://www.seomoz.org/q/pages-i-dont-want-customers-to-see http://www.robotstxt.org/meta.html Our site is http://www.fateyes.com Thanks so much for any assistance on this!
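For reference, if the decision is to noindex the archive/pagination pages and canonicalize the duplicates, the code changes in question amount to a couple of head tags along these lines. This is a hypothetical sketch (the URL is made up; All in One SEO's archive noindex option can emit the first tag for you):

```html
<!-- On archive/pagination pages you don't want indexed (hypothetical example) -->
<meta name="robots" content="noindex, follow">

<!-- On pages with duplicate variants, point search engines at the preferred URL -->
<link rel="canonical" href="http://www.fateyes.com/blog/example-post/">
```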
Moz Pro | | gfiedel0 -
1 week has passed: Crawled pages still N/A
Roughly one week ago I went Pro, and then I created a campaign for the smallish webshop that I'm employed at; however, it doesn't seem to crawl. I've checked our visitor logs, and while we find other bots such as Google, Bing, Yandex, and so forth, the SEOmoz bot hasn't been visible. Perhaps I'm just looking for the wrong user agent; oh well, onwards. While I thought it might take time, as a small test I added a domain that I've owned for some time but don't really use; that target site is only 17 pages, and it was crawled almost within the hour. I realise that our ~5,000 pages on the main campaign would take some time, but wouldn't the initial 250 pages be crawled by now? I should add that I didn't add http:// to the original campaign, but I did for the one that got crawled. I cannot seem to change this myself in order to test whether that's the problem or not. Does anyone have any ideas? Should I just wait, or is there something I can actively do to force it to start rolling?
Moz Pro | | Hultin0 -
Crawl diagnostics taking too long
I started a crawl 2 days ago, and after almost 48 hours it was still going, so I deleted the entire campaign and resubmitted it. It's been 13 hours and it's still going. What happened to getting initial results in 2 hours? I've never had this problem before, and I've run several campaign crawls here. Just wondering if there is a known issue I can't seem to find? Thank you
Moz Pro | | LisaS130 -
You've recently updated your brand rules. We're fetching your new data, and we should have it ready for you within the hour.
Why do I always see this message when entering a certain campaign? "You've recently updated your brand rules. We're fetching your new data, and we should have it ready for you within the hour." I didn't change a thing since I started this campaign two or three weeks ago...
Moz Pro | | alsvik0 -
Crawl Diagnostics Error Spike
With the last crawl update to one of my sites, there was a huge spike in errors reported. The errors jumped by 16,659, the majority of which fall under the duplicate title and duplicate content categories. When I look at the specific issues, it seems that the crawler is crawling a ton of blank pages on the site's blog through pagination. The odd thing is that the site has not been updated in a while, and prior to this crawl on Jun 4th there were no reports of these blank pages. Could this be an error on the crawler's side of things? Any suggestions on next steps would be greatly appreciated. I'm adding an image of the error spike (Xovep.jpg).
Moz Pro | | VanadiumInteractive1 -
What's Happened To OSE's External Links For My Site?
Hi, I'm just taking my first steps with Open Site Explorer. I've hit a problem that I'd be really grateful for some help with. I'm running a website (that I didn't create) and want to get a clear picture of the inbound links. When I enter the URL into the OSE search bar, there are no external domains listed under the 'Inbound Links' and 'Linking Domains' tabs; only internal site links are registered. Setting the filters "only external" + "pages on this root domain" under the Inbound Links tab does bring up a list of external sites. But with other sites I've never had to set filters just to see a list of inbound links. Also, the breakdown under the Linking Domains tab remains the same: only links within my site are shown. Does this ring a bell? Any ideas what I might be doing wrong, or what might be wrong with my site to cause this problem? Cheers for helping, Josh
Moz Pro | | JoshAustin460 -
Crawl test tool from SEOmoz - which URLs does it actually crawl?
I am using the crawl test tool from SEOmoz for the first time, and I do not really understand which URLs the tool is going to crawl. First, it says "enter any subdomain" --> why can't I do the crawl for the root domain? Second, it says "we'll crawl up to 3,000 linked-to pages" --> does that mean that the tool crawls all internal links that it can find on the given domain? Thanks for your help!
Moz Pro | | Elke.GetApp0 -
Why won't scheduled crawl of my site begin?
I have had a campaign running on SEOmoz for over a month. It has been showing that a crawl was scheduled to start on 12/21. Now it's 12/23, there has not been a new crawl, and it still says scheduled for 12/21. Does anyone know why this is happening or how to fix it? Thanks
Moz Pro | | Prime850