Only Crawling 1 page?
-
Hi Guys,
Any advice much appreciated on this!
Recently set up a new campaign on my dashboard with just 5 keywords. The domain is brammer.co.uk, and a quick Google site:brammer.co.uk search shows a good number of indexed pages.
However, the first SEOmoz tool crawl has only crawled 1 URL!
"Last Crawl Completed: Apr. 12th, 2011 Next Crawl Starts: Apr. 17th, 2011"
Any ideas what's stopping the tool crawling any more of the site?
Cheers in advance..
J
-
Agreed. I've passed this to the devs.
You've been most helpful today - Thanks for your time, very much appreciated.
J
-
Well, I don't think we can hold this against SEOmoz particularly; if something as basic as Xenu can't crawl it and the W3C validator can't view its source code, I think it's somewhat fair to blame the site. Even if Google can see it, I would imagine that once the matter is fixed you might see a boost regardless.
Google needs to be able to see sites no matter what state they're in, because you or I can do the same; Google has the resources to implement that as well. Smaller operations (everyone else) have to make do with figuring it out the old-fashioned way, through the source code.
I think it's the encoding, simply because that is the first port of call on the page and it's broken. If it were anything further down we would at least be seeing some page data cropping up.
The only other thing it could be (because I can't find a robots.txt) is something server-side, and that's very difficult to establish without direct access.
-
Interesting. Do you think that could be it? Googlebot seems to find its way around it, though. I'd have thought that if Google could do it then the SEOmoz tools would; otherwise I'd have expected an inaccessibility error or similar from Moz.
I'll get that changed and see if it makes a difference..
Thanks again for looking at it!
-
Xenu doesn't like it either; it only crawls the one page.
Ran a W3C validation check and it flagged that there is no character encoding specified, which may well be the whole of the problem.
If you look at the source code that W3C displays, you can see it's essentially an empty document.
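Since the missing charset is the prime suspect, a quick way to check whether a chunk of HTML declares one at all is a few lines of Python. This is a minimal sketch; the `declared_charset` helper is hypothetical, and a charset can also be declared in the HTTP Content-Type header, which this doesn't cover:

```python
import re
from typing import Optional

def declared_charset(html: str) -> Optional[str]:
    """Return the charset declared in an HTML document's meta tags, or None.

    Handles both the HTML5 form (<meta charset="...">) and the older
    http-equiv Content-Type form.
    """
    match = re.search(r'<meta[^>]+charset=["\']?([\w-]+)', html, re.IGNORECASE)
    return match.group(1) if match else None

# A page that declares its encoding:
print(declared_charset('<head><meta charset="utf-8"></head>'))  # utf-8
# An "empty document" with no declaration, like the one W3C shows here:
print(declared_charset('<head><title>Brammer</title></head>'))  # None
```

If this returns None and the server's Content-Type header doesn't name a charset either, a strict parser has every excuse to give up on the page.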
-
Any ideas? I've got a report due on the 19th and the next crawl is due on the 17th. Would be great to remove any blockers before then if possible. Thanks!
-
Definitely. Cheers.
-
Worth a stab.
Probably worthwhile setting that forwarding up in the meantime anyway.
-
Thanks Tom, but no, it's set up as www.brammer.co.uk so it's not that.
-
Well, if you've just put in http://brammer.co.uk it may well be falling over because the domain isn't forwarding to http://www.brammer.co.uk.
That's my guess. You just need to forward the domain and it should all be sorted.
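For what it's worth, on an Apache server that forwarding usually comes down to a couple of mod_rewrite lines in the site's .htaccess. A hedged sketch, assuming Apache with mod_rewrite enabled (the exact rules depend on the host):

```apache
RewriteEngine On
# 301-redirect the bare domain to the www version
RewriteCond %{HTTP_HOST} ^brammer\.co\.uk$ [NC]
RewriteRule ^(.*)$ http://www.brammer.co.uk/$1 [R=301,L]
```

A permanent (301) redirect is the right choice here, since it also consolidates any link equity onto the www version.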