Only Crawling 1 page?
-
Hi Guys,
Any advice much appreciated on this!
Recently set up a new campaign on my dashboard with just 5 keywords. The domain is brammer.co.uk, and a quick Google site:brammer.co.uk search shows a good number of indexed pages.
However - the first SEOmoz tool crawl has only crawled 1 URL!!
"Last Crawl Completed: Apr. 12th, 2011 Next Crawl Starts: Apr. 17th, 2011"
Any ideas what's stopping the tool crawling any more of the site??
Cheers in advance..
J
-
Agreed. I've passed this to the devs.
You've been most helpful today - Thanks for your time, very much appreciated.
J
-
Well, I don't think we can hold this against SEOmoz particularly; if something as basic as Xenu can't crawl it and the W3C validator can't read its source code, I think it's somewhat fair to blame the site. Even if Google can see it, I would imagine that if the matter were fixed you might see a boost regardless.
Google needs to be able to see sites no matter what state they're in, because a human visitor can, and Google has the resources to manage that. Smaller operations (everything else) have to make do with figuring it out the old-fashioned way, through the source code.
I think it's the encoding simply because that is the first port of call on the page and it's broken. If it was anything further down we would at least be seeing some page data cropping up.
The only other thing it could be (because I can't find a robots.txt) is something server side, and that's something it's very difficult to establish without direct access.
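If a robots.txt did turn up, this is roughly how you could check whether it was the blocker. A minimal sketch using Python's standard library; the robots.txt content and the "rogerbot" user-agent here are hypothetical, just to illustrate the check:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content -- brammer.co.uk doesn't appear to have
# one, but if it did, rules like this could block a crawler entirely.
robots_txt = """\
User-agent: rogerbot
Disallow: /private/

User-agent: *
Disallow:
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# The assumed crawler user-agent is allowed the homepage but not /private/
print(rp.can_fetch("rogerbot", "http://www.example.com/"))           # True
print(rp.can_fetch("rogerbot", "http://www.example.com/private/x"))  # False
```

In practice you'd point `RobotFileParser.set_url()` at the live file and call `read()`, but parsing the text directly makes the rules easy to test.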
-
Interesting. Do you think that could be it? Googlebot seems to find its way around it though. I'd have thought if G could do it then the SEOmoz tools would; otherwise I'd have imagined getting an inaccessibility error or similar from moz.
I'll get that changed and see if it makes a difference..
Thanks again for looking at it!
-
Xenu doesn't like it either; it only indexes the one page.
Ran a W3C validation check and it flagged that there is no character encoding specified, which may well be the whole of the problem.
If you look at the source code that w3c displays you can see it's essentially an empty document.
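A quick way to reproduce what the validator is complaining about is to scan the page's `<head>` for any charset declaration. A minimal sketch with Python's standard-library HTML parser, using made-up sample markup rather than the live site:

```python
from html.parser import HTMLParser

class CharsetFinder(HTMLParser):
    """Record any character-encoding declaration found in <meta> tags."""
    def __init__(self):
        super().__init__()
        self.charset = None

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        d = dict(attrs)
        if "charset" in d:
            # HTML5 style: <meta charset="utf-8">
            self.charset = d["charset"]
        elif d.get("http-equiv", "").lower() == "content-type":
            # HTML4 style: <meta http-equiv="Content-Type"
            #                    content="text/html; charset=utf-8">
            content = d.get("content", "")
            if "charset=" in content:
                self.charset = content.split("charset=")[-1].strip()

# Page with no declaration -- this is what the validator trips over
broken = CharsetFinder()
broken.feed("<html><head><title>t</title></head><body></body></html>")
print(broken.charset)  # None

# Page with a declaration
fixed = CharsetFinder()
fixed.feed('<html><head><meta charset="utf-8"></head><body></body></html>')
print(fixed.charset)  # utf-8
```

Note this only checks the markup; the encoding can also be declared in the HTTP `Content-Type` header, which a validator or crawler will check first.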
-
Any ideas? I've got a report due on the 19th and the next crawl is due on the 17th. Would be great to remove any blockers before then if possible. Thanks!
-
Definitely. Cheers.
-
Worth a stab.
Probably worthwhile setting that forwarding up in the meantime anyway.
-
Thanks Tom, but no, it's set up as www.brammer.co.uk, so it's not that.
-
Well, if you've just put in http://brammer.co.uk it may well be falling over because the bare domain isn't forwarding to http://www.brammer.co.uk
That's my guess. You just need to forward the domain and it should all be sorted.
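For what it's worth, a quick way to reason about what that forwarding should produce: a bare-domain request should 301 to the same path on the www host. A small illustrative helper (the function name and canonical host default are my own; in practice you'd confirm the live behaviour with something like `curl -I http://brammer.co.uk`):

```python
from urllib.parse import urlsplit, urlunsplit

def forwarding_target(url, canonical_host="www.brammer.co.uk"):
    """Return the URL a request should 301-redirect to, or None if the
    URL already points at the canonical host."""
    parts = urlsplit(url)
    if parts.netloc == canonical_host:
        return None  # already canonical, no redirect needed
    # Rebuild the URL on the canonical host, defaulting the path to "/"
    return urlunsplit((parts.scheme, canonical_host, parts.path or "/",
                       parts.query, parts.fragment))

print(forwarding_target("http://brammer.co.uk"))       # http://www.brammer.co.uk/
print(forwarding_target("http://www.brammer.co.uk/"))  # None
```

The actual redirect would be set up server-side (e.g. in the host's DNS/web-server config), not in application code; this just shows the mapping a crawler expects to see.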