Only Crawling 1 page?
-
Hi Guys,
Any advice much appreciated on this!
Recently set up a new campaign on my dashboard with just 5 keywords. The domain is brammer.co.uk, and a quick Google site:brammer.co.uk search shows a good number of indexed pages.
However, the first SEOmoz tool crawl has only crawled 1 URL!
"Last Crawl Completed: Apr. 12th, 2011 Next Crawl Starts: Apr. 17th, 2011"
Any ideas what's stopping the tool crawling any more of the site?
Cheers in advance..
J
-
Agreed. I've passed this to the devs.
You've been most helpful today - Thanks for your time, very much appreciated.
J
-
Well, I don't think we can hold this against SEOmoz particularly: if something as basic as Xenu can't crawl it and W3C can't view its source code, I think it's somewhat fair to blame the site. Even if Google can see it, I'd imagine that if the matter were fixed you might see a boost regardless.
Google needs to be able to see sites no matter what state they're in, and they have the resources to pull that off. Smaller operations (everything else) have to make do with figuring things out the old-fashioned way, through the source code.
I think it's the encoding, simply because that's the first port of call on the page and it's broken. If it were anything further down, we would at least be seeing some page data cropping up.
The only other thing it could be (because I can't find a robots.txt) is something server side, and that's very difficult to establish without direct access.
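To illustrate why a missing declaration can stop a simple crawler dead, here's a minimal sketch (not SEOmoz's actual code; the regex and function name are my own assumptions) of how a parser might look for a declared charset and come up empty:

```python
import re

def declared_charset(head_html):
    """Return the charset declared in a page's <head>, or None if absent."""
    # Matches both <meta charset="..."> and the older
    # <meta http-equiv="Content-Type" content="text/html; charset=...">
    match = re.search(r'charset\s*=\s*["\']?([\w-]+)', head_html, re.IGNORECASE)
    return match.group(1).lower() if match else None

# A page with no declaration gives the crawler nothing to go on:
print(declared_charset("<head><title>Brammer</title></head>"))  # None
print(declared_charset('<meta charset="UTF-8">'))               # utf-8
```

Googlebot falls back to sniffing the bytes when the declaration is missing; a simpler crawler may just give up.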
-
Interesting. Do you think that could be it? Googlebot seems to find its way around it, though. I'd have thought that if G could do it then the SEOmoz tools would; otherwise I'd have expected an inaccessibility error or similar from Moz.
I'll get that changed and see if it makes a difference..
Thanks again for looking at it!
-
Xenu doesn't like it either, only indexes the one page.
Ran a W3C validation check and it flagged that there's no character encoding specified, which may well be the whole of the problem.
If you look at the source code that W3C displays, you can see it's essentially an empty document.
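For reference, the fix is a declaration like one of these near the top of the `<head>` (the HTML5 form is shown too, though a site of this vintage is more likely on HTML 4 — that's an assumption):

```html
<!-- HTML 4 / XHTML style -->
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<!-- HTML5 style -->
<meta charset="utf-8">
```

The server can also declare the charset in the Content-Type response header, which takes precedence over the meta tag.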
-
Any ideas? Got a report due on the 19th, and the next crawl is due on the 17th, so it would be great to remove any blockers before then if possible. Thanks!
-
Definitely. Cheers.
-
Worth a stab.
Probably worthwhile setting that forwarding up in the meantime anyway.
-
Thanks Tom, but no, it's set up as www.brammer.co.uk, so it's not that.
-
Well, if you've just put in http://brammer.co.uk, it may well be falling over because the bare domain isn't forwarding to http://www.brammer.co.uk
That's my guess. You just need to forward the domain and it should all be sorted.
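If the site runs on Apache (an assumption), a typical .htaccess sketch for that forwarding looks like:

```apache
RewriteEngine On
# 301-redirect the bare domain to the www version
RewriteCond %{HTTP_HOST} ^brammer\.co\.uk$ [NC]
RewriteRule ^(.*)$ http://www.brammer.co.uk/$1 [R=301,L]
```

A 301 (rather than 302) tells crawlers the move is permanent, so any link equity follows the redirect.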
Related Questions
-
Unable to get into top 20 even when pages are optimized and most crawl issues resolved
I have a few keyword phrases I've been trying to rank in the top 20 for (starting place). I have optimized for a few different phrases, ranging in keyword difficulty, but no matter what I do I can't seem to get in. In many cases, the exact same results show up for many different variations of the phrases I'd like to rank for. I've read about how google tries to match user intent and so if it decides those results are more relevant then it will always show them, but does that mean that no matter what I do I will always be behind them? The main question I have is: how should I proceed? Should I stop optimizing pages and focus on link acquisition? Or go through and make sure there isn't a single crawl issue? Or focus on optimizing for longer tail keyword phrases? It just feels like I've done so much of what the moz tools have recommended and I'm seeing very little movement over the past couple of months, if anything I see dips in performance after optimization. Thanks in advance!
Moz Pro | | Dynata_panel_marketing1 -
Since July 1, we've had a HUGE jump in errors on our weekly crawl. We don't think anything has changed on our website. Has MOZ changed something that would account for a large leap in duplicate content and duplicate title errors?
Our error report went from 1,900 to 18,000 in one swoop, starting right around the first of July. The errors are duplicate content and duplicate title, as if it does not see our 301 redirects. Any insights?
Moz Pro | | KristyFord0 -
Duplicate Page content
I found these URLs under Issue: Duplicate Page Content:
http://www.decoparty.fr/Products.asp?SubCatID=4612&CatID=139 | 1 | 0 | 10 | 1
http://www.decoparty.fr/Products.asp?SubCatID=4195&CatID=280 | 1 | 0 | 10 | 1
http://www.decoparty.fr/Catproducts.asp?CatID=124 | 28 | 0 | 12 | 1
Moz Pro | | partyrama0 -
Still Can't Crawl My Site
I've removed all blocks but two from our htaccess. They are for amazonaws.com, to block Amazon from crawling us. I did a fetch as Google on our robots.txt in our WM tools with success. The SEOmoz crawler hits our site and gets a 403. I've looked in our blocked request logs and Amazon is the only one in there. What is going on here?
Moz Pro | | martJ0 -
Where do these 404 error pages come from?
Hi, I've got a list of about 12 URLs in our 404 section here which I'm confused about. The URLs relate to Christmas, so they have not been active for 9 months. Can anyone tell me where the SEOmoz crawler found these URLs, as they are not linked to on the website? Thanks
Moz Pro | | SimmoSimmo0 -
I have corrected the problems in Crawl Diagnostics. When will it refresh / re-crawl my site?
I have corrected most of the problems shown in Crawl Diagnostics and changed the meta descriptions, titles, etc. When will SEOmoz recrawl those pages and show that it's correct now?
Moz Pro | | VarunBansal0 -
Crawl Diagnostics finding pages that don't exist. Will rel canonical help?
I have recently set up a campaign for www.completeoffice.co.uk. I'm the in-house developer there. When the crawl diagnostics completed, I went to check the results and, to my surprise, it had well over 100 missing or empty title tags. I then clicked through to see which pages, and nearly all the pages it says have missing or empty title tags DO NOT EXIST. This has really confused me and I need help figuring out how to solve it. Can anyone help? The attached image is a screenshot of some of the links it showed me in Crawl Diagnostics; nearly all of these do not exist. Will a rel canonical tag in the head section of the actual pages help? For example, the actual page that exists is www.completeoffice.co.uk/Products.php, whereas when crawled it actually showed www.completeoffice.co.uk/Products/Products.php. Will having the rel canonical tag in the header of the real Products.php solve this?
Moz Pro | | CompleteOffice0 -
How do I get the Page Authority of individual URLs in my exported (CSV) crawl reports?
I need to prioritize fixes somehow. It seems the best way to do this would be to filter my exported crawl report by the Page Authority of each URL with an error/issue. However, Page Authority doesn't seem to be included in the crawl report's CSV file. Am I missing something?
Moz Pro | | Twilio0