Only Crawling 1 page?
-
Hi Guys,
Any advice much appreciated on this!
Recently set up a new campaign on my dashboard with just 5 keywords. The domain is brammer.co.uk, and a quick Google site:brammer.co.uk search shows a good number of indexed pages.
However, the first SEOmoz tool crawl has only crawled 1 URL!
"Last Crawl Completed: Apr. 12th, 2011 Next Crawl Starts: Apr. 17th, 2011"
Any ideas what's stopping the tool from crawling any more of the site?
Cheers in advance..
J
-
Agreed. I've passed this to the devs.
You've been most helpful today - Thanks for your time, very much appreciated.
J
-
Well, I don't think we can hold this against SEOmoz particularly; if something as basic as Xenu can't crawl it and the W3C validator can't view its source code, I think it's somewhat fair to blame the site. Even if Google can see it, I'd imagine that if the matter is fixed you might see a boost regardless.
Google needs to be able to see sites no matter what state they're in, and it has the resources to make that happen. Smaller operations (everyone else) have to make do with figuring it out the old-fashioned way, through the source code.
I think it's the encoding, simply because that's the first port of call on the page and it's broken. If it were anything further down we would at least be seeing some page data cropping up.
The only other thing it could be (because I can't find a robots.txt) is something server side, and that's something it's very difficult to establish without direct access.
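If you do want to rule robots.txt out once one turns up, the standard library can parse the rules and answer "may this crawler fetch this URL?" directly. A minimal sketch, assuming Python; the rules and the "rogerbot" user-agent string here are illustrative examples, not brammer.co.uk's actual file:

```python
from urllib.robotparser import RobotFileParser

# Example robots.txt body -- illustrative only, not the real site's file.
rules = """
User-agent: *
Disallow: /private/
""".splitlines()

parser = RobotFileParser()
parser.parse(rules)

# Ask whether a hypothetical crawler user-agent may fetch each URL.
print(parser.can_fetch("rogerbot", "http://www.brammer.co.uk/"))           # True
print(parser.can_fetch("rogerbot", "http://www.brammer.co.uk/private/x"))  # False
```

If a live robots.txt blocked everything (`Disallow: /`), a well-behaved crawler would stop at exactly one URL, which is why it's worth eliminating first.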
-
Interesting. Do you think that could be it? Googlebot seems to find its way around it, though. I'd have thought that if Google could crawl it then the SEOmoz tools would too; otherwise I'd have expected an inaccessibility error or similar from Moz.
I'll get that changed and see if it makes a difference..
Thanks again for looking at it!
-
Xenu doesn't like it either; it only indexes the one page.
I ran a W3C validation check and it flagged that there is no character encoding specified, which may well be the whole problem.
If you look at the source code the W3C tool displays, you can see it's essentially an empty document.
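For what it's worth, the check the validator is making is roughly "is a charset declared in the first chunk of the document?", either as an HTML5 `<meta charset>` or the older `http-equiv` form. A rough sketch of that logic, assuming Python; the `declared_charset` helper is just for illustration, not how the W3C validator is actually implemented:

```python
import re

def declared_charset(html):
    """Return the charset declared near the top of an HTML document, or None.

    Looks for <meta charset="..."> (HTML5) or the older
    <meta http-equiv="Content-Type" content="text/html; charset=...">.
    """
    head = html[:1024]  # parsers typically only scan the opening bytes
    match = re.search(r'<meta\s+charset=["\']?([\w-]+)', head, re.I)
    if match:
        return match.group(1)
    match = re.search(r'charset=([\w-]+)', head, re.I)
    return match.group(1) if match else None

print(declared_charset('<html><head><meta charset="utf-8"></head>'))      # utf-8
print(declared_charset('<html><head><title>No encoding</title></head>'))  # None
```

If that comes back empty on a real page, adding a `<meta charset="utf-8">` as the first element inside `<head>` is the usual fix.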
-
Any ideas? I've got a report due on the 19th and the next crawl is due on the 17th, so it would be great to remove any blockers before then if possible. Thanks!
-
Definitely. Cheers.
-
Worth a stab.
Probably worthwhile setting that forwarding up in the meantime anyway.
-
Thanks Tom, but no, it's set up as www.brammer.co.uk, so it's not that.
-
Well, if you've just entered http://brammer.co.uk, it may well be falling over because the bare domain isn't forwarding to http://www.brammer.co.uk.
That's my guess. You just need to forward the domain and it should all be sorted.
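The quickest way to confirm forwarding is to request the bare domain without following redirects and look at the status code and Location header. A sketch of just the decision logic, assuming Python; the responses fed in below are made-up examples, not what brammer.co.uk actually returns:

```python
from urllib.parse import urlparse

def forwards_to_www(status, location, bare_host="brammer.co.uk"):
    """Given the response to http://<bare_host>/, decide whether it
    redirects (301/302) to the www. version of the same host."""
    if status not in (301, 302) or not location:
        return False
    return urlparse(location).hostname == "www." + bare_host

# Illustrative responses, not live data:
print(forwards_to_www(301, "http://www.brammer.co.uk/"))  # True: forwarding is set up
print(forwards_to_www(200, None))                         # False: bare domain serves directly
```

A permanent (301) redirect is the usual choice here, so crawlers consolidate the bare and www. versions into one site.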