The SEOmoz robot is not able to crawl my website.
-
Hi,
The SEOmoz robot crawls only two pages of my website. I contacted the SEOmoz team and they told me the problem is caused by JavaScript use. What is the solution to this? Should I contact my web design company and ask them to remove the JavaScript code?
-
Hi Maria,
Your menu is not optimized, but the real problem is not coming from there. Your home page http://www.medixschool.ca/ uses a meta refresh leading to http://www.medixschool.ca/Home/index.php instead of a 301 redirect.
You should remove this meta refresh and replace it with a 301 redirect.
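The difference is easy to verify from the raw HTTP response: a 301 arrives as a status code from the server, while a meta refresh hides inside an otherwise normal 200 page. A minimal sketch in Python of that distinction (the helper name and labels are made up for illustration; it assumes you fetched the page with redirects disabled):

```python
def classify_redirect(status_code: int, html: str) -> str:
    """Classify how a page forwards visitors, given its HTTP status
    code and response body fetched with redirect-following disabled."""
    if status_code in (301, 308):
        return "permanent redirect"   # server-level; passes link equity
    if status_code in (302, 303, 307):
        return "temporary redirect"
    if 'http-equiv="refresh"' in html.lower():
        return "meta refresh"         # invisible in the status line; weak for SEO
    return "no redirect"

# The situation described above: a 200 page whose only "redirect" is a meta tag.
print(classify_redirect(200, '<meta http-equiv="refresh" content="0;url=/Home/index.php">'))
# → meta refresh

# After the fix, the server itself answers with a 301.
print(classify_redirect(301, ""))
# → permanent redirect
```

You can feed it real responses with `curl -I` output or any HTTP client configured not to follow redirects.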
Best regards,
Guillaume Voyer.
-
Thanks Maximise,
I am in contact with my web development company and am awaiting their response. Meanwhile, it would be great if you could tell me more about the issue and how to proceed.
Thanks
-
Hi Maria,
I've just had a very brief look, and it seems the menu relies heavily on JavaScript (try disabling JavaScript and you will see that you can't reach most of your pages). Crawlers do not execute JavaScript, so this could well be the problem.
The ideal way to build this sort of menu is to show and hide items with CSS, not JavaScript. I can take a closer look and provide more information tomorrow if you need it.
-
Hi,
Thanks for the quick response. My website address is
http://www.medixschool.ca/Home/index.php
I checked the code and I can see the links.
-
You'll need to post the URL of your website for us to give you a definite answer. Try browsing to your website, then right-click and select 'View source'. If you can't see the links in the source code, then crawlers can't see them either.
Menus are sometimes built badly with JavaScript; you will probably need to ask your developers to change yours.
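The "view source" test can be automated: a crawler parses only the raw HTML, so a link that exists only after JavaScript runs is invisible to it. A small illustration using Python's standard-library HTML parser (the menu markup and URLs here are invented for the example, not taken from the site above):

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collects href values exactly as a crawler sees them in raw HTML."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# A menu written in plain HTML is visible to the parser...
html_menu = '<nav><a href="/courses.php">Courses</a></nav>'
# ...but the same link emitted by JavaScript is just opaque script text.
js_menu = '<script>document.write(\'<a href="/courses.php">Courses</a>\')</script>'

p1 = LinkCollector(); p1.feed(html_menu); p1.close()
p2 = LinkCollector(); p2.feed(js_menu); p2.close()
# p1.links == ["/courses.php"]; p2.links == []
```

If you run a parser like this over your own page source and your menu links don't appear, search engine crawlers are missing them too.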
Related Questions
-
SEO for Parallax Website
Hi, are there any implications of having a parallax website where the URL does not change as you scroll down the page? So basically the whole site is under the same URL, although when you click on the menu the URL does change? Cheers
Technical SEO | National-Homebuyers
-
Changes to website haven't been crawled in over a month
We redesigned our website at http://www.aptinting.com a few months ago. We were fully expecting the crawl frequency to be very low because we had redesigned the website from a format that had been very static, and that probably has something to do with the problem we're currently having. We made some important changes to our homepage about a month ago, and the cached version of that page is still from April 2nd. Yet, whenever we create new pages, they get indexed within days. We've made a point to create lots of new blog articles and case studies to send a message to Google that the website should be crawled at a greater rate. We've also created new links to the homepage through press releases, guest blog articles, and by posting to social media, hoping that all of these things would send a message to Google saying that the homepage should be "reevaluated". However, we seem to be stuck with the April 2nd version of the homepage, which is severely lacking. Any suggestions would be greatly appreciated. Thanks!
Technical SEO | Lemmons
-
While SEOMoz currently can tell us the number of linking c-blocks, can SEOMoz tell us what the specific c-blocks are?
I know it is important to have a diverse set of c-blocks, but I don't know how it is possible to have a diverse set if I can't find out what the c-blocks are in the first place. Also, is there a standard for domain linking c-blocks? For instance, I'm not sure if a certain amount is considered "average" or "above-average."
Technical SEO | Todd_Kendrick
-
Does Bing ignore robots.txt files?
Bonjour from "It's a miracle it's not raining" Wetherby, UK 🙂 OK, here goes... Why, despite a robots.txt file excluding indexing of the site http://lewispr.netconstruct-preview.co.uk/, is the site URL being indexed in Bing but not Google? Does Bing ignore robots.txt files, or is there something missing from http://lewispr.netconstruct-preview.co.uk/robots.txt that I need to add to stop Bing indexing a preview site, as illustrated below? http://i216.photobucket.com/albums/cc53/zymurgy_bucket/preview-bing-indexed.jpg Any insights welcome 🙂
Technical SEO | Nightwing
-
Redirecting one website page to another
Hi there, one of my old pages is currently ranking for a phrase that I want to rank for on a new page I created. My old page is ranking for 'Property Management Training' (it's a blog post dating from 2011). I have created a new main page on my site and would like it to rank for 'Property Management', as it's more relevant. What is the best way to keep my ranking but send people to my new page? A 301 redirect from the old page to the new one? Thanks,
Technical SEO | daracreative
-
Quick robots.txt check
We're working on an SEO update for http://www.gear-zone.co.uk at the moment, and I was wondering if someone could take a quick look at the new robots file (http://gearzone.affinitynewmedia.com/robots.txt) to make sure we haven't missed anything? Thanks
Technical SEO | neooptic
-
Trying to reduce pages crawled to within the 10K limit via robots.txt
Our site has far too many pages for our 10K-page PRO account, most of which are not SEO-worthy. In fact, only about 2,000 pages qualify for SEO value. Limitations of the store software only permit me to use robots.txt to sculpt the rogerbot site crawl. However, I am having trouble getting this to work. Our biggest problem is the 35K individual product pages and the related shopping cart links (at least another 35K); these aren't needed, as they duplicate the SEO-worthy content in the product category pages. The signature of a product page is that it is contained within a folder ending in -p. So I made the following addition to robots.txt:
User-agent: rogerbot
Disallow: /-p/
However, the latest crawl results show the 10K limit is still being exceeded. I went to Crawl Diagnostics and clicked on Export Latest Crawl to CSV. To my dismay, I saw the report was overflowing with product page links, e.g. www.aspenfasteners.com/3-Star-tm-Bulbing-Type-Blind-Rivets-Anodized-p/rv006-316x039354-coan.htm. The value for the column "Search Engine blocked by robots.txt" = FALSE; does this mean blocked for all search engines? Then it's correct. If it means blocked for rogerbot, then it shouldn't even be in the report, as the report seems to only contain 10K pages. Any thoughts or hints on trying to attain my goal would REALLY be appreciated; I've been trying for weeks now. Honestly, virtual beers for everyone! Carlo
Technical SEO | AspenFasteners
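One likely cause worth noting: under the original robots.txt rules, Disallow matching is prefix-based, so `Disallow: /-p/` only blocks paths that *start* with `/-p/`, not folders that merely *end* in `-p`. Some crawlers additionally support wildcard patterns such as `Disallow: /*-p/` (whether rogerbot does is an assumption here and worth confirming with Moz support). Python's standard `robotparser`, which implements only the prefix rules, demonstrates the gap (example.com and the paths are illustrative):

```python
from urllib import robotparser

# Parse the exact rules from the question above.
rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: rogerbot",
    "Disallow: /-p/",
])

# A path that literally starts with /-p/ is blocked...
print(rp.can_fetch("rogerbot", "http://example.com/-p/item.htm"))
# → False

# ...but a product folder that only ends in -p slips straight through,
# which matches the overflowing crawl report described above.
print(rp.can_fetch("rogerbot", "http://example.com/Widget-p/item.htm"))
# → True
```

A quick check like this against a sample of the URLs from the CSV export would confirm whether the pattern, rather than the crawler, is at fault.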