SeoMoz robot is not able to crawl my website.
-
Hi,
SeoMoz robot crawls only two pages of my website. I contacted the SEOmoz team and they told me that the problem is due to JavaScript use. What is the solution to this? Should I contact my web design company and ask them to remove the JavaScript code?
-
Hi Maria,
Your menu is not optimized, but the real problem is not coming from there. Your home page http://www.medixschool.ca/ has a meta refresh leading to http://www.medixschool.ca/Home/index.php instead of a 301 redirect.
You should remove this meta refresh and replace it with a 301 redirect.
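In practice, that means deleting the <meta http-equiv="refresh"> tag from the home page's HTML and adding a server-side rule in its place. A minimal sketch, assuming the site runs on Apache with mod_rewrite enabled (worth confirming with your host):

    # .htaccess in the site root
    RewriteEngine On
    # Permanent (301) redirect from the bare home page to the real index
    RewriteRule ^$ /Home/index.php [R=301,L]

A 301 passes the page's link equity to the target, which a meta refresh generally does not.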
Best regards,
Guillaume Voyer.
-
Thanks Maximise,
I am in contact with my web development company and am waiting for their response. Meanwhile, it would be great if you could tell me more about the issue and how to proceed.
Thanks
-
Hi Maria,
I've just had a very brief look and it seems that the menu is quite reliant on JavaScript (try disabling JavaScript and you will see that you can't access most of your pages). Crawlers do not execute JavaScript, so this could be a problem.
The ideal way to do this sort of thing is by showing and hiding menus with CSS, not JavaScript. I can have a better look and provide you with more info tomorrow if you need it.
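To illustrate, here is a minimal sketch of the idea with hypothetical markup and URLs: the links live in plain HTML, where crawlers can follow them, and CSS alone handles the showing and hiding.

    <ul class="nav">
      <li>
        <a href="/Home/index.php">Home</a>
        <ul class="submenu">
          <li><a href="/Home/programs.php">Programs</a></li>
          <li><a href="/Home/contact.php">Contact</a></li>
        </ul>
      </li>
    </ul>

    /* Submenu hidden by default, shown on hover - no JavaScript required */
    .nav .submenu { display: none; }
    .nav li:hover .submenu { display: block; }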
-
Hi,
Thanks for the quick response. My website address is
http://www.medixschool.ca/Home/index.php
I checked the code and I can see the links.
-
You'll need to post the URL of your website for us to give you a definite answer. Try browsing to your website, then right-click and select 'View source'. If you can't see the links in the source code, then crawlers can't see them either.
Menus are sometimes built badly with JavaScript; if that is the case here, you will probably need to ask your developers to change your menu.
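A quick way to check what a crawler actually sees is to fetch the raw HTML from the command line (a hypothetical check using your own URL, assuming curl is available); no JavaScript runs here, so any link missing from the output is invisible to crawlers:

    # List the anchor tags present in the raw HTML
    curl -s http://www.example.com/ | grep -i "<a "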
Related Questions
-
Setting Up A Website For Redirects
I've got an old defunct domain with a lot of backlinks to individual pages. I'd like to use these backlinks for link juice by redirecting them to individual pages on the new domain (both sites belong to the same company). What is the best way to set this up? I presume I need some kind of hosting & site, even if it's just a default WordPress install, which I can then use to set up the redirects? Would it be best done using an .htaccess file for 301 redirects, or some other way?
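For what it's worth, a minimal sketch of the .htaccess approach with hypothetical paths (assumes Apache hosting on the old domain, which does need to stay registered and pointed at some minimal hosting for the rules to run):

    # .htaccess on the old, defunct domain: one permanent redirect per backlinked page
    Redirect 301 /old-widgets-page.html http://www.newdomain.com/widgets/
    Redirect 301 /old-about-page.html http://www.newdomain.com/about/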
Technical SEO | abisti20
-
I can't crawl the archive of this website with Screaming Frog
Hi, I'm trying to crawl this website (http://zeri.info/) with Screaming Frog, but because of some technical issue with their site (I can't find what is causing it) I'm only able to crawl the first page of each category (e.g. http://zeri.info/sport/). It will then crawl each page of their archive (hundreds of thousands of pages), but it won't crawl the links inside these pages. Thanks a lot!
Technical SEO | gjergjshala0
-
Robots file set up
Technical SEO | mcwork
The robots file looks like it has been set up in a very messy way. I understand the # will comment out a line; does this mean the sitemap would not be picked up? Disallow: /js/ — should this be allowed instead, like /*.js$? Disallow: /media/wysiwyg/ seems to be causing alerts in Webmaster Tools, as it cannot access the images within. Can anyone help me clean this up please? Reflowed to one directive per line, the file reads:

    #Sitemap: https://examplesite.com/sitemap.xml

    # Crawlers Setup
    User-agent: *
    Crawl-delay: 10

    # Allowable Index
    # Mind that Allow is not an official standard
    Allow: /index.php/blog/
    Allow: /catalog/seo_sitemap/category/
    Allow: /catalogsearch/result/
    Allow: /media/catalog/

    # Directories
    Disallow: /404/
    Disallow: /app/
    Disallow: /cgi-bin/
    Disallow: /downloader/
    Disallow: /errors/
    Disallow: /includes/
    Disallow: /js/
    Disallow: /lib/
    Disallow: /magento/
    Disallow: /media/
    Disallow: /media/captcha/
    Disallow: /media/catalog/
    #Disallow: /media/css/
    #Disallow: /media/css_secure/
    Disallow: /media/customer/
    Disallow: /media/dhl/
    Disallow: /media/downloadable/
    Disallow: /media/import/
    #Disallow: /media/js/
    Disallow: /media/pdf/
    Disallow: /media/sales/
    Disallow: /media/tmp/
    Disallow: /media/wysiwyg/
    Disallow: /media/xmlconnect/
    Disallow: /pkginfo/
    Disallow: /report/
    Disallow: /scripts/
    Disallow: /shell/
    #Disallow: /skin/
    Disallow: /stats/
    Disallow: /var/

    # Paths (clean URLs)
    Disallow: /index.php/
    Disallow: /catalog/product_compare/
    Disallow: /catalog/category/view/
    Disallow: /catalog/product/view/
    Disallow: /catalog/product/gallery/
    Disallow: /catalog/product/upload/
    Disallow: /catalogsearch/
    Disallow: /checkout/
    Disallow: /control/
    Disallow: /contacts/
    Disallow: /customer/
    Disallow: /customize/
    Disallow: /newsletter/
    Disallow: /poll/
    Disallow: /review/
    Disallow: /sendfriend/
    Disallow: /tag/
    Disallow: /wishlist/

    # Files
    Disallow: /cron.php
    Disallow: /cron.sh
    Disallow: /error_log
    Disallow: /install.php
    Disallow: /LICENSE.html
    Disallow: /LICENSE.txt
    Disallow: /LICENSE_AFL.txt
    Disallow: /STATUS.txt
    Disallow: /get.php # Magento 1.5+

    # Paths (no clean URLs)
    #Disallow: /*.js$
    #Disallow: /*.css$
    Disallow: /*.php$
    Disallow: /*?SID=
    Disallow: /rss*
    Disallow: /*PHPSESSID
    Disallow: /:
    Disallow: /:*

    User-agent: Fatbot
    Disallow: /

    User-agent: TwengaBot-2.0
    Disallow: /
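A minimal sketch of the two fixes being asked about, assuming the sitemap should be live and that the crawlers in question honor Allow and wildcards (Google does): remove the leading # from the Sitemap line, and add Allow rules to the existing User-agent: * group for the blocked script and image paths.

    Sitemap: https://examplesite.com/sitemap.xml

    User-agent: *
    # Let crawlers fetch JavaScript despite Disallow: /js/
    Allow: /*.js$
    # Unblock the images that Webmaster Tools is complaining about
    Allow: /media/wysiwyg/

-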
Pagination/Crawl Errors
Hi, I've only just joined SEOmoz, and after they crawled my site they came up with 3600 crawl errors, mostly duplicate content and duplicate URLs. After researching this it soon became clear it was due to on-page pagination, and after speaking with Abe from SEOmoz he advised me to take action by getting our developers to implement rel="next" & rel="prev". Soon after our developers implemented this code (I have no understanding of this whatsoever), 90% of the keywords I had been ranking for in the top 10 dropped out of the top 50! Can anyone explain this or help me with this? Thanks Andy
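For reference, rel="next"/rel="prev" annotations of the kind described live in the <head> of each paginated page; a hypothetical page 2 of a series would carry something like:

    <link rel="prev" href="http://www.example.com/category?page=1">
    <link rel="next" href="http://www.example.com/category?page=3">

On their own these hints should not cause ranking drops, so it is worth checking whether the implementation introduced something else alongside them (a stray noindex or broken canonical tag, for example).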
Technical SEO | beck3980
-
Website redesign launch
Hello everyone, I am in the process of having my consulting website redesigned and have a question about how this may impact SEO. I will be using the same URL as before, simply replacing the old website with a new one. Obviously the URL structure will change slightly, since I am changing navigation names, and page titles will also change. Do I need to do anything special to ensure that all of the pages from the old website are redirected to the new website? For example, should I do a page-level redirect for each page that remains the same, so that the old "services" page is pointed to the new "services" page? Or can I simply do a redirect at the index page level? Thank you in advance for any advice! Best, Linda
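A minimal sketch of the page-level approach with hypothetical paths (assumes Apache; the same mapping can be expressed in any server's config): each old URL is sent to its closest new equivalent rather than everything landing on the home page.

    # Page-level 301s: old URL -> renamed successor
    Redirect 301 /services.html /our-services/
    Redirect 301 /about.html /about-us/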
Technical SEO | LindaSchumacher0
-
How best to optimise a website for more than one location?
I have a client who is an acupuncturist operating clinics in both Chester and Knutsford in Cheshire. The site performs well for Chester-based terms such as "Chester acupuncture"; Chester is the primary location the client wishes to focus efforts on, but he would also like to improve rankings for the Knutsford clinic and area. I have set up local Places pages for each clinic and registered each on different local directories. Both clinic addresses appear on every page of the website, and the contact page has a map to each. Most of the on-page SEO elements, such as page titles, descriptions and on-page keywords, mainly focus on the term "Chester" over "Knutsford". Is it advisable to target both locations in these page elements, or will local search take care of this and would doing so reduce/dilute overall rankings for the Chester clinic? I haven't set up a separate page for each clinic location; this might help improve rankings for both locations, but from a user point of view it would just duplicate the same content for a different location and would also create duplicate content issues. Any advice/experience on this matter would be greatly appreciated.
Technical SEO | Bristolweb0
-
Not sure I see the real value of SeoMoz!
Still one week left in my trial. I did not get the result I wanted on my rankings; I know it takes time and patience to get there. Even though I consider myself tech savvy, I have the impression that even with the best monitoring tools you still have to spend too much time to get better rankings. I would prefer to give the contract to someone else rather than spend time trying to figure out what is going on. I am in some sort of a catch-22. I need to increase my ranking, and I know my competitors have more backlinks than I can possibly reach (we have 55 and they have around 78,000). I am wondering how they got all these backlinks in just 2 years; we've been in business much longer. I can confirm that my on-page SEO is very good; it's really with my backlinks that I have problems. I can see already some of you saying that I have to create rich content, but for a B2B company it's not as easy to generate the proper content and get the backlinks needed. Is there a very quick way to increase backlinks?
Technical SEO | processia0
-
Trying to reduce pages crawled to within 10K limit via robots.txt
Our site has far too many pages for our 10K-page PRO account, most of which are not SEO-worthy. In fact, only about 2000 pages qualify for SEO value. Limitations of the store software only permit me to use robots.txt to sculpt the rogerbot site crawl, and I am having trouble getting this to work. Our biggest problem is the 35K individual product pages and the related shopping cart links (at least another 35K); these aren't needed, as they duplicate the SEO-worthy content in the product category pages. The signature of a product page is that it is contained within a folder ending in -p, so I made the following addition to robots.txt:

    User-agent: rogerbot
    Disallow: /*-p/

However, the latest crawl results show the 10K limit is still being exceeded. I went to Crawl Diagnostics and clicked on Export Latest Crawl to CSV, and to my dismay I saw the report was overflowing with product page links, e.g. www.aspenfasteners.com/3-Star-tm-Bulbing-Type-Blind-Rivets-Anodized-p/rv006-316x039354-coan.htm. The value for the column "Search Engine blocked by robots.txt" = FALSE. Does this mean blocked for all search engines? Then it's correct. If it means blocked for rogerbot, then it shouldn't even be in the report, as the report seems to only contain 10K pages. Any thoughts or hints on trying to attain my goal would REALLY be appreciated; I've been trying for weeks now. Honestly, virtual beers for everyone! Carlo
Technical SEO | AspenFasteners