Robots.txt - blocking JavaScript and CSS, best practice for Magento
-
Hi Mozzers,
I'm looking for some feedback regarding best practices for setting up Robots.txt file in Magento.
I'm concerned we are blocking bots from crawling essential information for page rank.
My main concern is with blocking JavaScript and CSS: are you supposed to block JavaScript and CSS or not?
You can view our robots.txt file here
Thanks,
Blake
-
As Joost said, you should not block access to files that help in the reading / rendering of the page.
Looking at your robots.txt file, I would look at the following two exclusions. Do they block anything else that runs on a live page that Google should be seeing?

Disallow: /includes/
Disallow: /scripts/

-Andy
-
Best practice is to no longer block access to JS / CSS, so that Google can properly understand the website and determine mobile-friendliness.
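To make that concrete, here is a hedged sketch of the shape such a robots.txt could take: keep the blocked paths to genuinely non-renderable areas and leave asset directories unblocked. The directory names are examples only, not a recommendation for this specific store.

```
User-agent: *
# Example paths only -- adjust to your store's layout.
Disallow: /checkout/
Disallow: /customer/
# Do NOT disallow asset directories such as /js/, /skin/, /media/ --
# Googlebot needs them to render the page and assess mobile-friendliness.
```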
Related Questions
-
Will disallowing URLs in the robots.txt file stop those URLs being indexed by Google?
I found a lot of duplicate title tags showing in Google Webmaster Tools. When I visited the URLs that these duplicates belonged to, I found that they were just images from a gallery that we didn't particularly want Google to index. There is no benefit to the end user in these image pages being indexed in Google.

Our developer has told us that these URLs are created by a module and are not "real" pages in the CMS. They would like to add the following to our robots.txt file:

Disallow: /catalog/product/gallery/

QUESTION: If these pages are already indexed by Google, will this adjustment to the robots.txt file help to remove the pages from the index? We don't want these pages to be found.
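For reference, the proposed rule would sit in robots.txt like this. Note that Disallow only stops crawling; URLs that are already indexed can remain in the index until removed another way (e.g. a noindex meta tag or a removal request):

```
User-agent: *
Disallow: /catalog/product/gallery/
```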
Intermediate & Advanced SEO | andyheath0
-
Magento E-Commerce Crawl Issues
Hi Guys,

First post here! I am responsible for a Magento e-commerce store and there are a few crawl issues and potential solutions that I am working on, and I would like to get some advice to see if you agree with my approach.

Old Product Pages - The majority of our stock is seasonal, therefore when a product sells out, it is not usually going to come back into stock. However the approach for Magento websites is to leave the page present but take the product off the category pages, so users can still find these pages from the search engines. They are orphaned pages, as they are not linked to from elsewhere, and it is not totally clear the products are out of stock (the page just doesn't show the size pulldown or 'Add to Basket' button). There is no process in place to 301 redirect these pages either. My solution to this problem is to:

1. Change the design of these pages so a clear message is shown to users that the product is out of stock, and suggest related products to reduce bounce rates. I was also planning on having a link from an 'Out of Stock' page on the site to these products so they are not orphaned, but is this required do you think?

2. When I know for sure (e.g. after over a month) that the product will not be returned (e.g. refund) by the user, 301 redirect the product pages back to the category page. How do other users 301 redirect their pages in Magento? I would like an easy to use system.

Crawl Errors Identified in Google Webmaster Tools - It seems in the last 2 weeks there has been a sharp increase in the number of soft 404 pages identified on the website. When I inspect these pages they seem to be categories and sub-categories that no longer have any products in them. However, I don't want to delete these pages as new products might come in and go onto these category pages, so how should I approach this? A suggestion I have thought of is to put related products onto these pages. Any better ideas?

Thanks,
Graeme
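On point 2 above, if no Magento extension is in place, 301s like this are often handled at the server level instead; a hedged .htaccess sketch, with example paths invented purely for illustration:

```apache
# Hypothetical URLs -- substitute the real product and category paths.
# Permanently redirect a discontinued product page to its parent category.
Redirect 301 /summer-dress-2012.html /dresses.html
```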
Intermediate & Advanced SEO | graeme19940
-
Meta canonical or simply robots.txt other domain names with same content?
Hi, I'm working with a new client who has a main product website. This client has representatives who also sell the same products, but all those reps have a copy of the same website on another domain name.

The best thing would probably be to shut down the other (duplicate) websites and 301 redirect them to the main one, but that's impossible in the mind of the client.

First choice: implement a canonical meta tag for all the URLs on all the other domain names.
Second choice: robots.txt with a disallow for all the other websites.
Third choice: I'm really open to other suggestions 😉

Thank you very much! 🙂
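The first choice above (a cross-domain canonical) would put a link element in the head of every page on each rep's domain, pointing at the matching URL on the main site; the domain names here are placeholders:

```html
<!-- Placed on http://rep-site.example/product-a.html (hypothetical URL) -->
<link rel="canonical" href="http://main-site.example/product-a.html" />
```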
Intermediate & Advanced SEO | Louis-Philippe_Dea0
-
What is the best approach to a keyword that has multiple abbreviations?
I have a site for which the primary keyword has multiple abbreviations. The site is for the computer game "Football Manager"; each iteration is often referred to as FM2012, FM12 or Football Manager 2012, and the first two can also be used with or without spaces in between. While this is only 3 keywords to target, it means that every key phrase, such as "FM2012 Tactics", must also be targeted in 3 ways.

Is there a recommended approach to make sure that all 3 are targeted? At present I use the full title "Football Manager" in the title and try to use the shorter abbreviations in the page; I also make sure the title tags always have an alternative, e.g. FM2012 Tactics.

Two specific questions as well as general tips:
Does the <abbr> HTML tag help very much?
Are results likely to differ much for searches for "FM 2012" and "FM2012", i.e. without the space?
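For reference, the <abbr> element in question marks up an abbreviation and supplies its expanded form via the title attribute; a minimal sketch:

```html
<!-- The title attribute carries the expansion of the abbreviation. -->
<p>New <abbr title="Football Manager 2012">FM2012</abbr> tactics are available.</p>
```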
Intermediate & Advanced SEO | freezedriedmedia1
-
Not using a robot command meta tag
Hi SEOmoz peeps. I was doing some research on robots meta commands and found a couple of major sites that are not using them. If you check out the code for these:

http://www.amazon.com
http://www.zappos.com
http://www.zappos.com/product/7787787/color/92100
http://www.altrec.com/

you will not find a meta robots command line. Of course you need the line for any noindex, nofollow, or noarchive pages. However, for pages you want crawled and indexed, is there any benefit to not having the line at all? Thanks!
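For context, when the meta robots line is absent, crawlers fall back to the default of index, follow, so the tag only changes behavior when it restricts something:

```html
<!-- Equivalent to omitting the tag entirely: -->
<meta name="robots" content="index, follow" />
<!-- The tag is only needed when overriding the default: -->
<meta name="robots" content="noindex, nofollow" />
```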
Intermediate & Advanced SEO | STPseo0
-
Category Pages - Canonical, Robots.txt, Changing Page Attributes
A site has category pages such as: www.domain.com/category.html, www.domain.com/category-page2.html, etc. This is producing duplicate meta descriptions (page titles have page numbers in them, so they are not duplicates). Below are the options that we've been thinking about:

a. Keep meta descriptions the same except for adding a page number (this would keep internal juice flowing to products that are listed on subsequent pages). All pages have unique product listings.
b. Use canonical tags on subsequent pages and point them back to the main category page.
c. Robots.txt on subsequent pages.
d. ?

Options b and c will orphan or french fry some of our product pages. Any help on this would be much appreciated. Thank you.
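A further option for a paginated series like the above is rel="next" / rel="prev" link elements, which signal that the pages belong to one sequence without canonicalizing the subsequent pages away; the URLs below just follow the pattern in the question:

```html
<!-- On www.domain.com/category-page2.html -->
<link rel="prev" href="http://www.domain.com/category.html" />
<link rel="next" href="http://www.domain.com/category-page3.html" />
```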
Intermediate & Advanced SEO | Troyville0
-
Google, Links and Javascript
So today I was taking a look at the http://www.seomoz.org/top500 page and saw that the AddThis page is currently at position 19. I think the main reason for that is that their plugin creates, through JavaScript, linkbacks to the page where their share buttons reside. So any page with AddThis installed would easily have 4/5 linkbacks to their site, creating that huge amount of linkbacks they have.

OK, that pretty much shows that Google doesn't care if the link is created in the HTML (on the backend) or through JavaScript (frontend). But here's the catch. Suppose someone creates a free plugin for WordPress/Drupal or any other huge CMS platform out there with a feature that links back to the page of the creator of the plugin (that's pretty common, I know), but instead of inserting the link in the plugin source code they put it somewhere else, which is then loaded with JavaScript code (exactly how AddThis works). This would allow the owner of the plugin to change the link shown at any time he wants. The main reason for that would be, I don't know, a URL address update for his blog or business or something. However, that could easily be used to link to whatever the hell the owner of the plugin wants to.

What are your thoughts about this? I think this could easily be classified as white or black hat depending on what the owners do. However, would Google think the same way about it?
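The pattern described above can be sketched in a few lines; everything here is illustrative, not the widget's real code:

```javascript
// Hypothetical sketch (not AddThis's actual code) of how a widget script
// can inject a backlink whose target is controlled by the plugin owner.
function buildAttributionLink(href, text) {
  // In the real-world scenario the href is fetched from the owner's server,
  // so the owner can repoint the link at any time without a plugin update.
  return '<a href="' + href + '">' + text + '</a>';
}

// A browser widget would then do something like:
//   container.innerHTML += buildAttributionLink(ownerHref, ownerText);
console.log(buildAttributionLink('https://example.com/', 'Powered by Example'));
```

Because the href comes from a remote response rather than from the shipped plugin code, nothing on the installed sites has to change when the owner swaps the link target.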
Intermediate & Advanced SEO | bemcapaz0
-
What is the best method for segmenting HTML sitemaps?
Sitemaps create a Table of Contents for web crawlers and users alike. Understanding how PageRank is passed, HTML sitemaps play a critical role in how Googlebot and other crawlers spider and catalog content. I get asked this question a lot and, in most cases, it's easy to categorize sitemaps and create 2-3 category-based maps that can be linked to from the global footer. However, what do you do when a client has 40 categories with 200+ pages of content under each category? How do you segment your HTML sitemap in a case like this?
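For the 40-category case above, one common segmentation is a hub sitemap page that links out to one sitemap page per category, each of which then lists that category's content pages; a minimal sketch with placeholder URLs:

```html
<!-- sitemap.html: hub page linking to per-category sitemap pages -->
<ul>
  <li><a href="/sitemap/category-1.html">Category 1 sitemap</a></li>
  <li><a href="/sitemap/category-2.html">Category 2 sitemap</a></li>
  <!-- ...one entry per category... -->
</ul>
```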
Intermediate & Advanced SEO | stevewiideman0