Disallow: /404/ - Best Practice?
-
Hello Moz Community,
My developer has added this to my robots.txt file: Disallow: /404/
Is this considered good practice in the world of SEO? Would you do it with your clients?
I feel he has great development knowledge but isn't too well versed in SEO.
Thank you in advance,
Nico.
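(For reference, a minimal sketch of what that directive actually does, using Python's standard urllib.robotparser and a placeholder domain: it only stops compliant crawlers from fetching URLs under /404/; it says nothing about how the server answers requests for pages that don't exist.)

```python
from urllib import robotparser

# Hypothetical robots.txt containing the rule the developer added.
rules = """
User-agent: *
Disallow: /404/
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# Only URLs under /404/ are blocked from crawling; the rule says nothing
# about how the server responds to missing pages elsewhere on the site.
print(rp.can_fetch("*", "https://example.com/404/"))          # False
print(rp.can_fetch("*", "https://example.com/missing-page"))  # True
```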
-
Thank you, Lesley.
This really helps a lot. I appreciate it very much. This is my site by the way: http://devilswink.com/
Thanks.
Nico.
-
In my opinion this comes down to personal preference; honestly, it is neither here nor there. The chances of your 404 page showing up in the SERPs are pretty low, and it doesn't offer any useful content anyway, so disallowing it isn't really any loss. One reason it might be disallowed is on an e-commerce site that rotates products: when a product is deleted, the developer 301-redirects it to the 404 page, and with robots.txt telling search engines to stay away from that 404 page, the old product page eventually drops out of the index. If that is the case here, I would rethink the strategy. You'll notice that Amazon and other big sites leave the page in the index even when the product is no longer for sale. The thinking is that traffic is traffic: the hardest part of the whole equation is getting someone to your site, so if the page is ranking, why throw it away?
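Here's a rough sketch of the difference (Flask and the product data are just assumptions for illustration): if the error page is served with a real 404 status, the URL falls out of the index on its own and the Disallow rule is largely redundant; the pattern to rethink is 301-redirecting dead products to a /404/ URL that answers with a 200.

```python
# Minimal sketch (Flask assumed, product data hypothetical) contrasting a
# real 404 response with the 301-to-/404/-page pattern described above.
from flask import Flask, abort

app = Flask(__name__)

PRODUCTS = {"dog-leash": "Dog Leash"}  # placeholder catalogue

@app.errorhandler(404)
def not_found(error):
    # Served with an actual 404 status code, so the URL drops out of the
    # index by itself; no Disallow rule is needed for that to happen.
    return "Sorry, that page doesn't exist.", 404

@app.route("/products/<slug>")
def product(slug):
    if slug not in PRODUCTS:
        abort(404)  # cleaner than redirecting the visitor to a /404/ page
    return PRODUCTS[slug]
```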
The only time I can think of where I would specifically allow it and optimize it is if you have a cool 404 page. Some companies actually spend a bit of time on their 404 pages and get a little pop of viral traffic from social sharing sites like Reddit. If you do have one that is funny or unique, I would allow it and actually optimize it for a term like "funny 404 page" or something like that.
Related Questions
-
Good to use disallow or noindex for these?
Hello everyone, I am reaching out to seek your advice on a few technical SEO aspects of my website. Below are the specific areas I would like to discuss:
a. Double and triple filter pages: I have identified certain URLs on my website that have a canonical tag pointing to the main /quick-ship page, for example:
https://www.interiorsecrets.com.au/collections/lounge-chairs/quick-ship+black
https://www.interiorsecrets.com.au/collections/lounge-chairs/quick-ship+black+fabric
To optimize my crawl budget, would it be advisable to disallow or noindex these pages? My understanding is that this would stop search engines wasting resources on crawling and indexing duplicate or filtered content.
b. Page URLs with parameters: Some of my page URLs include parameters such as ?variant and ?limit. These URLs already have canonical tags in place, but is it still recommended to disallow or noindex them to further conserve crawl budget?
Additionally, I would welcome any suggestions regarding internal linking strategies tailored to my site's structure and content. Thank you in advance for your time and expertise. Cheers!
Technical SEO | williamhuynh
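A rough sketch of the noindex route for the parameterized URLs (Flask is assumed here purely for illustration; only the ?variant and ?limit parameter names come from the question): sending an X-Robots-Tag header keeps the pages crawlable, so search engines can still see their canonical tags, whereas a robots.txt Disallow would hide the canonical hint along with the duplicate content.

```python
# Sketch: mark parameterized/filtered URLs as noindex while leaving them
# crawlable, so the canonical tags on those pages remain visible.
from flask import Flask, request

app = Flask(__name__)

NOINDEX_PARAMS = {"variant", "limit"}  # parameter names from the question

@app.after_request
def mark_filtered_urls(response):
    if NOINDEX_PARAMS & set(request.args):
        response.headers["X-Robots-Tag"] = "noindex, follow"
    return response
```
-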
URL Structure On Site - Currently it's domain/product-name, NOT domain/category/product-name. Is this bad?
I have an eCommerce site and the site structure is domain/product-name rather than domain/product-category/product-name. Do you think this will have a negative impact SEO-wise? I have seen that some of my individual product pages get better rankings than my categories.
Technical SEO | the-gate-films
-
Missing meta description on 404 page
Hi, My 404 page doesn't have a meta description. Is that an error? When I run a report, SEOmoz flags it as a problem. Thanks!
Technical SEO | JohnHuynh
-
/~username
Hello, The utility on this site that crawls your site and highlights potential problems reported an issue with /~username access, flagging it as duplicate content (i.e. mydomain.com/file.htm is the same as mydomain.com/~username/file.htm). So I went to my server hosts and they disabled it using mod_userdir, but GWT now gives loads of 404 errors. Have I gone about this the wrong way? Was it not really a problem in the first place, or have I fixed something that wasn't broken and made things worse? Thanks, Ian
Technical SEO | jwdl
-
How to find and fix 404 and broken links?
Hi, My campaign is showing me many 404 problems, and other tools are also showing me broken links, but the links they show me do work, and I can't seem to find the broken links or the cause of the 404s. Can you help?
Technical SEO | Joseph-Green-SEO
-
Mass 404 Checker?
Hi all, I'm currently looking after a collection of old newspaper sites that have been through various redevelopments over the years. The problem is there are so many 404 pages all over the place, and the sites are bleeding link juice everywhere, so I'm looking for a tool that can check a lot of URLs at once. For example, from an OSE report I've done a random sampling of the target URLs and some of them 404 (eek!), but there are too many to check manually to know which ones are still live and which have 404'd or are redirecting. Is there a tool anyone uses for this, or a way one of the SEOmoz tools can do this? I've asked a few people how to check this and they've suggested Xenu, but Xenu won't work as it only checks current site navigation. Thanks in advance!
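A quick sketch of the kind of bulk check being asked about (Python with the requests library is assumed; the URL list is a placeholder for the URLs from an OSE export), hitting each URL and reporting its status:

```python
# Rough sketch of a bulk status checker: feed it the target URLs from an
# OSE export (placeholder list below) and it reports 404s and redirects.
from concurrent.futures import ThreadPoolExecutor

import requests

URLS = [
    "http://example.com/old-article-1",
    "http://example.com/old-article-2",
]

def check(url):
    try:
        # allow_redirects=False so 301/302 hops are reported, not followed
        response = requests.head(url, allow_redirects=False, timeout=10)
        return url, response.status_code
    except requests.RequestException as exc:
        return url, str(exc)

with ThreadPoolExecutor(max_workers=10) as pool:
    for url, status in pool.map(check, URLS):
        print(status, url)
```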
Technical SEO | thisisOllie
-
ECommerce: Best Practice for expired product pages
I'm optimizing a pet supplies site (http://www.qualipet.ch/) and have a question about best practice for expired product pages. We have thousands of products, and hundreds of our offers only exist for a few months. Currently, when a product is no longer available, the site just returns a 404. Now I'm wondering what a better solution could be:
1. When a product disappears, a 301 redirect is set up to the category page it was in (e.g. a leash would redirect to dog accessories).
2. After a product disappears, a customized 404 page appears, listing similar products (but the server still returns a 404).
I prefer solution 1, but I'm afraid that having hundreds of new redirects each month might look strange. Then again, returning lots of 404s to search engines is also not the best option. Do you know the best practice for large ecommerce sites that have hundreds or even thousands of products appearing and disappearing on a frequent basis? What should be done with those obsolete URLs?
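For what it's worth, a rough sketch of option 2 (Flask and the product/similar-product lookups are placeholders, not the site's actual setup): the visitor sees suggestions, but the server still answers with a 404 or 410 status, so search engines treat the URL as gone.

```python
# Sketch of option 2: a helpful "gone" page that still returns a 410/404
# status and lists similar products for the visitor (all data placeholder).
from flask import Flask

app = Flask(__name__)

LIVE_PRODUCTS = {"dog-leash": "Dog Leash"}
SIMILAR = {"cat-tree-xl": ["cat-tree-small", "scratching-post"]}

@app.route("/products/<slug>")
def product(slug):
    if slug in LIVE_PRODUCTS:
        return LIVE_PRODUCTS[slug]
    suggestions = SIMILAR.get(slug, [])
    body = "This product is no longer available. Similar items: " + ", ".join(suggestions)
    # 410 signals the product is permanently gone; a plain 404 works too.
    return body, 410
```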
Technical SEO | zeepartner
-
OK to block /js/ folder using robots.txt?
I know Matt Cutts suggests we allow bots to crawl CSS and JavaScript folders (http://www.youtube.com/watch?v=PNEipHjsEPU), but what if you have lots and lots of JS and you don't want to waste precious crawl resources? Also, as we update and improve the JavaScript on our site, we iterate the version number ?v=1.1... 1.2... 1.3... etc., and the legacy versions show up in Google Webmaster Tools as 404s. For example:
http://www.discoverafrica.com/js/global_functions.js?v=1.1
http://www.discoverafrica.com/js/jquery.cookie.js?v=1.1
http://www.discoverafrica.com/js/global.js?v=1.2
http://www.discoverafrica.com/js/jquery.validate.min.js?v=1.1
http://www.discoverafrica.com/js/json2.js?v=1.1
Wouldn't it just be easier to prevent Googlebot from crawling the js folder altogether? Isn't that what robots.txt was made for? Just to be clear: we are NOT doing any sneaky redirects or other dodgy JavaScript hacks. We're just trying to power our content and UX elegantly with JavaScript. What do you guys say: obey Matt, or run the JavaScript gauntlet?
Technical SEO | AndreVanKets
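A small sketch of what a blanket rule would actually do (standard-library urllib.robotparser; the rule itself is hypothetical): a Disallow on /js/ blocks every versioned URL, current and legacy alike, which also keeps Googlebot from fetching the scripts it may need to render the pages.

```python
# Rough sketch of how a hypothetical "Disallow: /js/" rule would apply to
# the URLs listed above: every version, current or legacy, becomes
# uncrawlable, not just the stale ones showing up as 404s.
from urllib import robotparser

rules = """
User-agent: *
Disallow: /js/
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

for url in [
    "http://www.discoverafrica.com/js/global.js?v=1.2",             # current
    "http://www.discoverafrica.com/js/global_functions.js?v=1.1",   # legacy 404
]:
    print(url, rp.can_fetch("Googlebot", url))  # both print False
```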