Do you add your 404 page to robots.txt, or just add a noindex tag?
-
Hi,
I've gotten different opinions on this, so I wanted to double-check what your take is.
We've got a /404.html page, and I was wondering: would you add this page to robots.txt so it wouldn't be indexed, or would you just add a noindex tag? What would be the best approach?
Thanks!
-
Hello Rubix,
Saijo gave you some great advice, but I'm concerned about the fact that you have that page in the first place, and that it produces those URL parameters. It suggests to me that instead of showing a 404 error on the contact-office.aspx page (assuming that page doesn't exist at that URL), you are redirecting the user who tries to access that URL to the /404.html page (e.g. /404.html?aspxerrorpath=/contact-office.aspx).
Typically you want the 404 HTTP status code to be returned on the URL the user is unsuccessfully trying to access. In this case, instead of redirecting them to your "404 page URL", you would want to show your customized 404 message on www.yourdomain.com/contact-office.aspx itself (and ensure that URL returns a 404 status code; an HTTP status checker will confirm this).
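If this is the default ASP.NET customErrors behavior (the aspxerrorpath parameter suggests it is), switching the redirect mode in web.config keeps the visitor on the original URL. This is only a sketch: the ~/404.aspx path is an assumption, and your actual error page may be named differently.

<!-- Sketch of a web.config change, assuming ASP.NET Web Forms.
     ResponseRewrite serves the error page on the requested URL
     instead of redirecting to /404.html?aspxerrorpath=... -->
<configuration>
  <system.web>
    <customErrors mode="On" redirectMode="ResponseRewrite">
      <!-- ~/404.aspx is a placeholder. ResponseRewrite transfers to a
           server-side page rather than a static .html file, so the
           page itself can set the status code. -->
      <error statusCode="404" redirect="~/404.aspx" />
    </customErrors>
  </system.web>
</configuration>

Note that with ResponseRewrite the response defaults to a 200 status, so the error page's code-behind still needs to set Response.StatusCode = 404 (and, on IIS 7+, possibly Response.TrySkipIisCustomErrors = true) for the correct header to actually be returned.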
I hope this makes sense to you. If not, feel free to ask for clarification.
-
404s are OK on your site; just make sure you send the proper 404 header response for the 404 page. Google does NOT index 404 pages (as long as the page sends the 404 header response), so you don't need to block them via robots.txt or meta robots.
In fact, GWT warns you about these if it is able to crawl so-called 404 pages that don't send a 404 header response, so I think it's a good idea NOT to noindex them; that way you will get the warning if something is wrong.
Google will only index your 404 page if you don't send that header; they call it a soft 404: https://support.google.com/webmasters/answer/181708?hl=en
Worth reading: http://moz.com/learn/seo/http-status-codes
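You can verify the header response yourself from the command line. A quick check with curl (swap in any URL on your site that should return a 404):

# Prints only the HTTP status code: a correct setup prints 404,
# a soft 404 prints 200. Use curl -I instead to see the full headers.
curl -s -o /dev/null -w "%{http_code}\n" "http://www.yourdomain.com/contact-office.aspx"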
-
Thanks, Martijn,
I actually want to know what you would do for the 404 page itself. It is something like:
www.mainurl.com/404.html, and for some reason this started to create some other URLs such as
www.mainurl.com/404.html?aspxerrorpath=/contact-office.aspx
Do you think I should add the 404 page and its variations to robots.txt?
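In other words, should I add something like this (the path is just our /404.html page)?

User-agent: *
# Would block /404.html and, by prefix matching, every
# /404.html?aspxerrorpath=... variation of it
Disallow: /404.html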
Thanks!
-
Hi Sida,
I would add a noindex to the page, and since you will also return the 404 status code, that is enough of a signal to tell Google not to index the page itself.
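For reference, that is just this tag in the <head> of the 404 template (if you can't edit the markup, an X-Robots-Tag: noindex HTTP response header is an equivalent alternative):

<!-- In the <head> of /404.html -->
<meta name="robots" content="noindex">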
Hope this answers your question.