Do I need to redirect soft 404s that I got from Google Webmaster Tools?
-
Hi guys,
I got over 1,000 soft 404s from GWT. All of the soft-404 URLs return a 200 HTTP status code, but the URLs look something like the following:
http://www.example.com/search/house-for-rent
(query used: house for rent)
http://www.example.com/search/-----------rent
(query used: -------rent)
There are no listings that match these queries, though an advanced search form is visible on these pages.
Here are my questions:
1. Do I need to redirect each page to its appropriate landing page?
2. Do I need to add a user sitemap or a list of URLs where users can search for other properties?
Any suggestions would help.
-
Thanks, guys, for your input. By the way, this issue was resolved last year. Thanks again!
-
It depends what you want to achieve. If the 404s are pages which no longer exist, then the fastest route is to use the GWMT removal tool to remove the page pattern and to add a noindex (via a meta robots tag or an X-Robots-Tag header - note that a robots.txt disallow alone will not remove URLs that are already indexed), in addition to obviously returning a 404.
A soft 404 is a case where content is not found but an HTTP status of 200 is returned - this needs to change if you currently serve non-existing pages.
We generally do the following:
- Content which we know no longer exists (e.g. a deleted product page or a deleted product category) is served with SC_GONE (410), and we provide cross-selling information (e.g. we display products from related categories). This works great, and we have seen a boost in indexed content.
- URLs which don't exist go through a standard 404 - this is intentional, as our monitoring will pick it up. If it is a legitimate 404 but the URL has SEO value, we will do a redirect if it makes sense, or just let Google drop it over time (this sometimes takes up to 4 weeks).
You can have multiple versions of 404 pages, but this would need to be coded out - i.e. in your application server you would define a 404 page which then programmatically displays content depending on what you want to do.
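For illustration, here is a minimal sketch of that setup, assuming a javax.servlet stack (which the SC_GONE constant suggests); ProductCatalog and the JSP paths are hypothetical stand-ins:

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class ProductPageServlet extends HttpServlet {

    // Hypothetical data-access object for live and deleted products.
    private final ProductCatalog catalog = new ProductCatalog();

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        String productId = req.getPathInfo(); // e.g. "/red-shoes-123"

        if (catalog.isLive(productId)) {
            // Normal product page, served with HTTP 200.
            req.getRequestDispatcher("/WEB-INF/views/product.jsp").forward(req, resp);
        } else if (catalog.wasDeleted(productId)) {
            // Deliberately removed content: 410 (GONE) plus cross-selling
            // suggestions from related categories, as described above.
            resp.setStatus(HttpServletResponse.SC_GONE);
            req.setAttribute("related", catalog.relatedProducts(productId));
            req.getRequestDispatcher("/WEB-INF/views/product-gone.jsp").forward(req, resp);
        } else {
            // Unknown URL: a plain 404 so monitoring picks it up; the container
            // then serves whatever error page is mapped in web.xml.
            resp.sendError(HttpServletResponse.SC_NOT_FOUND);
        }
    }
}

With this split, the web.xml error-page mapping owns the generic 404, while the application itself decides when content is deliberately gone.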
-
I know I am way late to the party, but MagicDude4Eva, have you had success just putting a noindex header on the soft 404 pages?
That sounds like the easiest way to deal with this problem, if it works, especially since a lot of sites use dynamic URLs for product search that you don't want to de-index.
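For reference, a noindex header means the X-Robots-Tag HTTP response header, which Google treats like a meta robots tag. A minimal sketch of sending it for a whole search path from a servlet filter (the class and the /search/ prefix are illustrative, assuming the same javax.servlet stack as the sketch above):

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class SearchNoindexFilter implements Filter {

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;

        // Noindex every page under /search/ - unlike a robots.txt disallow,
        // this removes URLs that are already indexed. Note that Google must
        // be allowed to crawl the path to see this header at all.
        if (request.getRequestURI().startsWith("/search/")) {
            response.setHeader("X-Robots-Tag", "noindex, follow");
        }
        chain.doFilter(req, res);
    }

    @Override
    public void init(FilterConfig config) { }

    @Override
    public void destroy() { }
}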
Can you have multiple 404 pages? Otherwise redirecting an empty search-results page to your 404 page could be quite confusing.
-
Hi mate,
I already added the following syntax to my website's robots.txt:
User-agent: *
Disallow: /search/
I have checked the dynamic pages or URLs produced by the search box (e.g. http://www.domain.com/search/jhjehfjehfefe), but they are still showing in Google and there are still 1,000+ soft 404s in my Google Webmaster Tools account.
I appreciate your help.
Thanks man!
-
I think if it is done carefully it adds quite a lot of value. A proper site taxonomy is obviously always better and more predictable.
-
I would never index or let Google crawl search pages - very dangerous ground.
-
I would do the following:
- For valid searches, create a proper canonical URL (and then decide whether you want index,follow or noindex,follow on the result pages). You might not necessarily want to index search results, but rather a structure of items/pages on your site.
- I would generally not index search results (rather have your pages crawled through category structures, sitemaps and RSS feeds).
- It does sound, though, as if the way you have implemented the search is wrong - it should not result in a soft 404. The fix could be as easy as making the canonical for your search just "/search" (without any search terms) and, if no results are found, displaying search-refinement options to the user, as sketched below.
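A rough sketch of that canonical-plus-refinements idea, under the same hypothetical javax.servlet setup as the earlier sketch (SearchService and the view paths are made up):

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class SearchServlet extends HttpServlet {

    // Hypothetical search backend.
    private final SearchService search = new SearchService();

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        String terms = req.getPathInfo(); // e.g. "/house-for-rent"

        // Point every search URL at the bare /search page; Google also
        // accepts rel="canonical" as an HTTP Link header.
        resp.setHeader("Link", "<http://www.example.com/search>; rel=\"canonical\"");

        if (search.hasResults(terms)) {
            req.setAttribute("results", search.find(terms));
            req.getRequestDispatcher("/WEB-INF/views/results.jsp").forward(req, resp);
        } else {
            // No listings: stay on HTTP 200 for the visitor, show search
            // refinement options, and keep the thin page out of the index.
            resp.setHeader("X-Robots-Tag", "noindex, follow");
            req.getRequestDispatcher("/WEB-INF/views/refine-search.jsp").forward(req, resp);
        }
    }
}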
The only time I have seen soft 404s with us is in cases where we removed product pages and then displayed a generic "product not available" page with some upselling options. In this case we set a status of 410 (GONE), which resolved the soft-404 issue.
The advantage of the 410 is that your application makes the decision that a page is gone, whereas a 404 could really just be a wrongly linked URL.
-
Yes - customize your 404 page, and whenever your database has no search results for a user's query you can redirect them to that page.
Have you considered blocking the "search" results directory in robots.txt? Those pages are dynamic rather than actual physical pages, so it is better to block them.
-
What do you mean by a default page? Is it a customized 404 page?
Thanks a lot, man! I appreciate it.
-
Hi,
As per your URL, I think the best solution is to block the "search" directory in robots.txt; then Google will not be able to access those pages, so the errors will stop appearing in GWT. Alternatively, you can create a default page for queries which don't return any results from the database.