Do I need to redirect soft 404s that I got from Google Webmaster Tools?
-
Hi guys,
I got over 1,000 soft 404s from GWT. All of them return a 200 HTTP status code, but the URLs look something like the following:
http://www.example.com/search/house-for-rent
(query used: house for rent)
http://www.example.com/search/-----------rent
(query used:-------rent)
There are no listings that match these queries, though an advanced search form is visible on these pages.
Here are my questions:
1. Do I need to redirect each page to its appropriate landing page?
2. Do I need to add a user sitemap or a list of URLs where they can search for other properties?
Any suggestions would help.
-
Thanks guys for your input. By the way, this issue was resolved last year. Thanks again!
-
It depends on what you want to achieve. If the 404s are pages which no longer exist, then the fastest fix is to use the GWMT removal tool to remove the page pattern and also add a noindex (via a meta robots tag or an X-Robots-Tag header; robots.txt is for blocking crawling, not for noindexing). In addition, obviously, return a 404.
A soft 404 is a case where content is not found but HTTP status 200 is returned - this needs to change if you currently serve non-existent pages with a 200.
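To make the definition concrete, the distinction can be expressed as a small predicate over the response (a minimal sketch; the function name and categories are illustrative, not anyone's actual detection logic):

```python
def classify_response(status_code: int, has_results: bool) -> str:
    """Roughly how a soft 404 differs from other outcomes:
    a 200 with no real content is the problem case."""
    if status_code == 200 and not has_results:
        return "soft 404"      # looks fine to the server, empty for the user
    if status_code == 200:
        return "ok"
    if status_code in (404, 410):
        return "hard 404/410"  # correctly signals missing content
    return "other"

# An empty search-results page served with 200 is exactly the reported problem:
print(classify_response(200, False))  # soft 404
```

The fix is to move responses out of the first branch: either return a real 404/410, or keep the 200 but make the page genuinely useful and keep it out of the index.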
We generally do the following:
- Content which we know no longer exists (e.g. a deleted product page or a deleted product category) is served with SC_GONE (410), and we provide cross-selling information (e.g. displaying products from related categories). This works great, and we have seen a boost in indexed content.
- URLs which don't exist go through a standard 404 - this is intentional, as our monitoring will pick it up. If it is a legitimate 404 but the URL has SEO value, we will redirect if it makes sense, or just let Google drop it over time (this sometimes takes up to four weeks).
You can have multiple versions of 404 pages, but this would need to be coded out - i.e. in your application server you would define a 404 page which then programmatically displays content depending on what you want to do.
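The "multiple 404 pages" idea can be sketched in a few lines (stdlib-only Python; the paths and messages are hypothetical): one handler inspects the requested path and picks the body to render, while the status code stays a real 404.

```python
# Map URL prefixes to the 404 body we want to show (illustrative content).
NOT_FOUND_TEMPLATES = {
    "/search/":  "No listings matched. Try the advanced search.",
    "/product/": "This product is unavailable. Browse related categories.",
}

def render_404(path: str) -> tuple[int, str]:
    """Return (status, body): always a hard 404, but with content
    tailored to the section of the site the URL points into."""
    for prefix, body in NOT_FOUND_TEMPLATES.items():
        if path.startswith(prefix):
            return 404, body
    return 404, "Page not found."

print(render_404("/search/house-for-rent"))
# -> (404, 'No listings matched. Try the advanced search.')
```

The key point is that the helpful, section-specific content does not require a 200: the user sees a useful page while crawlers still get an unambiguous 404.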
-
I know I am way late to the party, but MagicDude4Eva, have you had success just putting a noindex header on the soft 404 pages?
That sounds like the easiest way to deal with this problem, if it works, especially since a lot of sites use dynamic URLs for product search that you don't want to de-index.
Can you have multiple 404 pages? Otherwise, redirecting an empty search-results page to your 404 page could be quite confusing.
-
Hi mate,
I already added the following syntax to my website's robots.txt:
User-agent: *
Disallow: /search/
I have checked the dynamic pages and URLs produced by the search box (e.g. http://www.domain.com/search/jhjehfjehfefe), but they are still showing in Google, and there are still 1,000+ soft 404s in my Google Webmaster Tools account.
I appreciate your help.
Thanks man!
-
I think if it is done carefully it adds quite a lot of value. A proper site taxonomy is obviously always better and more predictable.
-
I would never index or let Google crawl search pages - very dangerous ground.
-
I would do the following:
- For valid searches, create a proper canonical URL (and then decide if you want index,follow or noindex,follow on the result pages). You might not necessarily want to index search results, but rather a structure of items/pages on your site.
- I would generally not index search results (rather have your pages being crawled through category structures, sitemaps and RSS feeds)
- It does sound, though, like the way you implemented the search is wrong - it should not result in a soft 404. It could be as easy as making the canonical for your search just "/search" (without any search terms) and, if no results are found, displaying search-refinement options to the user.
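One way to implement the advice above is to build the SEO-relevant response headers for a search page: always point the canonical at the bare /search URL, and add noindex only when the result set is empty. This is a sketch of one possible policy, not the only valid one; the domain and function name are made up for illustration.

```python
def search_response_headers(query: str, result_count: int) -> dict:
    """Headers for a search-results page: canonical via the HTTP Link
    header, plus X-Robots-Tag noindex when there is nothing to show."""
    headers = {
        # Canonical always points at the bare search page, not the query URL.
        "Link": '<https://www.example.com/search>; rel="canonical"',
    }
    if result_count == 0:
        # The page still returns 200 with refinement options for the user,
        # but is kept out of the index, avoiding the soft-404 report.
        headers["X-Robots-Tag"] = "noindex, follow"
    return headers

print(search_response_headers("house-for-rent", 0))
```

The same tags could equally be emitted as `<link rel="canonical">` and `<meta name="robots">` elements in the page head; the header form is shown here only because it is easy to sketch.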
The only time I have seen soft 404s with us is in cases where we removed product pages and then displayed a generic "product not available" page with some upselling options. In this case we set a status of 410 (GONE) which resolved the soft 404 issue.
The advantage of the 410 is that your application makes the decision that a page is gone, whereas a 404 could really just be a mistyped or wrongly linked URL.
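That 404-versus-410 decision is easy to express in code (a sketch with hypothetical paths; the "deleted pages" set stands in for whatever record your application keeps of removed content):

```python
# URLs the application knows it deliberately removed (illustrative).
DELETED_PAGES = {"/product/old-widget", "/category/discontinued"}

def status_for(path: str, exists: bool) -> int:
    """410 when the application *knows* the page was removed,
    404 for URLs it has never seen (typos, stale inbound links)."""
    if exists:
        return 200
    if path in DELETED_PAGES:
        return 410  # Gone: deliberate removal, signals "stop asking"
    return 404      # Not Found: possibly just a bad link

print(status_for("/product/old-widget", False))  # 410
```

A 410 response can still render the cross-selling content described above; the status code and the page body are independent choices.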
-
Yes, customize a 404 page. Whenever your database has no search results for a user's query, you can send them to that page.
Have you considered blocking the "search" results directory in robots.txt? Those pages are dynamic - they are not actually physical pages - so it's better to block them.
-
What do you mean by default page? Is it a customized 404 page?
Thanks a lot man! I appreciate it.
-
Hi,
As per your URL, I think the best solution is to block the "search" directory in robots.txt; then Google will not be able to access those pages, so no errors will show in GWT. Or you can create a default page for queries which don't have any results in the database.