Usage of HTTP Status Code 303
-
Hello,
is there anybody who has experience with the 303 HTTP status code?
Our software developers would like to use 303 "See Other" instead of 301 to redirect old product links to the site root, rather than serving 404 errors.
What is the best practice for redirecting old product links that are gone, in an online-shop context?
Best regards
Steffen
-
I would recommend using a 301 redirect to the home page, as this will pass link juice. If the old URLs can be redirected to their specific product categories instead, even better.
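That category-first, home-page-fallback approach can be sketched as a simple lookup. A minimal illustration in Python — the paths and the mapping are hypothetical, not from this thread:

```python
from urllib.parse import urlparse

# Hypothetical mapping of retired product paths to their closest category pages
REDIRECT_MAP = {
    "/products/old-widget": "/categories/widgets",
    "/products/legacy-gadget": "/categories/gadgets",
}

def resolve_redirect(path):
    """Return (status, location) for a retired product URL.

    301 signals a permanent move, which is the status search engines
    treat as a cue to transfer the old page's link equity to the target.
    """
    target = REDIRECT_MAP.get(urlparse(path).path, "/")  # fall back to the home page
    return 301, target
```

In a real shop this map would live in the web server or application config rather than a Python dict, but the decision logic is the same.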
An alternative would be to keep serving the old page so it returns a 200, or to 301 it to a product-suggestion page. A "products like this" suggestion page, or a product-search page, would likely convert better than a blanket 301 redirect to the home page.
Another thing you could do is create an intelligent "catch" page that uses the search parameter (if there is one) or the title of the referring page as a query against your products database, then serves up some relevant products.
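A rough sketch of that "catch" page logic: pull a query parameter from the referring URL if one exists, otherwise fall back to the slug of the requested URL, and use the result as the product-search term. This is an illustration only — the `q` parameter name and the URL shapes are assumptions:

```python
from urllib.parse import urlparse, parse_qs

def extract_search_term(request_path, referrer=None):
    """Derive a product-search term for an intelligent catch page."""
    if referrer:
        query = parse_qs(urlparse(referrer).query)
        # Site search and many engines use a "q" parameter (an assumption here)
        if "q" in query:
            return query["q"][0]
    # Fall back to the requested slug, e.g. /products/blue-widget-2000
    slug = urlparse(request_path).path.rstrip("/").rsplit("/", 1)[-1]
    return slug.replace("-", " ")
```

The term would then be fed to the shop's product search to populate the page with relevant alternatives.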
-
A 303 probably will not pass your link juice, if you have any. This is the difference: a 301 passes on roughly 90% of the link juice that inbound links give your pages.
For users it is good to redirect them to something else. The fact that a product's life cycle is over does not mean it will not be searched for anymore. Keeping the old pages, at least in the sitemap, will not bloat your indexed page count at all. I would do that; however, technically, if there are no inbound links pointing to the pages you want to 303 redirect, the redirect will not hurt your SEO.
-
Hi,
our old content is definitely gone. We have a lot of volatile content with a lifetime of 6-12 months, sometimes shorter. I believe keeping old URLs would blow up our number of indexed pages.
But my general question was about the 303 code. Do you have any experience with the difference between 301 and 303?
BR
-
Hello,
Are your products gone forever for sure? If you use a 301 or 303, visitors clicking your pages in the SERPs will see new content instead of a 404 error page, that is for sure, so it has its user-side benefits. However, if you are ranking for these products in Google and those terms are bringing traffic to your site, I would think twice before deleting the pages. If you delete the actual content you are ranking with, users and the engines will see totally new content, so if you lose your product-specific pages you will also lose your rankings sooner or later.
I would leave those pages but do a little reorganization on the landing page. I would push the current content down a bit and place a one- or two-line convincing text explaining why you have stopped selling those products (and why users should not search for them any longer), then offer a better alternative for the product type they are searching for. For example: "We have stopped selling lithium batteries because the new XY technology lasts twice as long and charges in half the time. You can see these astonishing products here."