Question on 301s
-
Hi Everyone,
I have a question about 301 redirects, and I hope someone can give me some help with this.
Some 301 redirects were set up on a number of URLs at the beginning of the year. However, we are now restructuring the whole website, which means the URLs that already have a 301 redirect are now getting another 301.
The question is: should I delete the first 301 redirect from the .htaccess file?
Kind Regards
-
Ryan, your analogy is fantastic. I totally understand this now, and it really makes sense to do it this way.
Thanks for being patient with me.
Again, thanks to everyone for your feedback on this.
Kind Regards
-
Every URL which is no longer active would require a 301 redirect to the proper page. In the situation you describe:
/a should redirect to /abc
/ab should redirect to /abc
I recognize this seems confusing, so forget it's a website for a moment. Think of it as mail after you move.
You lived at 100 Main Street. That is where you received your mail. Now you move to 200 Elm Street. You put in a forwarding order with the post office (the real-world equivalent of a 301 redirect). Now any mail addressed to 100 Main Street will be received at 200 Elm Street.
Now you move again, to 300 Wall Street. You would put in another forwarding order so your mail from 200 Elm Street gets delivered to your new address. This works, but your mail from 100 Main Street would be delayed: first it would be forwarded to the 200 Elm Street post office, which would then have to forward it on to 300 Wall Street. This process is inefficient (in SEO terms, you lose link juice).
Instead, you want to change your 100 Main Street forwarding order to direct your mail to the 300 Wall Street address. Now all of your mail arrives at the proper location in a single hop.
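In .htaccess terms, a minimal sketch of that single-hop setup might look like this (the paths are placeholders from the example above, and this assumes Apache's mod_alias; RedirectMatch with anchored patterns is used because a plain Redirect matches by prefix, which would tangle paths that nest inside each other like these):

# Sketch only: both old addresses forward straight to the final
# destination in a single hop. The anchors (^...$) stop /a from
# also matching /ab and /abc.
RedirectMatch 301 ^/a$ http://www.example.com/abc
RedirectMatch 301 ^/ab$ http://www.example.com/abc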
I hope this analogy helps!
-
What happens to the URL?
If there are external backlinks pointing to the URL, aren't these going to get lost?
Because, as we have mentioned regarding these 301s, there are three URLs in question.
Hope that makes sense.
-
In the simplest terms, the old page should always be redirected directly to the new page. Think of it as a non-stop flight.
-
Hi Ryan,
Thanks for your feedback; however, I am getting a little lost.
So, if I understand correctly, what you are saying is that the 301 should be this:
example.com/a is redirected to example.com/abc
Kind Regards
-
The only thing that concerns me is what CafePress had said "Google stops crawling a link after the 5th redirect or so."
You could put 100 links on a page, all pointing to "seomoz.org", and they will all be crawled even though the real URL is "www.seomoz.org"; all 100 links will simply get redirected.
What CafePress referred to is redirects for a single URL.
www.example.com/a redirects to /ab, which redirects to /abc, and so forth. A crawler will only follow a single URL so far through a chain of redirects before the PageRank is completely gone and it stops.
Therefore the preferred solution is to redirect any old or broken URLs to their new URL in a single redirect. I'll share an example based on your site:
Very old URL: example.com/a. It is redirected to example.com/ab
Old URL: example.com/ab. It is redirected to example.com/abc
You could leave these two redirects in place as-is and they will work, but it is not recommended. The reason is that any traffic to /a will go through a double redirect: first to /ab, then on to the final destination of /abc. This double redirect adds an unnecessary delay, introduces extra points of failure, and wastes SEO link juice. The preferred solution is to modify the /a redirect so it points to /abc directly.
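In .htaccess, that change might look something like this (a sketch using the placeholder paths above, again assuming mod_alias, with anchored patterns so the overlapping paths don't match one another):

# Before (a double hop for any traffic to /a):
# RedirectMatch 301 ^/a$ http://www.example.com/ab
# RedirectMatch 301 ^/ab$ http://www.example.com/abc
# After (each old URL reaches the final page in one hop):
RedirectMatch 301 ^/a$ http://www.example.com/abc
RedirectMatch 301 ^/ab$ http://www.example.com/abc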
I hope that makes sense.
-
Also, if a page is indexed, which is highly likely (due to XML sitemaps, Google Analytics, the Google Toolbar, etc.), then simply removing the 301 redirect (links or no links) means that when the page disappears in the site changes, you will be left with an indexed page returning a 404 error.
I maintain that you should have single-hop 301 redirects on all of the pages that will be removed or moved in the site update.
I also agree with what Ryan Kent says about links: you may have some links that have been discovered but not yet recognized or picked up. If there is a chance that the content has been indexed, then it should have an appropriate redirect.
-
Hi Ryan,
The only thing that concerns me is what CafePress had said "Google stops crawling a link after the 5th redirect or so."
I have another issue regarding the 301 redirects:
We have:
/abcd 301 redirected to http://www.example.com/abcde - this is actually on a product page. However, we have the same product on a shop page, /shop/abcd, and we have decided to do away with the shop directory. Is it best practice to also do a 301 from /shop/abcd to /abcde?
Hope that makes sense.
Kind Regards
-
I don't agree with the recommendation to simply delete the 301 due to no visible links. There are two reasons why:
1. It is more work for you to go and research the links to each page.
2. There can always be links you are not aware of, such as bookmarks, e-mail links, links which don't show up for various reasons, etc.
Simply modify the 301 to point to the correct URL and you are all set.
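Using the URLs from your .htaccess example, a sketch of the modified rules might be (placeholder paths; the patterns are anchored because /abc, /abcd, and /abcde are prefixes of one another):

# Both old URLs now point straight at the live page in one hop.
RedirectMatch 301 ^/abc$ http://www.example.com/abcde
RedirectMatch 301 ^/abcd$ http://www.example.com/abcde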
-
Thanks for the fantastic feedback.
An example of what has happened in the .htaccess:
/abc http://www.example.com/abcd - This is the 301 that was made in March this year.
/abcd http://www.example.com/abcde - This is the new 301
If I see in Open Site Explorer that there are no links going to /abc, should I just delete this 301?
Kind Regards
-
I would change the original 301 redirect to point to the new location.
I would then add an additional 301 redirect from the secondary page (the old redirect target) to the new location.
So you will have both your original URL and the older redirected URL 301 redirected to where the content now resides. This way you only have one hop on the 301 redirects, and both old URLs point to the new one.
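If your .htaccess uses mod_rewrite rather than mod_alias, a sketch of that same mapping might look like this (hypothetical paths carried over from the earlier example; in per-directory .htaccess rules the leading slash is dropped from the pattern):

RewriteEngine On
# The original URL and the intermediate URL both go to the final page in one hop.
RewriteRule ^abc$ http://www.example.com/abcde [R=301,L]
RewriteRule ^abcd$ http://www.example.com/abcde [R=301,L]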
-
Should I delete the first 301 redirect from the .htaccess file?
The best results would be achieved if each URL had a single 301 redirect to the target page. To that end, yes, you should delete the old 301 redirect and create a new one.
-
+1
Totally forgot about mentioning the inbound links part. Thanks for picking it up, Rick!
-
Hey Gary,
I partially agree with Cafe. However, I wouldn't remove any redirects for URLs that may have backlinks. It might be a good idea to figure out whether any of the redirects you are removing are for URLs that have earned links. An Open Site Explorer link export would help you figure out whether any of those URLs still have value.
-
Hi Gary,
Yes, it is always a good idea to cut down the number of 301 redirects (or redirects in general) because, if I remember correctly, Google stops crawling a link after the 5th redirect or so. You also lose another 10% of link juice for each additional redirect.
Lastly, don't forget to 301 redirect the URLs from the beginning of the year to the new, restructured website.
Hope that helps!