New "Static" Site with 302s
-
Hey all,
Came across a bit of an interesting challenge recently, one that I was hoping some of you might have had experience with!
We're currently in the middle of a website rebuild, which I'm really excited about. The new site uses Markdown to generate an entirely static site. Load times are fantastic and the code is clean. Life is good, apart from the 302s.
One of the weird quirks I've run into with old-school, non-server-generated content is that every page of the site is an index.html file inside its own directory. As a result, www.website.com/page-title will 302 to www.website.com/page-title/.
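You can see it with a quick header check (placeholder URL, and the exact reason phrase may vary):

```sh
curl -I http://www.website.com/page-title
# Expect something like:
#   HTTP/1.1 302 Found
#   Location: http://www.website.com/page-title/
```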
My solution so far has been to be super diligent: stay on top of the link profile and send lots of helpful emails reminding staff how to build links. But I know that even the best-laid plans often fail.
Has anyone had a similar challenge with a static site and found a way to overcome it?
-
Wow, I wasn't expecting such a detailed and awesome answer, Danny. Thanks so much! I'm in the process of migrating away from S3 anyway (for other reasons), though you're right that I'm going to miss the cost and load times.
I'm using Middleman for now, though the technical part of my brain is indeed interested in how you're going to accomplish the Jekyll solution. I'll look out for your post!
And thanks for the tip on my site. Another thing to add to the list!
Arun
-
Hey Arun,
Thanks for posting! I was beginning to think I was the only inbound guy anywhere who had to deal with this kind of issue.
Yup, I created the same redirect-loop bug trying to get around the slash issue. The problem is that S3 doesn't consider the slash part of the match unless something comes after it, so a rule for a directory also matches the slash version it redirects to.
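To make that concrete, here's a sketch of the kind of routing rule that loops (hypothetical /blog prefix): because the condition is a prefix match, it catches /blog and the redirected /blog/ alike, so it just keeps firing:

```xml
<RoutingRules>
  <RoutingRule>
    <Condition>
      <!-- Prefix match: "blog" also matches "blog/", "blog/index.html", ... -->
      <KeyPrefixEquals>blog</KeyPrefixEquals>
    </Condition>
    <Redirect>
      <!-- So the redirect target matches the condition again: loop. -->
      <ReplaceKeyPrefixWith>blog/</ReplaceKeyPrefixWith>
      <HttpRedirectCode>301</HttpRedirectCode>
    </Redirect>
  </RoutingRule>
</RoutingRules>
```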
Ultimately, my number one suggestion would be to go with a different service that lets you run a server like Nginx or Apache. Others have agreed that redirects set up through the server itself are the approach they trust most to pass link equity.
If you're dead-set on S3, which I would understand since the load times are crazy-awesome-insane, I may have a solution for you soon. Our dev team is working on a script for Jekyll + S3 sites that will essentially create extensionless files (e.g. example.com/contact) containing a meta refresh plus a rel canonical.
The script will consume a list of desired redirects and rules, structured the same way an .htaccess file would be. I can't speak to how it will get past S3's default 302ing yet, but I know it will use cURL. Look for a YouMoz post from me soon!
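To give you a rough idea, here's a minimal sketch of the kind of generator I mean (hypothetical file names and rules format; our real script isn't finished yet):

```python
#!/usr/bin/env python
"""Sketch: generate extensionless redirect stubs for a Jekyll + S3 site.

Each line of the (hypothetical) rules file maps an old path to a new URL,
htaccess-style:   /contact   http://example.com/contact/
"""
import os

STUB_TEMPLATE = """<!DOCTYPE html>
<html>
  <head>
    <link rel="canonical" href="{url}">
    <meta http-equiv="refresh" content="0; url={url}">
  </head>
  <body>
    <p>This page has moved to <a href="{url}">{url}</a>.</p>
  </body>
</html>
"""

def generate_stubs(rules_path, output_dir):
    """Write one extensionless HTML stub per redirect rule."""
    with open(rules_path) as rules:
        for line in rules:
            line = line.strip()
            if not line or line.startswith("#"):
                continue  # skip blanks and comments
            old_path, new_url = line.split()
            # "/contact" becomes the extensionless file "contact"
            stub_path = os.path.join(output_dir, old_path.lstrip("/"))
            os.makedirs(os.path.dirname(stub_path) or ".", exist_ok=True)
            with open(stub_path, "w") as stub:
                stub.write(STUB_TEMPLATE.format(url=new_url))

if __name__ == "__main__":
    generate_stubs("redirects.txt", "_site")
```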
Anyway, I hope my notes here help! I'll try to get that post up soon after the script is done. One last note: looking at your site, I noticed that a lot of the internal links on your homepage are missing the trailing slash. I would definitely start there, add those slashes, and then perform a "submit page + linked page" in Webmaster Tools afterward.
-
Hi Danny-
I've got the exact same issue (a static site on S3 redirecting with 302s), and surprisingly can't find much information out there. If I do an S3 metadata-based redirect from (for example) /blog to /blog/, I just end up in a redirect loop.
I checked out your site and it still looks like you're working on it. Did you end up figuring anything out? If there's any way that I can help get to a solution I'd be happy to spend some time on it.
Thanks!
Arun
-
Thanks for the reply David!
Yup, I think this has just been a case of wrapping my head around a new way of doing things (i.e. redirects in the AWS bucket config rather than in .htaccess). Static sites are a crazy combination of complicated and simple!
Thanks! We're using Jekyll somewhat, although we've had issues with image hosting. I've actually had better results using the local GitHub client plus "Mou", a local Markdown editor.
-
Nice! (for speed at least)
I would show your team some examples of external URLs pointing at the non-trailing-slash versions of your pages and explain the downside of the 302 redirect. Also consider that people and bots visiting those URLs add overhead to your server, and on Amazon that means increased cost (small as it may be, the pennies add up!).
Reading the link you provided, it looks like the default behaviour of the page-metadata redirect in the S3 console is a 301. That makes me think the 302 is coming from somewhere else. Take a look at the following URL:
http://docs.aws.amazon.com/AmazonS3/latest/dev/HowDoIWebsiteConfiguration.html
It looks like you can add advanced redirects under "Enable website hosting -> Edit redirection rules". I'd check whether any redirects are listed there and maybe chat with your developers further.
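For reference, the rules in that panel are XML along these lines (the folder names here are just the example from the AWS docs, not your site):

```xml
<RoutingRules>
  <RoutingRule>
    <Condition>
      <KeyPrefixEquals>docs/</KeyPrefixEquals>
    </Condition>
    <Redirect>
      <ReplaceKeyPrefixWith>documents/</ReplaceKeyPrefixWith>
      <!-- Explicit redirect code, so there's no ambiguity about 301 vs 302 -->
      <HttpRedirectCode>301</HttpRedirectCode>
    </Redirect>
  </RoutingRule>
</RoutingRules>
```

If rules like this exist for your directories, that's a good place to start hunting for the 302.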
While you're at it, I spotted two other issues to consider. Currently, the index.html files in your directories resolve to the same page as the parent directory. I would 301 those index.html URLs back to the parent directory (trailing-slash version), or you could add canonical URLs pointing back to the parent directory. I'd actually make a case for adding canonical URLs to all pages.
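The canonical tag itself is one line in the head of the page, e.g. for your resources directory:

```html
<link rel="canonical" href="http://www.strutta.com/resources/">
```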
Also, you currently have a number of redirect chains, e.g.:
http://www.strutta.com/resources/posts/share-your-contests-and-sweepstakes-all-over-social-media 301 redirects to http://www.strutta.com/resources which 302 redirects to http://www.strutta.com/resources/.
You need to find the original redirect and change it to a 301 pointing straight at the trailing-slash version of the directory. Screaming Frog can help you find these redirect chains.
-
Hi Danny!
I don't have much to add here; I think the guys have it right that you'll need to figure out how to make the 301 work. I quickly read that documentation, then realized I wasn't a robot, so I found this instead: http://aws.typepad.com/aws/2012/10/amazon-s3-support-for-website-redirects.html, which was a bit more friendly.
I wish I could help you out more, but I'm not using AWS. I'm assuming you'll be able to use wildcard or regex matching somewhere, and that should solve your problem.
Great site by the way, anything you're using to help out with the static blog? (Jekyll, Octopress?)
-
Follow-up answer:
Our new website (Strutta.com) is entirely static, hosted on S3. No Apache, just straight HTML files. No Apache means no .htaccess.
Instead of .htaccess, we have to use the S3 console: http://docs.aws.amazon.com/AmazonS3/latest/dev/how-to-page-redirect.html
As far as I can tell, this sets up redirects the same way. It doesn't answer my initial question, but I'm going to try the console later today to see whether 301ing the directories there to include the trailing slash gets picked up before whatever is causing the current 302.
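For anyone following along, that console method boils down to setting a redirect-location metadata value on an extensionless object. Scripted, it would look something like this (a sketch with a modern boto3 client and my hypothetical page-title key; the console UI does the same thing by hand):

```python
import boto3

s3 = boto3.client("s3")

# A zero-byte placeholder object whose only job is to redirect:
# the S3 website endpoint serves it as a redirect to the slash version.
s3.put_object(
    Bucket="www.website.com",
    Key="page-title",                        # the extensionless URL
    WebsiteRedirectLocation="/page-title/",  # where visitors get sent
)
```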
-
Thanks all,
I think the problem comes from the fact that we're hosted on Amazon Web Services, and the devs are using the "aws bucket config" settings to set up redirects instead of .htaccess. SEO vs. dev battle time.
-
Hey Danny,
As Maximilian suggested above, the best solution is going to be changing those 302s to 301s. I generally like to redirect to trailing-slash URLs for directories and non-trailing-slash URLs for files/pages (that's the standard convention). In practice, hardly anyone who links organically includes a trailing slash, but for the homepage I don't worry about it too much; browsers and Google can figure that out.
Basically, you need to figure out where the 302 is coming from; hopefully it's in your .htaccess file. If you can edit .htaccess, change that redirect to a 301, or remove it and use a canonical URL pointing at the trailing-slash version of the page. I'd prefer the 301, though. Just look at how these redirects are implemented and in what order: you don't want to end up with redirect chains either.
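If it does turn out to be Apache, the trailing-slash 301 is only a couple of lines of mod_rewrite (a sketch, assuming the rules live in the document root's .htaccess):

```apache
RewriteEngine On
# If the request maps to a real directory but lacks the trailing slash,
# 301 it to the slash version.
RewriteCond %{REQUEST_FILENAME} -d
RewriteRule ^(.+[^/])$ /$1/ [R=301,L]
```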
Can you get access to your .htaccess file or is the server running something funky?
-
Perhaps this is too obvious, but can you not change the 302s to 301s?