New "Static" Site with 302s
-
Hey all,
Came across a bit of an interesting challenge recently, one that I was hoping some of you might have had experience with!
We're currently in the process of a website rebuild, which I'm really excited about. The new site uses Markdown to create an entirely static site. Load times are fantastic, and the code is clean. Life is good, apart from the 302s.
One of the weird quirks I've run into with old-school, non-server-generated page content is that every page of the site is an index.html file in its own directory. As a result, www.website.com/page-title will 302 to www.website.com/page-title/.
My solution off the bat has been to be super diligent: stay on top of the link profile and send lots of helpful emails to the staff reminding them how to build links. But I know that even the best-laid plans often fail.
Has anyone had a similar challenge with a static site and found a way to overcome it?
-
Wow. I wasn't expecting such a detailed and awesome answer, Danny. Thanks so much. I'm in the process of migrating away from S3 anyway (for other reasons), though you're right that I'm going to miss the cost and load times.
I'm using Middleman for now, though the technical part of my brain is indeed interested in how you're going to accomplish the Jekyll solution. I'll look out for your post!
And thanks for the tip on my site. Another thing to add to the list!
Arun
-
Hey Arun,
Thanks for posting! I was beginning to think that I was the only inbound guy anywhere who had to deal with this kind of issue!
Yup, I created the same bug with redirect loops trying to get around the slash issue. The problem is that S3 doesn't consider the slash as part of the rewrite data unless something comes after it.
Ultimately, my number one suggestion would be to go with a different service that lets you install a server app like Nginx or Apache. Others have agreed that redirects set up through a server app are the approach they trust most to pass link equity.
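Just to show how small this is once you have a real server in front of the files, here's a rough sketch of the rule on Nginx (the domain and paths are made up, and this assumes every page lives at directory/index.html):

```nginx
# Sketch only: 301 any extensionless URL to its trailing-slash form.
# The [^.]* pattern leaves asset URLs with file extensions (.css, .js, .png) alone.
server {
    listen 80;
    server_name www.example.com;
    root /var/www/site;
    index index.html;

    # /page-title -> /page-title/ as a permanent (301) redirect
    rewrite ^([^.]*[^/])$ $1/ permanent;
}
```

Apache can do the same thing with a couple of mod_rewrite lines, too.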
If you're dead-set on S3, which I would understand as the load times are crazy-awesome-insane, I may have a solution for you soon. Our dev team is working on a script for Jekyll + S3 sites that will essentially create extension-less files (i.e. example.com/contact) that contain a meta refresh + rel=canonical.
The script will use a list of desired redirects + rules structured the same way an .htaccess file would be. I can't speak to how it will get past S3's default 302ing yet, but I know that it will use cURL. Look for a YouMoz post from me soon!
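To illustrate the idea (this is just my mock-up, not the script's actual output), each extension-less stub would contain something like:

```html
<!DOCTYPE html>
<html>
<head>
  <!-- Mock-up of a stub served at example.com/contact (no extension) -->
  <!-- The meta refresh forwards visitors to the trailing-slash URL -->
  <meta http-equiv="refresh" content="0; url=http://example.com/contact/">
  <!-- The canonical tag tells search engines which URL should collect the equity -->
  <link rel="canonical" href="http://example.com/contact/">
  <title>Redirecting...</title>
</head>
<body>
  <p>This page has moved to <a href="http://example.com/contact/">example.com/contact/</a>.</p>
</body>
</html>
```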
Anyways, I hope my notes here help! I'm gonna try and make that post soon after the script is created. Just as a last note, in taking a look at your site I noticed that a lot of the internal links on your homepage don't have the trailing slash in them. I would definitely start there and add those slashes, and perform a "submit page + linked page" to Webmaster Tools after!
-
Hi Danny-
I've got the exact same issue (static site on S3 redirecting with 302s), and surprisingly can't find a lot of information out there. If I do an S3 metadata-based redirect from (for example) /blog to /blog/, I just end up in a redirect loop.
I checked out your site and it still looks like you're working on it. Did you end up figuring anything out? If there's any way that I can help get to a solution I'd be happy to spend some time on it.
Thanks!
Arun
-
Thanks for the reply, David!
Yup, I think that this has just been a case of wrapping my head around a new way of doing things (i.e. redirects in the AWS bucket config rather than in .htaccess). Static sites are a crazy combination of complicated and simple!
Thanks! We're using Jekyll somewhat, although we've had issues with the image hosting. I've actually had better results using the local GitHub client + "Mou", a local Markdown editor.
-
Nice! (for speed at least)
I would show your team some examples of external URLs pointing at the non-trailing-slash versions of your pages and explain the downside of the 302 redirect. Also consider that people and bots visiting those URLs add overhead to your server, and on Amazon that means increased cost (small as it may be, the pennies add up!).
Reading the link you provided, it looks like the default behaviour of the page-metadata redirect in the S3 console is to create a 301 redirect. That makes me think the 302 is coming from somewhere else. Take a look at the following URL:
http://docs.aws.amazon.com/AmazonS3/latest/dev/HowDoIWebsiteConfiguration.html
It looks like you can add advanced redirects under "Enable website hosting -> Edit redirection rules". I'd explore whether there are redirects listed there, and maybe chat with your developers further.
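For reference, the rules in that panel are XML along these lines (a sketch based on the linked docs, not tested against your bucket):

```xml
<!-- Sketch of a routing rule that 301s /blog to /blog/. One caveat:
     KeyPrefixEquals is a prefix match, so "blog" also matches "blog/"
     and "blog/index.html", which is how naive rules can end up in the
     redirect loops mentioned elsewhere in this thread. -->
<RoutingRules>
  <RoutingRule>
    <Condition>
      <KeyPrefixEquals>blog</KeyPrefixEquals>
    </Condition>
    <Redirect>
      <ReplaceKeyPrefixWith>blog/</ReplaceKeyPrefixWith>
      <HttpRedirectCode>301</HttpRedirectCode>
    </Redirect>
  </RoutingRule>
</RoutingRules>
```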
While you're at it, I spotted two other issues for you to consider. Currently the index.html files in your directories resolve to the same page as their parent directory. I would 301 those pages back to the parent directory (slash version), or you could add canonical URLs pointing back to the parent directory (with trailing slash). I'd make a case for adding canonical URLs to all pages.
Also, you currently have a number of redirect chains e.g.
http://www.strutta.com/resources/posts/share-your-contests-and-sweepstakes-all-over-social-media 301 redirects to http://www.strutta.com/resources which 302 redirects to http://www.strutta.com/resources/.
You need to find the original redirect and change it to a 301 pointing straight at the trailing-slash version of the directory. Screaming Frog can help you find these redirect chains.
-
Hi Danny!
I don't have much to add here, I think the guys have it right in that you'll need to figure out how to make the 301 work. I quickly read that documentation, then realized I wasn't a robot, so I found this: http://aws.typepad.com/aws/2012/10/amazon-s3-support-for-website-redirects.html which was a bit more friendly.
I wish I could help you out more, but I'm not using AWS. I'm assuming you'll be able to use wildcard or regex matching somewhere, and that should solve your problem.
Great site by the way, anything you're using to help out with the static blog? (Jekyll, Octopress?)
-
Follow-up answer:
Our new website (Strutta.com) is entirely static, hosted on S3. No Apache, just straight HTML files, and no Apache means no .htaccess.
Instead of .htaccess, we have to use the S3 console: http://docs.aws.amazon.com/AmazonS3/latest/dev/how-to-page-redirect.html
As far as I can tell, this sets up redirects the same way. Although this doesn't answer my initial question, I'm going to try using the control panel later today to see if 301ing the directories there to include the trailing slash gets recognized ahead of whatever is currently causing the 302.
-
Thanks all,
I think the problem is coming from the fact that we're hosted on Amazon Web Services, and the devs are using the "aws bucket config" settings to set up redirects instead of .htaccess. SEO vs. Dev battle time.
-
Hey Danny,
As Maximilian suggested above, the best solution is going to be to change those 302s to 301s. I generally like to redirect to trailing-slash URLs for directories and non-trailing-slash URLs for files/pages (that's the standard convention). In practice, hardly anyone who links organically includes a trailing slash when linking to a page, but when it's the homepage I don't worry about it too much; browsers and Google can figure that out.
Basically, you need to figure out where the 302 is coming from; hopefully it's in your .htaccess file. If you can edit your .htaccess file, change that to a 301 redirect, or remove the redirect and just use a canonical URL pointing at the slash version of the page. I would prefer to go with the 301, though. Just be sure to look at how these redirects are implemented and in what order; you don't want to end up with redirect chains either.
Can you get access to your .htaccess file or is the server running something funky?
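If you can, a minimal sketch of the fix might look like this (assuming mod_rewrite is available and that the 302 is coming from a rewrite rule you can replace):

```apache
# Sketch only: force a 301 to the trailing-slash URL.
RewriteEngine On
# Only act on requests that resolve to a real directory...
RewriteCond %{REQUEST_FILENAME} -d
# ...and don't already end in a slash.
RewriteRule ^(.*[^/])$ /$1/ [R=301,L]
```

Worth noting that Apache's mod_dir normally handles this on its own (DirectorySlash is on by default and issues a 301), so if you're seeing a 302 on an Apache box, something else is intercepting the request.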
-
Perhaps this is too obvious, but can you not change the 302s to 301s?