Moz Q&A is closed.
After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content (many posts will still be viewable), we have locked both new posts and new replies.
Vanity URLs and HTTP codes
-
We have a vanity URL that, as recommended, uses a 301 redirect. However, we've discovered that the destination URL needs to be updated, which creates a problem since most browsers and search engines cache 301 redirects.
Is there a good way to figure out when a vanity URL should use a 301 versus a 302/307?
If all vanity URLs should use a 301, what is the proper way to update the destination URL?
Is it a good rule of thumb to use a 302 if the vanity URL is only temporary and could point to a new destination URL down the road, and a 301 for everything else?
Cheers,
-
As I said in the last paragraph, if this is temporary, 302-redirect the original destination URL to the new destination URL, and 302-redirect the vanity URL to the new destination as well. If these changes are permanent, make them 301s instead of 302s.
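To make the 301/302 distinction concrete, here is a minimal sketch of a redirect handler using only Python's standard library. The paths and destination URLs are hypothetical placeholders (one borrows the www.bob.com example used elsewhere in this thread), not anyone's real configuration:

```python
# A minimal sketch of serving 301 (permanent) vs 302 (temporary)
# redirects; the paths and destinations below are hypothetical examples.
from http.server import BaseHTTPRequestHandler, HTTPServer

REDIRECTS = {
    # vanity path: (status code, destination URL)
    "/hello": (301, "https://www.bob.com/directory/folder/file.aspx"),  # permanent, safe to cache
    "/promo": (302, "https://www.bob.com/summer-sale.aspx"),            # temporary, may change later
}

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path in REDIRECTS:
            status, destination = REDIRECTS[self.path]
            self.send_response(status)
            self.send_header("Location", destination)
            self.end_headers()
        else:
            self.send_error(404, "No such vanity URL")

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), RedirectHandler).serve_forever()
```

The practical difference is entirely in how clients treat the response: a 301 tells browsers and crawlers they may cache the mapping and skip the original URL in future, while a 302/307 tells them to re-check the original URL every time.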
-
That partly answers my question. How do you handle the cached redirect for the original 301, which now points to an invalid URL?
Example: www.bob.com/hello points to www.bob.com/directory/folder/file.aspx.
It now needs to point to www.bob.com/directory/folder2/file2.aspx.
If browsers and search engines cache the first 301 (since it's meant to be permanent), visitors who have already hit the first URL will not be passed along to the new one.
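One way to see what the server is actually returning at each hop, bypassing whatever your browser has already cached, is to walk the redirect chain by hand. A quick sketch, assuming the third-party requests library is installed:

```python
# Follow a redirect chain one hop at a time, printing each status code.
# Because this never reuses a cached response, it shows the server's
# current behaviour rather than what a browser remembers.
from urllib.parse import urljoin
import requests

def trace_redirects(url, max_hops=10):
    for _ in range(max_hops):
        response = requests.get(url, allow_redirects=False)
        print(response.status_code, url)
        if response.status_code not in (301, 302, 303, 307, 308):
            break
        # Location headers may be relative, so resolve against the current URL.
        url = urljoin(url, response.headers["Location"])

trace_redirects("https://www.bob.com/hello")
```

Note that nothing you change server-side can purge a 301 that a visitor's browser has already cached. The usual workaround is exactly what the reply above describes: add a redirect at the old destination URL itself, so cached visitors who land there are forwarded on to the new page.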
-
Let me make sure I understand you. You have a vanity URL, something like bit.ly, that redirects to your website, say bitly.com. This redirect is a 301 permanent redirect.
Are you asking what you should do with the redirect if bitly.com changes? That's how I understood the question. So say bitly.com now moves to bitly.com/new, or something along those lines.
If this is the case, all you want to do is change your 301 redirect of bit.ly to point at the new destination URL, and keep it a 301. That is, unless bitly.com/new is only a temporary URL; if it will be reverting back to bitly.com, then don't do that.
Instead, when you redirect bitly.com to bitly.com/new, use a 302 redirect, keeping the 301 from bit.ly to bitly.com intact. Hopefully that answers your question. Let me know if your scenario is different.
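Expressed as a redirect table keyed by host, the setup described above might look like the following (a sketch reusing the handler pattern from the earlier example, with this thread's placeholder hostnames):

```python
# The chain described above: the vanity 301 stays intact, and the
# temporary move is layered on top as a separate 302 hop.
REDIRECTS = {
    "bit.ly":    (301, "https://bitly.com/"),     # vanity -> site: permanent, safe to cache
    "bitly.com": (302, "https://bitly.com/new"),  # site -> temporary home: easy to revert
}
# If /new later becomes the permanent home, point the vanity 301
# straight at it and turn the second hop into a 301 as well.
```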
Related Questions
-
Website's server IP address is redirected to blog by mistake; how does Google respond?
Hi all, Our website's server IP address was set to redirect to our blog by mistake, and it stayed that way for months. Does Google recognise this, and if so, how does it respond? Thanks
Algorithm Updates | vtmoz
-
What does it mean to build a 'good' website?
Hi guys. I've heard a lot of SEO professionals, Google, (and Rand in a couple of Whiteboard Fridays) say it's really important to build a 'good' website if you want to rank well. What does this mean in more practical terms? (Context: I've found some sites rank much better than they 'should' based on the competition. However, when I built my own site, well-optimised on-page based on thorough keyword research, it was nowhere to be found: not even in the top 50 after I'd 'matched' the backlink profile of others on page 1. I can only put this down to 'good quality website' signals lacking in the latter example. I'm not a web developer, so the website was a pretty basic WordPress site.)
Algorithm Updates | isaac663
-
Can 'Jump link'/'Anchor tag' URLs rank in Google for keywords?
E.g. www.website.com/page/#keyword-anchor-text, where the part after the # is a section of the page you can jump to, and the title of that section is a secondary keyword you want the page to rank for?
Algorithm Updates | rwat
-
Link reclamation and many 301 redirects to one URL
We have many incoming links to non-existent pages of a sub-domain, which we are planning to take down or redirect to a sub-directory. But we are not ready to lose PageRank or link juice, as many URLs on this sub-domain are referenced from different external sites. It's obviously going to be a double redirect. What is the best thing we can do to reclaim these links without loss of link juice or PR? Can we redirect all these links to the same sub-domain, and redirect that sub-domain to the sub-directory? Will this double redirect work? Or can we redirect all these links to the same sub-domain and ask visitors to visit the sub-directory, as a manual redirection? How fair is it to manually redirect visitors? Any other options? Thanks, Satish
Algorithm Updates | vtmoz
-
Google indexing HTTPS sites by default now; where's the Moz blog about it?!
Hello and good morning / happy Friday! Last night an article from, of all places, Venture Beat titled "Google Search starts indexing and letting users stream Android apps without matching web content" was sent to me. As I read this I got a bit giddy, since we had just implemented a full sitewide HTTPS cert rather than a cart-only SSL. I then quickly searched for other sources to see if this was indeed true, and the writing on the walls seems to indicate so:
- Google Webmaster Blog: http://googlewebmastercentral.blogspot.in/2015/12/indexing-https-pages-by-default.html
- http://www.searchenginejournal.com/google-to-prioritize-the-indexing-of-https-pages/147179/
- http://www.tomshardware.com/news/google-indexing-https-by-default,30781.html
- https://hacked.com/google-will-begin-indexing-httpsencrypted-pages-default/
- https://www.seroundtable.com/google-app-indexing-documentation-updated-21345.html

I found it a bit ironic to read about this on mostly unsecured sites. I wanted to hear about the 8 key rules that Google will factor in when ranking/indexing HTTPS pages from now on, and see what you all felt about this. Google will now begin to index HTTPS equivalents of HTTP web pages, even when the former don't have any links to them. However, Google will only index an HTTPS URL if it meets these conditions:
- It doesn't contain insecure dependencies.
- It isn't blocked from crawling by robots.txt.
- It doesn't redirect users to or through an insecure HTTP page.
- It doesn't have a rel="canonical" link to the HTTP page.
- It doesn't contain a noindex robots meta tag.
- It doesn't have on-host outlinks to HTTP URLs.
- The sitemap lists the HTTPS URL, or doesn't list the HTTP version of the URL.
- The server has a valid TLS certificate.

One rule that confuses me a bit is: "It doesn't redirect users to or through an insecure HTTP page." Does this mean that if you just moved over to HTTPS from HTTP, your site won't pick up the HTTPS boost, since most sites have HTTP-to-HTTPS redirects? Thank you!
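For what it's worth, a few of the conditions listed above can be spot-checked from the page's HTML. Here is a rough sketch (it assumes the requests library, and the regex matching is naive and attribute-order-sensitive, so treat it as illustration rather than an audit tool):

```python
# Rough spot-check of a few of the HTTPS-indexing conditions listed
# above. It only inspects the returned HTML; robots.txt, sitemaps and
# TLS certificate validity would need separate checks.
import re
import requests

def check_https_page(url):
    assert url.startswith("https://"), "check the HTTPS version of the URL"
    html = requests.get(url).text.lower()

    # Condition: no noindex robots meta tag.
    if re.search(r'<meta[^>]+name=["\']robots["\'][^>]*noindex', html):
        print("FAIL: page carries a noindex robots meta tag")

    # Condition: no rel="canonical" link to the HTTP page.
    if re.search(r'<link[^>]+rel=["\']canonical["\'][^>]*href=["\']http://', html):
        print("FAIL: canonical link points to the insecure HTTP page")

    # Condition: no insecure dependencies or on-host HTTP outlinks.
    if re.search(r'(?:src|href)=["\']http://', html):
        print("WARN: page references at least one http:// URL")

check_https_page("https://www.example.com/")
```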
Algorithm Updates | Deacyde
-
Case Sensitive URL Redirects for SEO
We want to use a 301 redirect rule to redirect all pages to a lowercase URL format. A 301 passes along most of the link juice... most. Will we see a negative impact on PageRank/SERPs when we redirect every single page on our site?
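As an aside, the lowercasing rule itself is easy to express. A minimal sketch as Python WSGI middleware (real deployments would more often use the web server's own rewrite rules, but the logic is identical):

```python
# Minimal sketch: 301-redirect any request whose path contains
# uppercase characters to the all-lowercase form of the same path.
# (Query strings are ignored here for brevity.)
def lowercase_redirect_middleware(app):
    def middleware(environ, start_response):
        path = environ.get("PATH_INFO", "")
        if path != path.lower():
            start_response("301 Moved Permanently",
                           [("Location", path.lower())])
            return [b""]
        return app(environ, start_response)
    return middleware
```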
Algorithm Updates | tcanders
-
Client's site dropped completely from Google - AGAIN! Please help...
OK guys, hoping someone out there can help... (kinda long, but wanted to be sure all the details were out there)

Already had this happen once, even posted in here about it: http://www.seomoz.org/q/client-s-site-dropped-completely-for-all-keywords-but-not-brand-name-not-manual-penalty-help Guy was a brand new client; all we did was tweak title tags and add a bit of content to his site since most was generic boilerplate text... started on our KW research and competitor research... in just a week, from title tag and content tweaks alone, he went from ranking on page 4-5 to ranking on page 3-4... then as we sat down to really optimize his site... POOF, he was gone from the Googs... He only showed up in "site:" searches and for exact matches of his business name; everything else was gone.

Posted in here and on WMT, had several people check it out, both local guys and people from here (thanks to John Doherty for trying!), but no one could figure out any reason why it would have happened. We submitted a reconsideration request, explaining that we knew we hadn't violated any quality guidelines, that he had fewer than 10 backlinks so it couldn't be bad linking, and that we had hardly touched the site. They sent back a canned response a week later that said there was no manual penalty and that we should "check our content". Mysteriously, the site started to show back up in the SERPs that morning (we got the canned response in the afternoon). There WAS an issue with NAP mismatch on some citations, but we fixed that, and that shouldn't have contributed to complete disappearance anyway.

SO, the site was back, and back at its page 3 or 4 position... we decided to leave it alone for a few days just to be sure we didn't do anything... and then just 6 days later, when we were sitting down to fully optimize the site... POOF, completely gone again. We do SEO for a lot of different car dealers all over the country, and I know our strategies work. Looking at the competition in his market, he should easily be ranked page 2 or 3 with the very minimal tweaking we did... AND, since we didn't change anything since he came back, it makes even less sense that he was visible for a week and then gone again.

So, mozzers... Anybody got any ideas? I'm really at a loss here; it makes zero sense that he's completely gone, except for his biz name... if nothing else, he should be ranking for "used cars canton"... Definitely appreciate any help anyone can offer.
Algorithm Updates | Greg_Gifford
-
Should I block non-informative pages from Google's index?
Our site has about 1,000 pages indexed, and the vast majority of them are not useful and/or contain little content. Some of these are:
- Galleries
- Pages of images with no text except for navigation
- Popup windows that contain further information about something but contain no navigation, and sometimes only a couple of sentences

My question is whether or not I should put a noindex in the meta tags. I think it would be good because the ratio of quality to low-quality pages right now is not good at all. I am apprehensive because if I'm blocking more than half my site from Google, won't Google see that as a suspicious or bad practice?
Algorithm Updates | UnderRugSwept