301 Redirect "wildcard" question
-
I have been looking at the SEOmoz redirect guide for some advice, but I can't seem to find the answer: http://www.seomoz.org/learn-seo/redirection
I have lots of URLs from a previous version of the site that look like the following:
/-c-25.html?blahblahblah
etc.
I want to write a redirect so that whenever a URL containing "-c-25.html" is requested, it redirects to a specified page, regardless of what comes after the question mark.
These URLs were created by our previous ecommerce software. The 'c' stands for category, and each page of a category created a different URL. I want to do this so I can redirect all of these URLs to the appropriate new category page with a single redirect.
Thanks for any help.
-
When I did a similar transition with hundreds of thousands of links, I created a database table with source and destination columns, then a script that handles all 404 requests. If the requested link matches an entry in the source column, the user is sent a 301 to the matching destination entry. That allowed for easier maintenance than a huge .htaccess file, and the server load caused by the script should go down over time as the 301s get picked up and you contact site owners to update their links. The other benefit is that you can do enhanced tracking to see what is requested, what is found and not found, and where those visitors came from.
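For anyone wanting a concrete starting point, here is a minimal PHP sketch of that approach, assuming a redirects table with source and destination columns; the table name, connection details and logging are placeholders, not anything from the original setup:
<?php
// Minimal 404-handler sketch: look up the requested path in a redirects table
// and send a 301 if a mapping exists. Table and connection details are assumptions.
$pdo = new PDO('mysql:host=localhost;dbname=site', 'db_user', 'db_pass');

$requested = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);

$stmt = $pdo->prepare('SELECT destination FROM redirects WHERE source = :source LIMIT 1');
$stmt->execute([':source' => $requested]);
$destination = $stmt->fetchColumn();

if ($destination) {
    // Known old URL: permanent redirect to the mapped new URL.
    header('HTTP/1.1 301 Moved Permanently');
    header('Location: ' . $destination);
    exit;
}

// No mapping found: record the miss for the tracking mentioned above,
// then fall through to a normal 404 response.
error_log('404 with no redirect mapping: ' . $requested);
header('HTTP/1.1 404 Not Found');
?>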
-
An easy way is to use RedirectMatch, example:
RedirectMatch 301 /-c-25.html http://www.domain.com/new-category
Drop the above into a .htaccess file, but test that it works as you expect first.
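One thing worth flagging with RedirectMatch, if I understand the thread correctly: the pattern is a regular expression matched against the URL path, and, as noted further down with the plain Redirect directive, the query string gets carried over to the new URL (that is the ?blahblahblah behaviour described below). A slightly tighter, anchored version might look like this, with the domain and target page as placeholders:
RedirectMatch 301 ^/.*-c-25\.html$ http://www.domain.com/new-category
Dropping the query string as well needs mod_rewrite rather than mod_alias; see the RewriteRule suggestions below.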
-
OK, if I make it the first redirect then the redirection works, regardless of what is written after the '-c-21.html'.
However, the redirect is retaining the erroneous URL data after redirection. It is adding the "?blahblahblah" to the end of the new URL. I want it to dispose of this so that all of these redirects are routed to just one URL. How do I instruct it not to include this unwanted data in the new URL?
Thanks
-
Order matters in rewrites. You will have to place that RewriteRule above the others.
-
I thought that might do it, but still nothing. Maybe I am entering it wrong? Here is the code in the .htaccess:
RewriteEngine On
RewriteBase /test/
RewriteRule ^index.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /test/index.php [L]
RewriteRule ^-c-21.html(.*)$ http://www.mysitename.com/test/category/t-shirts/dolphin_tshirts [R=301,L]
The redirect just doesn't happen.
EDIT: If I write a standard redirect (Redirect 301 /test/-c-21.html ...) then it will redirect to the desired page, but it retains the ?blahblah and adds it to the new URL. I want it to work like this but discard the ?blahblahblah after redirecting.
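For what it's worth, here is a sketch of how that .htaccess could be ordered so the old-category rule fires before the catch-all front-controller rule and discards the query string. The target URL is the one from the post above; the bare ? at the end of the substitution is what tells mod_rewrite to drop the original query string, and on Apache 2.4+ the QSD flag does the same job:
RewriteEngine On
RewriteBase /test/
# Old category URL: handle the redirect before the front-controller rules,
# and end the substitution with ? so the ?blahblahblah part is discarded.
RewriteRule ^-c-21.html(.*)$ http://www.mysitename.com/test/category/t-shirts/dolphin_tshirts? [R=301,L]
# Existing front-controller rules from the post above
RewriteRule ^index.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /test/index.php [L]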
-
If you need these to be 301 redirects...
RewriteRule ^-c-25.html(.*)$ http://www.yoursite.com/dolphin_tshirts [R=301,L]
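If the ?blahblahblah part should be stripped as well, as the question asks, the same rule could add the QSD (query string discard) flag, which I believe is available on Apache 2.4 and later; on older versions, ending the target with a bare ? has the same effect:
RewriteRule ^-c-25.html(.*)$ http://www.yoursite.com/dolphin_tshirts [QSD,R=301,L]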
-
Just to clarify, I need a URL that has
/-c-25.html?blahblahblah
to change to:
/dolphin_tshirts
Regardless of what is written in the blahblahblah part.
-
I think that would probably work for him, assuming that the category IDs remain the same.
-
Would something like this work:
RewriteRule ^-c-(.*).html(.*)$ category/$1.html$2 [R,NC]
I've not tested it, nor do I claim to be an expert, but I think it will work for what you're trying to achieve, e.g. -c-25.html becomes category/25.html
-
If your site is in PHP, you could simply add the code...
<?php
$targetURL = "http://www.sitename.com/whatever-page-you-want";
if (stristr($_SERVER['REQUEST_URI'], "-c-25.html")) {
    header("HTTP/1.1 301 Moved Permanently");
    header("Location: $targetURL");
    exit; // stop the script once the redirect headers have been sent
}
?>
If you don't have access to PHP, you could add a line like this to your .htaccess file...
RewriteCond %{THE_REQUEST} (c-25.html) [NC]
RewriteRule .* http://www.sitename.com/your-target-page [L,R=301]
Someone might want to double-check me on that RewriteRule above, though.
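Since that answer asks for a double-check: %{THE_REQUEST} contains the raw HTTP request line, query string included, so the condition should match those old URLs. The one tweak I would hedge on is ending the target with a bare ? so the original query string is not re-appended to the new URL, roughly like this, with the same placeholder domain and target page as above:
RewriteEngine On
RewriteCond %{THE_REQUEST} (c-25\.html) [NC]
RewriteRule .* http://www.sitename.com/your-target-page? [L,R=301]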