Redirect old URLs from referring sites?
-
Hi
I have just come across some URLs from the previous web designer, and the site structure has since changed.
However, there are still links on other websites pointing at the old deep URLs.
Without having to contact each site, is there a way to automatically redirect the links from the old structure
www.mydomain.com/show/english/index.aspx
to just
Many Thanks
-
Hi, yes I think so, but is there a way to check/run a test?
UPDATE - Yes, it is enabled; the site is on DreamHost.
-
Does the Apache server have mod_rewrite installed?
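One quick way to check (a minimal sketch, assuming .htaccess overrides are allowed; the /mod-rewrite-test path is made up for this test) is to drop a temporary rule into your .htaccess and then request that path. If it redirects you to the homepage, mod_rewrite is working; a 500 error usually points to a syntax problem, and no effect at all suggests the module or overrides are disabled.
# Temporary test - remove once you have confirmed mod_rewrite works
RewriteEngine On
# Requesting /mod-rewrite-test should send you to the site root
RewriteRule ^mod-rewrite-test$ / [R=302,L]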
-
Hi Robert
I am trying the following in a .htaccess file, but the old URL is not redirecting to the root URL:
RewriteEngine on
RedirectMatch 301 http://mydomain.co.uk/show/english/home.aspx http://www.mydomain.co.uk
RedirectMatch 301 http://www.mydomain.co.uk/show/english/home.aspx http://www.mydomain.co.uk
-
Sorry Ocelot, was tied up.
Here is a link to a resource from WebConfs.com.
On Moz, here is another good resource page on redirects (it is where I pulled the link above from).
On WebConfs.com, in the left sidebar, there is a redirect checker; once your redirect is in place, put in your URL and see how you did.
LMK if you get stuck.
-
Thanks Robert
But what is the correct code needed for the 301 in the .htaccess file? I am not getting it right at the moment and have been trying for the past few hours!
-
For yours: say you know the bad URL is www.example.com/how-happened/dont-know
and you have a how-happened page on the current site. In the .htaccess file, put your 301 from the bad link to the current URL and you will carry over any links pointing at it as well.
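A minimal sketch of what that could look like in .htaccess (assuming Apache; the paths below are only examples based on the URLs in this thread, so swap in your own old and new addresses). One gotcha: Redirect and RedirectMatch match against the URL path only (e.g. /show/english/home.aspx), not the full http://domain address, which is probably why the rules in the earlier attempt never fired.
# 301 one specific old page to the best matching page on the current site
Redirect 301 /how-happened/dont-know /how-happened/
# Or the same idea with mod_rewrite, sending an old .aspx page to the home page
RewriteEngine On
RewriteRule ^show/english/home\.aspx$ http://www.mydomain.co.uk/ [R=301,L]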
-
Thanks again Robert!
How would you redirect the old links to the relevant pages, as in my example of
www.mydomain.com/show/english/index.aspx
to just
Do you have any working examples that I could modify to suit my scenario above? Would this be done in a .htaccess file or in WMT? Or could I just recreate the directory structure and files with a 301 to the new page?
-
Ocelot,
I would use Screaming Frog to spider the site. If you don't have it, you can download the free version from their website. When you open it, you will see where to put the domain and, below that, a row of buttons - use Response Codes to find what 404s you have, then click on them and use the links button at the bottom to see the links to those pages.
There is no blanket way to redirect the bad links, if that is what you are asking. You can look at them in WMT, and for the ones that have authority, get traffic, etc., redirect those to relevant pages; for the others, you need a good 404 page that keeps visitors on the site.
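As a rough sketch of both pieces in .htaccess (assuming Apache; the paths are hypothetical examples drawn from this thread, and /404.html stands in for whatever custom error page you build):
RewriteEngine On
# Old deep URLs that still have authority or traffic get a 301 to relevant new pages
RewriteRule ^show/english/index\.aspx$ http://www.mydomain.com/ [R=301,L]
RewriteRule ^show/english/home\.aspx$ http://www.mydomain.com/ [R=301,L]
# Anything else that no longer exists falls through to a friendly 404 page
ErrorDocument 404 /404.html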
Hope that helps,
Related Questions
-
No: 'noindex' detected in 'robots' meta tag
Pages on my site show "No: 'noindex' detected in 'robots' meta tag". However, when I inspect the page's HTML, it does not show noindex; in fact, it shows index, follow. The majority of pages show the error and are not indexed by Google. Not sure why this is happening. The page below in Search Console shows the error above...
Technical SEO | Sean_White_Consult
-
Having issues with redirects not working and old links in the SERPs
We just migrated a site and built a redirect map from Site A to Site B. If there were old redirects made for Site A that weren't pulled when pulling internal links for Site A, do those also need to be redirected to Site B to eliminate a redirect chain? I cannot figure out why old links are still showing up - does it take a few days for Google to figure out these are not real pages?
Technical SEO | Ideas-Collide
-
Moving WordPress to its own server
Our company wants to remove WordPress from our current Windows OS server at provider 1 and move it to a new server at provider 2. GoDaddy handles our DNS. I would like to have it on the same domain without masking. I would like to make a DNS entry on GoDaddy so that our current server and our new server can use the same URL (i.e. sellstuff.com), but I only want the DNS to direct traffic to our current server. The goal here is to have the new server using the same URL as the old server so nothing needs to be masked once traffic is redirected with a 301 rule in the htaccess file, but no traffic outside of the 301 rule will end up going to the new server. I would then like to edit the htaccess file on our current server to redirect to the new server's IP address when someone goes to sellstuff.com/blog. Does this make sense, and is it possible?
Technical SEO | larsonElectronics
-
Sitemap URLs not being indexed
There is an issue on one of our sites regarding many of the sitemap URLs not being indexed (at least 70% are not being indexed). The URLs in the sitemap are normal URLs without any strange characters attached to them, but after looking into it, it seems a lot of the URLs get a '#.' plus a number sequence attached to them once you actually go to that URL. We are not sure if the "addthis" bookmark could cause this, or if it's another script doing it. For example:
URL in the sitemap: http://example.com/example-category/0246
URL once you actually go to that link: http://example.com/example-category/0246#.VR5a
Just for further information, the XML file does not have any style information associated with it and is in its most basic form. Has anyone had similar issues with their sitemap not being indexed properly? Could this be the cause of many of these URLs not being indexed? Thanks all for your help.
Technical SEO | GreenStone
-
What's wrong with this robots.txt
Hi, really struggling with the robots.txt file. This is it:
User-agent: *
Disallow: /product/ #old sitemap
Disallow: /media/name.xml
When testing in w3c.org everything looks good, testing is okay, but when uploading it to the server, Google Webmaster Tools gives 3 errors. Checked it with my colleague and we both don't know what's wrong. Can someone take a look at this and give me the solution? Thanks in advance!
Technical SEO | Leonie-Kramer
-
Inconsistent page titles in SERPs
I encountered a strange phenomenon lately and I'd like to hear if you have any idea what's causing it. For the past couple of weeks I've seen some of our Google rankings getting unstable. While looking for a cause, I found that for some pages, the Google results display a different page title than the actual meta title of the page. Examples:
http://www.atexopleiding.nl
Meta title: Atex cursus opleider met ruim 40 jaar ervaring - Atexopleiding.nl
Title in SERP: Atexopleiding.nl: Atex cursus opleider met ruim 40 jaar ervaring
http://www.reedbusinessopleidingen.nl/opleidingen/veiligheid/veiligheidskunde
Meta title: Opleiding Veiligheidskunde, MBO & HBO - Reed Business Opleidingen
Title in SERP: Veiligheidskunde - Reed Business Opleidingen
http://www.pbna.com/vca-examens/
Meta title: Behaal uw VCA diploma bij de grootste van Nederland - PBNA
Title in SERP: VCA Examens - PBNA
I've looked in the source code and fetched some pages as Googlebot in WMT, but the title shown in the SERP doesn't even exist in the source code. Now I suspect this might have something to do with the "cookiewall" implemented on our sites. Here's why:
The cookiewall was implemented at the end of January.
The problem didn't exist until recently, though I can't pinpoint an exact date.
The problem exists on rbo.nl, atexopleiding.nl & pbna.com, the latter running on SilverStripe CMS instead of WP. This rules out CMS-specific causes.
The image previews in the SERPs of many pages show the cookie alert overlay.
However, I'm not able to technically prove that the cookie script causes this and I'd like to rule out any other obvious causes before I "blame it on the cookies" :). What do you think?
Technical SEO | RBO
-
Ignore URL parameters without the 'parameter=' ?
We are working on an ecommerce site that sorts the products by color and size, but it doesn't use 'sortby=' - it uses 'sortby/'. Can we tell Google to ignore the 'sortby/' parameter in Webmaster Tools even though it is not followed by an = sign? For example:
www.mysite.com/shirts/tshirts/shopby/size-m
www.mysite.com/shirts/tshirts/shopby/color-black
Can we tell WMT to ignore the 'shopby/' parameter so that only the tshirts page will be indexed? Or does the shopby have to be set up as 'shopby='? Thanks!
Technical SEO | Hakkasan
-
What's the best way to deal with an entire existing site moving from http to https?
I have a client that just switched their entire site from standard unsecure (http) to secure (https) because of over-zealous compliance issues around protecting personal information in the health care realm. They currently have the server set up to 302 redirect from the http version of a URL to the https version. My first inclination was to have them simply update that to a 301 and be done with it, but I'd prefer not to have to 301 every URL on the site. I know that putting a rel="canonical" tag on every page that refers to the http version of the URL is a best practice (http://www.google.com/support/webmasters/bin/answer.py?hl=en&answer=139394), but should I leave the 302 redirects or update them to 301s? Something seems off to me about the search engines visiting an http page, getting 301 redirected to an https page, and then being told by the canonical tag that it's actually the URL they were just 301 redirected from.
Technical SEO | JasonCooper