Moz Q&A is closed.
After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content - many posts will still be viewable - we have locked both new posts and new replies.
Redirecting blog.mydomain.com to www.mydomain.com/blog
-
This is more of a technical question than pure SEO per se, but I am guessing that some folks here may have covered this, so I would appreciate any advice.
I am moving from a WordPress.com-hosted blog to a self-hosted WordPress installation on my own server (as suggested by folks in another thread here).
As part of this I want to move from the blog.mydomain.com format to www.mydomain.com/blog. I have installed WordPress on my server and imported the posts from the hosted site.
How should I manage the transition from the first format to the second? I have a bunch of links on Facebook, etc. that refer to URLs in the blog.mydomain.com format, so it's important that I redirect them.
I am running DotNetNuke/WordPress on my own IIS/ASP.Net servers.
Thanks.
Mark
-
Thank you. Yes, that's pretty much the plan I am executing now. Right now I'm struggling to get this working with the URL Rewrite module in IIS 7, but I am sure it's possible.
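For anyone else attempting this, the kind of rule I'm trying to get working in web.config looks roughly like this - a sketch only, with placeholder hostnames and rule name, and not yet tested on my setup:

    <configuration>
      <system.webServer>
        <rewrite>
          <rules>
            <!-- Catch any request arriving on the old blog subdomain -->
            <rule name="Blog subdomain to /blog" stopProcessing="true">
              <match url="(.*)" />
              <conditions>
                <add input="{HTTP_HOST}" pattern="^blog\.mydomain\.com$" />
              </conditions>
              <!-- Permanent (301) redirect, preserving the original path -->
              <action type="Redirect" url="http://www.mydomain.com/blog/{R:1}" redirectType="Permanent" />
            </rule>
          </rules>
        </rewrite>
      </system.webServer>
    </configuration>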
Thanks again.
Mark
-
Yes, do what Alan is suggesting.
Create the blog.yourdomain.com folder on your own server and then put in 301 redirects from blog.yourdomain.com to www.yourdomain.com/blog.
After the redirects are set up, change your DNS so blog.yourdomain.com points at your own installation instead of WordPress.com.
On Apache servers you just need to create an .htaccess file in your blog.yourdomain.com folder, but I don't have any experience with IIS/ASP.NET servers.
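Something along these lines in that .htaccess should do it - an untested sketch, so swap in your real domain:

    # Send every request on the old blog subdomain to the /blog folder on www (301)
    RewriteEngine On
    RewriteCond %{HTTP_HOST} ^blog\.yourdomain\.com$ [NC]
    RewriteRule ^(.*)$ http://www.yourdomain.com/blog/$1 [R=301,L]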
-
Ah, gotcha. I paused when I first read this and was remiss in asking for clarification. So if you have full control, you're in better shape to do it yourself.
Set up the DNS so that blog.yourdomain.com points to your server; then you can implement the server-level 301s on that subdomain yourself.
-
Thank you, Alan. I want to make sure I understand this.
I have full control of my DNS zone entries. I currently have a CNAME record for blog pointing to myblog.wordpress.com. My hope is that I could:
- Update the DNS entry to point to my own server (so blog.mydomain.com would just be directed to that machine)
- Implement some sort of server-side redirect that translates the old format to the new format.
This way I have no reason to keep WordPress.com in the picture (with its redirection service) - I basically just create new links to www.mydomain.com and have all old links redirected as above.
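In zone-file terms I'm picturing a change roughly like this (the IP is a placeholder and the exact syntax will depend on the DNS provider):

    ; current record, delegating the subdomain to WordPress.com
    blog    IN  CNAME   myblog.wordpress.com.
    ; replacement, pointing the subdomain at my own IIS box instead
    blog    IN  A       203.0.113.10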
Would that not work?
Thanks again.
-
Hi Mark
You are going to need to rely on WordPress.com's own 301 redirect solution. 301 redirects have to happen on the server where the original content resided (you can't set up a 301 redirect on your own site's server, since the original files and domain weren't hosted there).
Here's the official solution: http://en.support.wordpress.com/site-redirect/
Related Questions
-
Does <base> in HTML affect SEO?
Hey, just want to know: does <base> in the head of a website affect SEO? And if yes, how?
Technical SEO | m17001
-
301 Redirects, Sitemaps and Indexing - How to hide redirected URLs from search engines?
We have several pages in our site like this one, http://www.spectralink.com/solutions, which redirect to a deeper page, http://www.spectralink.com/solutions/work-smarter-not-harder. Both URLs are listed in the sitemap and both pages are being indexed. Should we remove those redirecting pages from the sitemap? Should we prevent the redirecting URL from being indexed? If so, what's the best way to do that?
Technical SEO | HeroDesignStudio0
-
301 Redirects in subfolders
Hi, we're making our site into a static site but I would like to transfer the Google juice. Most of the links and database content live in subfolders, though. Could I simply do 301 redirects on the subfolders and retain the value, or does it have to be on the full domain?
Technical SEO | Therealmattyd0
-
<sub> and <sup> tags, any SEO issues?
Hi - the content on our corporate website is pretty technical, and we include chemical element codes in the text that users would search on (like SO2, CO2, etc.). A lot of the time our engineers request that we list the codes correctly, with a <sub> on the last number. Question - does adding this markup into the keyword affect SEO? The code would look like SO<sub>2</sub>. Thanks.
Technical SEO | Jenny10
-
/~username
Hello, the utility on this site that crawls your site and highlights what it sees as potential problems reported an issue with /~username access, seeing it as duplicate content, i.e. mydomain.com/file.htm is the same as mydomain.com/~username/file.htm. So I went to my server hosts and they disabled it using mod_userdir, but GWT now gives loads of 404 errors. Have I gone about this the wrong way, or was it not really a problem in the first place, or have I fixed something that wasn't broken and made things worse? Thanks, Ian
Technical SEO | jwdl0
-
Home Page /index.htm and .com Duplicate Page Content/Title
I have been whittling away at the duplicate content on my clients' sites, thanks to SEOmoz's Pro report, and have been getting pushback from the account manager at register.com (the site was built there and the owner doesn't want to move it). He says these are the exact same page and he can't access one to redirect to the other. Any suggestions? The SEOmoz report says there is duplicate content on both these URLs:
Durango Mountain Biking | Durango Mountain Resort - Cascade Village http://www.cascadevillagehotel.com/index.htm
Durango Mountain Biking | Durango Mountain Resort - Cascade Village http://www.cascadevillagehotel.com/
Your help is greatly appreciated! Sheryl
Technical SEO | TOMMarketingLtd.0
-
Rel=Canonical, WWW vs non WWW and SEO
Okay, so I'm a bit at a loss here. For whatever reason, just about every single WordPress site I have will turn www.mysite.com into mysite.com in the browser bar. I assume this is the rel=canonical tag at work; there are no 301s on my site. When I use Open Site Explorer and type in www.mysite.com it shows a domain authority of around 40 and a few hundred backlinks... and then I get the message: "Oh Hey! It looks like that URL redirects to XXXXXX. Would you like to see data for that URL instead?" So if I click to see this data instead, I have less than half of that domain authority and about 2 backlinks. Does this make a difference SEO-wise? Should my non-WWW be redirecting to my WWW instead, because that's where the domain authority and backlinks are? Why am I getting two different domain authority and backlink counts if they are essentially the same? Or am I wrong, and all that link juice and authority passes just the same?
Technical SEO | twilightofidols0
-
OK to block /js/ folder using robots.txt?
I know Matt Cutts suggests we allow bots to crawl CSS and JavaScript folders (http://www.youtube.com/watch?v=PNEipHjsEPU), but what if you have lots and lots of JS and you don't want to waste precious crawl resources? Also, as we update and improve the JavaScript on our site, we iterate the version number ?v=1.1... 1.2... 1.3... etc., and the legacy versions show up in Google Webmaster Tools as 404s. For example:
http://www.discoverafrica.com/js/global_functions.js?v=1.1
http://www.discoverafrica.com/js/jquery.cookie.js?v=1.1
http://www.discoverafrica.com/js/global.js?v=1.2
http://www.discoverafrica.com/js/jquery.validate.min.js?v=1.1
http://www.discoverafrica.com/js/json2.js?v=1.1
Wouldn't it just be easier to prevent Googlebot from crawling the js folder altogether? Isn't that what robots.txt was made for? Just to be clear - we are NOT doing any sneaky redirects or other dodgy JavaScript hacks. We're just trying to power our content and UX elegantly with JavaScript. What do you guys say: obey Matt, or run the JavaScript gauntlet?
Technical SEO | AndreVanKets0
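For reference, the blanket block being debated would only take a couple of lines in robots.txt - the /js/ path is taken from the question above, and whether blocking it is wise is exactly the open question:

    # Sketch only: stop Googlebot from crawling the versioned JS files
    User-agent: Googlebot
    Disallow: /js/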