Which one is the best?
-
Dear SEO experts,
1.5 months ago I started an informative website on a blank, freshly registered domain name. Now, a month on, I've filled the website with content and done a lot of link building.
Yesterday I bought a domain name out of quarantine; it's around 6 years old and already has a bunch of backlinks.
What should I do next? The first one has good content and good recent link building. The second is a better domain name, it's older, it has old backlinks, and it also has higher PA and DA than the first one.
Should I go with the first one and 301 redirect the old domain name to the new one?
Or should I do it the opposite way: 301 redirect the new website to the old domain name, move all the content over, and try to carry the link building across to the older domain?
Hopefully someone can give me a great answer. Thank you so much!
Kind regards,
Menno
-
Thank you for your fast reply.
The old domain name is a .nl and the new domain is a .com; in my country, .nl ranks a little better than .com, so that counts as well. Still, I think I'll do a 301 redirect from the old domain to the new one. It might help a little. The old domain has indeed expired at some point, though it is aged and still has PA and DA.
Kind regards,
Menno
P.S. I made this a discussion because I'm also curious about other opinions.
-
Hi Menno,
when you do a 301 redirect from domain A to domain B, you tell search engines that domain A has moved permanently to domain B; within a few weeks or months, search engines will remove all of domain A's pages from their index.
Ideally, every page of domain A should have a counterpart on domain B.
If I were you, I'd 301 redirect the old domain to the new one.
Make sure the old domain is not an expired one; if it has expired, all of the backlinks' value is lost.
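For illustration, a minimal sketch of what that domain-to-domain redirect could look like, assuming the old domain is served by Apache with mod_rewrite enabled (the domain names are hypothetical placeholders):

    # .htaccess on the old domain: permanently redirect every URL
    # to the same path on the new domain.
    RewriteEngine On
    RewriteCond %{HTTP_HOST} ^(www\.)?old-domain\.nl$ [NC]
    RewriteRule ^(.*)$ https://www.new-domain.com/$1 [R=301,L]

Redirecting path-to-path like this, rather than sending everything to the new home page, is what lets each old page pass its value to its replacement.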
You may want to read these resources to find out more about 301 redirect best practices:
- http://www.seomoz.org/learn-seo/redirection
- http://googlewebmastercentral.blogspot.com/2008/04/best-practices-when-moving-your-site.html
- http://support.google.com/webmasters/bin/answer.py?hl=en&answer=83105
- http://searchengineland.com/do-links-from-expired-domains-count-with-google-17811
Good luck!
P.S. I'm not an SEO expert, so you'll want to read other opinions, too.
-
Related Questions
-
Best Practices for Image Optimisation
Hi guys, I would love some recommendations from you all. A potential client of mine currently hosts all of their website image galleries (of which there are many) on a Flickr account, and realises they could gain more leverage in Google Images; right now none of their images cover the basics of optimisation, e.g. filename, alt text, etc. I did say that these basics would at least need to be covered, and that image hosting is supposedly an important factor when it comes to driving traffic from Google Image Search (images hosted on the same domain as the text are potentially given more value than images hosted on another domain, such as Flickr). The client has now come back saying they have done some 'reading' which suggests a sub-domain could be the way to go, e.g. images.mydomain.com. I would love feedback on this before I go back to them, as it would be a huge undertaking. Cheers
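For illustration, a hedged sketch of what covering those basics with on-domain hosting might look like (domain, path, filename, and alt text are all hypothetical):

    <!-- Hosted on the client's own domain, with a descriptive filename
         and alt text instead of a generic camera-style name like DSC_0147.jpg. -->
    <img src="http://www.mydomain.com/images/oak-dining-table-six-seats.jpg"
         alt="Solid oak dining table with six matching chairs"
         width="800" height="533">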
Technical SEO | musthavemarketing
-
What is the best way to handle links that lead to a 404 page?
Hi Team Moz, I am working through a site cutover with an entirely new URL structure, and I have a bunch of pages that could not, would not, or just plain don't redirect to new pages. Steps I have taken:
- Submitted multiple new sitemaps with the new URLs; the indexing looks solid
- Used Webmaster Tools to remove URLs with natural result listings that did not redirect
- Completely built out new PPC campaigns with the new URL structure
- Contacted a few major link partners
Now here is my question: I have pages that produce 404s and are linked to in forums, Slickdeals, and the like, and they will not be redirected. Is disavowing these links the correct thing to do?
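If disavowing is the route taken, Google's disavow file is just a plain-text list, one URL or domain: entry per line, with # starting a comment. A minimal sketch with hypothetical entries:

    # Hypothetical disavow.txt - entries are examples only.
    # Disavow one specific linking page:
    http://forum.example.com/threads/old-deal-12345
    # Disavow every link from an entire domain:
    domain:spammy-links.example.net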
Technical SEO | mm916157
-
Best Place to 301 Redirect To?
Hey everyone! I have an old site with hundreds of blog posts that are very spammy (duplicate content, keyword-stuffed, and just plain bad content). I am going to delete them from WordPress and redirect them, but I'm wondering where the best place to redirect them to is: the home page, other posts, other pages...? Any thoughts would be appreciated! Thanks!
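One common pattern, sketched here for Apache with hypothetical URLs, is to map each deleted post to its most closely related live page where one exists, with a broader fallback for the rest:

    # .htaccess sketch (hypothetical URLs, Apache mod_alias).
    # Specific rules first: send a deleted post to its closest surviving page.
    Redirect 301 /blog/old-spammy-post/ https://www.example.com/guides/related-topic/
    # Fallback: any other deleted post goes to the blog index
    # (.+ rather than .* so /blog/ itself doesn't redirect to itself).
    RedirectMatch 301 ^/blog/.+$ https://www.example.com/blog/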
Technical SEO | adamxj2
-
How can I best handle parameters?
Thank you for your help in advance! I've read a ton of posts on this forum on this subject, and while they've been super helpful, I still don't feel entirely confident about what the right approach is. Forgive my very obvious noob questions; I'm still learning! The problem: I am launching a site (coursereport.com) which will feature a directory of schools. The URL for the schools directory will be coursereport.com/schools. The directory can be filtered by a handful of fields:
- Focus (ex: "Data Science")
- Cost (ex: "$<5000")
- City (ex: "Chicago")
- State/Province (ex: "Illinois")
- Country (ex: "Canada")
When a filter is applied to the directory page, the CMS produces a new page with URLs like these:
- coursereport.com/schools?focus=datascience&cost=$<5000&city=chicago
- coursereport.com/schools?cost=$>5000&city=buffalo&state=newyork
My questions:
1) Is the above parameter-based approach appropriate? I've seen other directory sites take a different approach that would transform my examples into more "normal" URLs:
coursereport.com/schools?focus=datascience&cost=$<5000&city=chicago
VERSUS
coursereport.com/schools/focus/datascience/cost/$<5000/city/chicago (no params at all)
2) Assuming I use either approach above, isn't it likely that I will have duplicate content issues? Each filter does change on-page content, but there could be instances where two different URLs with different filters applied produce identical content (ex: focus=datascience&city=chicago OR focus=datascience&state=illinois). Do I need to specify a canonical URL to solve that case? I understand at a high level how rel=canonical works, but I am having a hard time wrapping my head around which versions of the filtered results ought to be specified as the preferred ones. For example, would I just take all of the /schools?focus=X combinations and call those the canonical versions within any filtered page that contained other additional parameters like cost or city? Should I be changing page titles for the unique filtered URLs? I read through a few Google resources to try to better understand how to best configure URL params via Webmaster Tools. Is my best bet just to follow the advice in the article below, define the rules for each parameter there, and not worry about using rel=canonical?
https://support.google.com/webmasters/answer/1235687
An assortment of the other stuff I've read, for reference:
- http://www.wordtracker.com/academy/seo-clean-urls
- http://www.practicalecommerce.com/articles/3857-SEO-When-Product-Facets-and-Filters-Fail
- http://www.searchenginejournal.com/five-steps-to-seo-friendly-site-url-structure/59813/
- http://googlewebmastercentral.blogspot.com/2011/07/improved-handling-of-urls-with.html
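To make the rel=canonical side of question 2 concrete, a hedged sketch (reusing the hypothetical URLs above) of what a multi-filter page's head could contain, pointing at whichever single-filter version is chosen as preferred:

    <!-- In the <head> of /schools?focus=datascience&cost=$<5000&city=chicago -->
    <!-- Canonicalize to the version that keeps only the primary filter. -->
    <link rel="canonical" href="http://www.coursereport.com/schools?focus=datascience">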
Technical SEO | alovallo
-
What is the best way to find stranded pages?
I have a client with a site that has had a number of people in charge of it, all of whom have very different opinions about what should be on the site itself. When I look at their website on the server, I see pages that have no obvious navigation leading to them. What is the best way to map the internal linking structure of a site and see whether these pages truly are stranded?
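One hedged way to surface stranded pages is to crawl the site's internal links and diff the result against the full list of URLs known to exist on the server. A minimal Python sketch, assuming the requests and beautifulsoup4 packages and a hypothetical all_known_urls.txt exported from the server:

    # orphan_finder.py - minimal sketch; pip install requests beautifulsoup4
    from urllib.parse import urljoin, urldefrag
    import requests
    from bs4 import BeautifulSoup

    START = "https://www.example.com/"  # hypothetical site root
    # One URL per line, dumped from the server's file listing or CMS.
    KNOWN = set(open("all_known_urls.txt").read().split())

    seen, queue = set(), [START]
    while queue:
        url = queue.pop()
        if url in seen:
            continue
        seen.add(url)
        try:
            html = requests.get(url, timeout=10).text
        except requests.RequestException:
            continue
        # Collect every on-site link and queue it for crawling.
        for a in BeautifulSoup(html, "html.parser").find_all("a", href=True):
            link = urldefrag(urljoin(url, a["href"]))[0]
            if link.startswith(START):
                queue.append(link)

    # Anything the server knows about but no link ever reached is stranded.
    for orphan in sorted(KNOWN - seen):
        print(orphan)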
Technical SEO | anjonr
-
Best practices for migrating an HTML sitemap? Or just get rid of it altogether?
We are migrating a very large site to a new CMS, and I'm trying to determine the best way to handle all the links (~15k) in our HTML sitemap. The developers don't see the purpose of an HTML sitemap anymore, and I have yet to come up with a good reason why we should migrate it rather than just get rid of it, since it is not very useful to users. The HTML sitemap was created about 6 years ago, when PageRank sculpting was a high priority. Since we already have an XML sitemap, I'm not sure there's really a need for an HTML sitemap, other than to maintain all the internal links. How valuable are the internal links found in an HTML sitemap? And will it be a problem if we remove them from our link profile? 15,000 links sounds significant, but they account for less than 0.5% of our internal links. What do you all think?
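For reference, the XML sitemap the developers want to standardise on is just a flat list of <url> entries per the sitemaps.org protocol; a minimal sketch with a hypothetical URL:

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <loc>http://www.example.com/some-page/</loc>
        <lastmod>2013-06-01</lastmod>
      </url>
    </urlset>

Unlike the HTML sitemap's anchor links, these entries are crawl-discovery hints only and pass no internal link equity, which is the crux of the question.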
Technical SEO | BostonWright
-
Best Practice to Remove a Blog
Note: re-posting since I accidentally marked the original as answered. Hi, I have a blog with thousands of URLs; the blog is part of my site. I would like to retire the blog, and I think the best choices are:
1. 404 them. The problem is the large number of 404s; I know this is OK, but it makes me hesitant.
2. Add a noindex, nofollow meta tag. This would be great, but the question is that they are already indexed.
Thoughts? Thanks. P.S. A 301 redirect to the main page would be flagged as a soft 404.
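A hedged sketch of the mechanics behind both options (paths are hypothetical); the 410 Gone shown here is a variant of option 1 that signals intentional removal more explicitly than a plain 404:

    <!-- Option 2: in the <head> of each blog post; leaves the URL reachable
         but asks engines to drop it from the index on the next crawl. -->
    <meta name="robots" content="noindex, nofollow">

    # Option 1 (Apache .htaccess variant): answer 410 Gone for the whole blog path.
    RedirectMatch gone ^/blog/.+$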
Technical SEO | Bucky
-
Best way to handle different views of the same page?
Say I have a page: mydomain.com/page. But I also have different views of it:
- /?sort=alpha
- /print-version
- /?session_ID=2892
etc. All have the same content, more or less. Should the subsequent pages have a ROBOTS meta tag with noindex? Should I use canonical? Both? Thanks!
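A hedged sketch of both signals on the sorted view, reusing the hypothetical URLs above; normally you'd pick one rather than stacking them:

    <!-- Option A: in the <head> of mydomain.com/page?sort=alpha,
         point engines at the preferred version. -->
    <link rel="canonical" href="http://mydomain.com/page">

    <!-- Option B: alternatively, keep the variant out of the index
         while still letting its links be crawled. -->
    <meta name="robots" content="noindex, follow">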
Technical SEO | ChatterBlock