Snagged an Expired Domain: Best Way to Get All the Link Juice?
-
Found a PR4 domain that had expired and not been renewed. What's the best way to get all the link juice from it? Should I just 301 the whole thing to my main domain?
-
Good advice! Very timely considering I just asked a question about expired domains as well!
-
If the domain holds its link juice after the expiration, then a 301 would be a good way to send the link value to another site.
However, if this domain is similar in topic to a domain that you own, it might be a good idea to do a page-by-page review of the backlinks and 301 redirect expired pages to active pages on your existing site that cover the same topic.
You might also consider creating new pages on the same topic where needed to receive the 301 and the traffic/link value that goes with it.
One of the problems with redirecting anything is that webmasters who gave the original links might discover that what they linked to has changed and take down the link. So if you can point to same-topic content - especially content that is superior - you will have a better chance of holding the original links long term.
Try to surprise them with superior content.
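As a concrete sketch of that page-by-page approach, assuming the expired domain is hosted on Apache (the paths and target domain here are hypothetical, not from the thread):

```apache
# .htaccess on the expired domain: send each old page to the same-topic
# page on the existing site. mod_alias applies rules in order, so the
# specific redirects are listed before the catch-all.
Redirect 301 /widget-buying-guide https://www.example.com/guides/widgets
Redirect 301 /widget-reviews      https://www.example.com/reviews/widgets

# Anything unmatched falls through to a relevant hub page
RedirectMatch 301 ^/ https://www.example.com/widgets/
```

The per-page rules preserve topical relevance for the strongest backlinks, while the catch-all keeps stray deep links from 404ing.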
Related Questions
-
What is the best way to show content in Listing Pages?
If it is an e-commerce site and a product listing page, there is always a conflict over how to show the content. As per my understanding, we can show content in two different ways. 1. Show a little content and use a **Read more** link. (In this case there is a direct message to Google: here is the visible content, and the rest is hidden but available for visitors who click Read more.) 2. Use a **scroll bar**. Here the message to Google and visitors is that the full content is available; just scroll down to read further. So I want to know which method of showing content is best, and its SEO impact where there is a UI constraint, or whether both methods are OK without any SEO impact. Please share your suggestions.
Technical SEO | kathiravan0
-
Which domain should we continue with?
Hello All, We are working with a client who had a manual penalty from Google. We worked on that and the penalty has now been removed. The client had already started working on a new domain, and now the big dilemma is: which domain should we continue with? Old or new? We are suggesting they continue with the old one, as that domain had good PR, good backlinks, better visibility on their social profiles, etc. What do you suggest? Any inputs are highly appreciated. Thanks
Technical SEO | sachin-sv0
-
Partial manual action - unnatural links from domain takeover
One of our clients took over a competitor, and it would appear that all links to the taken-over website got redirected to our client. This resulted in ~430,000 links to our client in a short time period, which in turn resulted in a partial manual action against the unnatural links. What would Google be looking for us to solve in this case? Should we change all of the links to nofollow, or should we remove them completely?
Technical SEO | aaronleven0
-
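A third option often discussed alongside nofollowing and removing links is Google's disavow file: a plain-text list uploaded through Search Console. A hypothetical fragment for a case like this one (the domain name is made up for illustration):

```text
# Links created when the acquired competitor's site was redirected to ours;
# removal of the redirects is in progress
domain:acquired-competitor-example.com
```

Each `domain:` line disavows every link from that host; individual URLs can also be listed one per line.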
Old domain still being crawled despite 301s to new domain
Hi there, We switched from the domain X.com to Y.com in late 2013, and for the most part the transition was successful. We were able to 301 most of our content over without too much trouble. But when I do a site:X.com search in Google, I still see about 6,240 URLs of X listed. If you click one of those links, you get 301d to Y. Maybe Google has not re-crawled those X pages to know of the 301 to Y, right? The home page of X.com is shown in the site:X.com results. But if I look at the cached version, the cached description says "This is Google's cache of Y.com. It is a snapshot of the page as it appeared on July 31, 2014." So Google has freshly crawled the page. It does know of the 301 to Y and is showing that page's content. But the X.com home page still shows up on site:X.com. How is the domain for X showing rather than Y when even Google's cache is showing the page content and URL for Y? There are some other similar examples. For instance, you would see a deep URL for X, but just looking at the title in the SERP, you can see Google has crawled the Y equivalent. Clicking on the link gives you a 301 to the Y equivalent. The cached version of the deep URL to X also shows the content of Y.

Any suggestions on how to fix this, or on whether it's a problem at all? I'm concerned that some SEO equity is still being sequestered in the old domain.

Thanks,

Stephen
Technical SEO | fernandoRiveraZ1
-
What is the best way to deal with HTTPS?
Currently, the site I am working on uses HTTPS throughout. The non-HTTPS pages are redirected to their HTTPS equivalents through a 301 redirect; this happens for all pages. Is this the best strategy going forward? If not, what changes would you suggest?
Technical SEO | adarsh880
-
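For reference, the force-HTTPS setup described in this question is commonly implemented on Apache with a rule like the following (a minimal sketch; whether it lives in `.htaccess` or the vhost config depends on the server setup):

```apache
# 301-redirect every plain-HTTP request to its HTTPS equivalent,
# preserving the host and path
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
```

The 301 (rather than 302) is what tells search engines the HTTPS URL is the permanent one.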
No-crawl code for pages of helpful links vs. nofollow code on each link?
Our college website has many "owners" who want pages of "helpful links," resulting in a large number of outbound links. If we add code to those pages to prevent them from being crawled, will that be just as effective as making every individual link nofollow?
Technical SEO | LAJN0
-
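For illustration, the two options in this question look like this in HTML (a sketch, not specific to any CMS; the link URL is a placeholder):

```html
<!-- Option 1: one meta tag in the <head> tells crawlers
     not to follow any link on the page -->
<meta name="robots" content="nofollow">

<!-- Option 2: mark each outbound link individually -->
<a href="https://example.com/resource" rel="nofollow">Helpful resource</a>
```

The page-level tag is easier to maintain on link-heavy pages, while per-link nofollow lets internal navigation links keep passing value.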
How to get rid of backlinks?
I would like to clear all of the links pointing to us and start over fresh. Our site has become nonexistent on Google, with no messages or warnings from them through Webmaster Tools. Tools on here and from other sources indicate that we should be ranked ahead of our competitors, or at least comparable. We are not found under the keywords that we used to be: "parts washers," "part washer," industrial cleaning systems, etc. Our site is www.vortexpartswashers.com. Any help will be greatly appreciated.
Technical SEO | mhart0
-
Which Is the Best Way to Handle Query Parameters?
Hi mozzers, I would like to know the best way to handle query parameters. Say my site is example.com. Here are two scenarios.

Scenario #1: Duplicate content

example.com/category?page=1
example.com/category?order=updated_at+DESC
example.com/category
example.com/category?page=1&sr=blog-header

All have the same content.

Scenario #2: Pagination

example.com/category?page=1
example.com/category?page=2

and so on.

What is the best way to solve both? Do I need to use rel=next and rel=prev, or is it better to use Google Webmaster Tools parameter handling? Right now I am concerned about Google traffic only. For solving the duplicate content issue, do we need to use canonical tags on each such URL? I am not using WordPress. My site is built on the Ruby on Rails platform. Thanks!
Technical SEO | jombay
example.com/category?page=2 and so on. What is the best way to solve both? Do I need to use Rel=next and Rel=prev or is it better to use Google Webmaster tools parameter handling? Right now I am concerned about Google traffic only. For solving the duplicate content issue, do we need to use canonical tags on each such URL's? I am not using WordPress. My site is built on Ruby on Rails platform. Thanks!0