How to find original URLs after hosting company added canonical URLs, URL rewrites and duplicate content.
-
We recently changed hosting companies for our ecommerce website. The new hosting company added some functionality that has caused duplicate content and/or mirrored pages to appear in the search engines.
To fix this problem, the hosting company created both canonical URLs and URL rewrites. Now, we have page A (which is the original page with all the link juice) and page B (which is the new page with no link juice or SEO value). Both pages have the same content, with different URLs.
I understand that a canonical URL is the way to tell the search engines which page is the preferred page in cases of duplicate content and mirrored pages. I also understand that canonical URLs tell the search engine that page B is a copy of page A, but page A is the preferred page to index.
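If I've understood that correctly, it means the duplicate page should carry a tag like this in its head, pointing at the preferred page (the URLs here are made up purely to illustrate):

```html
<!-- In the <head> of page B (the duplicate); hypothetical URL for illustration only -->
<!-- Tells search engines that page A is the preferred version to index -->
<link rel="canonical" href="https://www.example.com/page-a-the-original" />
```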
The problem we now face is that the hosting company made page A a copy of page B, rather than the other way around. But page A is the original page with the SEO value and link juice, while page B is the new page with no value. As a result, the search engines are now prioritizing the newly created page over the original one.
I believe the solution is to reverse this and make page B (the new page) a copy of page A (the original page). Then I would simply need to set the original URL as the canonical URL for the duplicate pages. The problem is that, with all the rewrites and changes in functionality, I no longer know which URLs have the backlinks that are creating this SEO value.
I figure that if I can find the backlinks pointing to the original pages, then I can work out what their original web addresses were.
My question is: how can I search for backlinks on the web in a way that lets me figure out which URL they are all pointing to, so that I can make that URL the canonical URL for all the new, duplicate pages?
-
My first question back at you is: why is your hosting company deciding your canonical structure for you? A hosting company should just host; you should be using canonical tags and .htaccess to sort out your own canonical URLs. So my gut reaction is to change hosting company.
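To illustrate what taking control looks like — a minimal sketch only, assuming an Apache server and made-up paths, since I don't know your actual URL structure — you either 301 the valueless duplicate back to the original, or have the duplicate declare the original as canonical:

```apache
# .htaccess sketch (hypothetical paths): permanently redirect the new duplicate URL to the original
Redirect 301 /page-b-new-duplicate https://www.example.com/page-a-original
```

Either approach keeps the original URL, and the links pointing at it, as the version the engines index.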
After that:
- Open Site Explorer - put both URLs in and see which ones are being linked to and which has the highest value.
If you're not a PRO member, take the 30-day trial and use OSE to sort this issue out. You'll be glad you did.
After that... seriously, as a web designer & SEO, not having control of my own URLs would annoy the heck out of me, so change hosting provider. That'll be the second biggest favour you could do for yourself.
Slightly opinionated, I know, but at the end of the day it's your website, not your host's.
Related Questions
-
Our original content is being outranked on search engines by smaller sites republishing our content.
We are a media site, www.hope1032.com.au, that publishes daily content on the WordPress platform using the Yoast SEO plugin. We allow smaller media sites to republish some of our content, with the canonical field set to our URL. We have discovered that some of our content is now ranking below the republished copies, or not visible at all, on some search engines when searching for the article heading. Any thoughts as to why? Have we got an SEO problem? An interesting point: the small amount of content we have republished ourselves is not ranking against the original author on search engines.
-
Is it possible to deindex old URLs that contain duplicate content?
Our client is a recruitment agency and their website used to contain a substantial amount of duplicate content, as many of the listed job descriptions were repeated and recycled. As a result, their rankings rarely progress beyond page 2 on Google. Although they have started using more unique content for each listing, it appears that old job listing pages are still indexed, so our assumption is that Google is holding down the rankings due to the amount of duplicate content present (one tool reported 43% duplicate content across the website). Looking at other recruitment websites, it appears that they block the actual job listings via the robots.txt file. Would blocking the job listing pages from being indexed, either by robots.txt or by a noindex tag, reduce the negative impact of the duplicate content, but also remove any link juice coming to those pages? In addition, expired job listing URLs stay live, which is likely to be increasing the overall duplicate content. Would it be worth removing these pages and setting up 404s, given that any links to these pages would be lost? If these pages are removed, is it possible to permanently deindex these URLs? Any help is greatly appreciated!
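For context, the two options being weighed look roughly like this (made-up /jobs/ path, not the client's real structure). robots.txt stops crawling altogether, but already-indexed URLs can linger and any link equity pointing at blocked pages is generally understood to be wasted; a noindex tag lets the pages be crawled and their links followed while keeping them out of the index:

```text
# robots.txt option (hypothetical path): stop crawling of the listings altogether
User-agent: *
Disallow: /jobs/
```

```html
<!-- noindex option, placed in the <head> of each duplicate or expired listing -->
<meta name="robots" content="noindex, follow" />
```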
-
Duplicate content issue: staging URLs have been indexed and I need to know how to remove them from the SERPs
Duplicate content issue: our staging URL has been indexed by Google (many pages) and I need to know how to remove those pages from the SERPs. Bing sees the staging URL as moved permanently. Google shows the staging URLs (240 results), which redirect to the correct URL. Should I be concerned about duplicate content, and should I request that Google remove the staging URLs? Thanks guys
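If the staging host still serves any pages directly rather than redirecting everything, one common approach (a sketch, assuming the staging site runs on Apache with mod_headers and you can edit its .htaccess) is to send a noindex header from the staging environment only, then request removal of the already-indexed staging URLs through Google's URL removal tool:

```apache
# .htaccess on the STAGING host only: ask search engines to drop anything served from here
Header set X-Robots-Tag "noindex, nofollow"
```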
-
Syndicated content outranks my original article
I have a small site and write original blog content for my small audience. There is a much larger, highly relevant site that is willing to accept guest blogs, and they don't require original content. It is one of the largest sites within my niche and many potential customers of mine are there. When I create a new article I first post it to my blog, and then share it on G+, Twitter, FB and LinkedIn. I wait a day. By this time G has seen the links that point to my article and has indexed it. Then I post a copy of the article on the much larger site. I have a rel=author tag within the article, but the larger site adds "nofollow" to that tag. I have tried putting a link rel=canonical tag in the article, but the larger site strips that tag out. So G sees a copy of my content on this larger site. I'm hoping they realize it was posted a day later than the original version on my blog. But if not, will my blog get labeled as a scraper? Second: when I Google the exact blog title, I see that my article on the larger site shows up as the #1 search result, but (1) there is no rich snippet with my author creds (maybe because the author tag was marked nofollow?), and (2) the original version of the article from my blog is not in the results (I'm guessing it was stripped out as a duplicate). There are benefits to my article being on the larger site, since many of my potential customers are there and the article does include a link back to my site (the link is nofollow). But I'm wondering (1) whether I can fix things so my original article shows up in the search results, or (2) whether I'm hurting myself with this strategy (having G possibly label me a scraper)? I do rank for other phrases in G, so I know my site hasn't had a wholesale penalty of some kind.
-
Showing duplicate content when I have a canonical URL set, why?
Just inspecting my site's report and I see that I have a lot of duplicate content issues. Not sure why these two pages here http://www.thecheapplace.com/wholesale-products/Are-you-into-casual-sex-patch http://www.thecheapplace.com/wholesale-products/small-wholesale-patches-1/Are-you-into-casual-sex-patch are showing as duplicate content when both pages have a clearly defined canonical URL of http://www.thecheapplace.com/Are-you-into-casual-sex-patch Any answer would be appreciated, thank you
-
An odd duplicate content issue...
Hi all, my developers have just assured me that nothing has changed from last week, but in today's crawl I see the whole website duplicated, and the only difference in the URLs is the trailing '/'. So basically the duplicated URLs are: htts://blabla.bla/crop and htts://blabla.bla/crop/ Any help in understanding why is much appreciated. Thanks
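A typical cause is the server now answering both the slash and non-slash version of every URL; the usual fix is to pick one version and 301 the other to it — a minimal .htaccess sketch, assuming Apache with mod_rewrite (your stack may differ):

```apache
# Force a single trailing-slash version so /crop and /crop/ don't both resolve as separate pages
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_URI} !/$
RewriteRule ^(.*)$ /$1/ [R=301,L]
```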
-
Bad Duplicate content issue
Hi, for grappa.com I have about 2700 warnings of duplicate page content. My CMS generates long URLs like: http://www.grappa.com/deu/news.php/categoria=latest_news/idsottocat=5 and http://www.grappa.com/deu/news.php/categoria%3Dlatest_news/idsottocat%3D5 (this is duplicate content). What's the best solution to fix this problem? Do I have to set up a 301 redirect for all the duplicated pages, or insert rel=canonical or rel=prev/next? It's complicated because it's a multilingual site, and it's my first time dealing with this stuff. Thanks in advance.
-
Duplicate content, how to solve?
I have about 400 errors about duplicate content on my SEOmoz dashboard. However, I have no idea how to solve this. I have 2 main scenarios of duplication on my site: Scenario 1: http://www.theprinterdepo.com/catalogsearch/advanced/result/?name=64MB+SDRAM+DIMM+MEMORY+MODULE&sku=&price%5Bfrom%5D=&price%5Bto%5D=&category= 3 products with the same title but different product models; as you can note, they have the same price as well. Some printers use a different memory module, so I just can't delete 2 of the products. Scenario 2: toners http://www.theprinterdepo.com/brother-high-capacity-black-toner-cartridge-compatible-73 http://www.theprinterdepo.com/brother-high-capacity-black-toner-cartridge-compatible-75 In this scenario, the products have different titles but the same price. Again, the 2 products are different. Thank you