All thin content removed and duplicate content replaced. But still no success?
-
Good morning,
Over the last three months I have gone about replacing and removing all the duplicate content (1,000+ pages) from our site top4office.co.uk.
It has now been just under two months since we made all the changes, and we are still not seeing any improvement in the SERPs.
Can anyone tell me why we aren't making any progress or spot something we are not doing correctly?
Another problem: although we have removed 3,000+ pages using the removal tool, a site:top4office.co.uk search still shows 2,800 pages indexed (before, it was 3,500).
Look forward to your responses!
-
Thanks for your responses. We are talking about over 3,000 pages of duplicate content, which we have now removed and replaced with genuinely relevant, unique and engaging content.
We completed all the content changes on 06/06/2013. I'm thinking of leaving it for a while and seeing whether our rankings improve within the next month or so. We may consider moving the site to another domain, since it features lots of high-quality content.
Thoughts?
-
I've had two sites with Panda problems. One had two copies of hundreds of pages in both .html and .pdf format (to control printing format). The other had a few hundred pages of .edu press releases republished verbatim at their request or with their permission.
Both of these sites had site-wide drops on Panda dates.
We used rel=canonical on the .pdf documents on one site, implemented via .htaccess. On the site with the .edu press releases we used noindex/follow.
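In case it helps anyone replicating this: the noindex/follow fix is just a meta robots tag, <meta name="robots" content="noindex,follow">, in the head of each duplicated page. The .htaccess approach sends the canonical as an HTTP header instead, since you can't put a link tag inside a PDF. A minimal sketch, assuming Apache with mod_rewrite and mod_headers enabled; the example.com domain and /docs/ path are placeholders, not the actual site:

# Stash each PDF's .html twin in an environment variable...
RewriteEngine On
RewriteRule ^docs/(.+)\.pdf$ - [E=CANONICAL:http://www.example.com/docs/$1.html]
# ...then emit it as a rel=canonical Link header, only when that variable is set.
Header set Link '<%{CANONICAL}e>; rel="canonical"' env=CANONICAL

Google honours rel=canonical delivered as an HTTP header for non-HTML documents like PDFs.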
Both sites recovered to former rankings a few weeks after the changes were made.
If you had a genuine Panda problem, and only a Panda problem, then a couple of months might be about the amount of time needed to see a recovery.
-
That's hard to say. A recent history and link profile like yours won't give your site the authority it needs for index updates at the frequency you would like. It's also possible that a hole has been dug that you cannot pop out of simply by reversing the actions of your past SEO.
You really need a thorough survey of your site, its history, and its analytics to determine the extent of the current problem and the best path to take out of it. Absent that, shed what bad backlinks you can and develop a strategy to build visitor engagement with your brand.
-
The site has not received a manual penalty from Google.
However, traffic and generic keyword rankings fell when the previous developer decided to copy all of the products directly from our other site, top4office.com.
The site was ranking pretty well in the past. Do you have any kind of ETA for when the changes will take effect?
-
Hi Apogee
It can certainly take several months for pages to drop from the index, so if you've removed the pages and submitted the URL removals in GWT, they'll eventually fall out of the index.
Was the site penalized, and is that why you removed/replaced the dupe content? Meaning: were you ranking well and then, all of a sudden, your rankings tumbled? Or are you just now working to build up your rankings? This is an important distinction, because there are few examples of sites that received a Panda penalty (thin/duplicate content) coming back to life.
If you don't think you've been penalized and you're just working to optimize your site and pull it up in the rankings for the first time, consider how unique your content is and how you're communicating your unique value proposition to the visitor. Keep focusing on those things.
Also, your backlink profile looks a bit seedy; in fact, your problem could well be Penguin-related. If you were penalized and it was a Penguin penalty, you should be looking to clean up some of those links and working to build new ones from more thematically relevant sites.
-
Removing duplicate content won't necessarily improve your search positions. It will, however, give your site the foundation needed to start a (relevant, natural and organic) link-building campaign, which, if done correctly, should improve your SERP positions.
You should see content as part of the foundations. Good-quality, unique content is usually needed in order to rank, but it doesn't guarantee rankings on its own.
Having good-quality, unique content will also minimise the chances of being hit by an algorithm update.
Related Questions
-
Magento products and eBay - duplicate content risk?
Hi, we are selling about 1,000 sticker products in our online store and would like to expand a large part of our product lineup to eBay as well. There are pretty good modules for this, as I've heard. I'm just wondering if there will be duplicate content problems if I sync the products between Magento and eBay and they get uploaded to eBay with identical titles, descriptions and images? What's the workaround in this case? Thanks!
-
Duplicate Content: Is a product feed/page rolled out across subdomains deemed duplicate content?
A company has a TLD (top-level domain) which lists every single product: company.com/product/name.html. The company also has subdomains (tailored to a range of products) which list a chosen selection of the products from the TLD, sort of like a feed: subdomain.company.com/product/name.html. The content on the TLD and subdomain product pages is exactly the same and cannot be changed; the CSS and HTML are slightly different, but the content (text and images) is identical! My concern (and rightly so) is that Google will deem this to be duplicate content, therefore I'm going to have to add a rel=canonical tag into the header of all subdomain pages, pointing to the original product page on the TLD. Does this sound like the correct thing to do? Or is there a better solution? Moving on, not only are products fed onto the subdomains; there are a handful of other domains which list the products, and again the content (text and images) is exactly the same: other.com/product/name.html. Would I be best placed to add a rel=canonical tag into the header of the product pages on those other domains, pointing to the original product page on the actual TLD? Does rel=canonical work across domains? Would the product pages with a rel=canonical tag in the header still rank? Let me know if there is a better solution all round!
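To answer the cross-domain part directly: yes, rel=canonical works across domains; Google announced cross-domain support in 2009. A minimal sketch of what would go in the head of each subdomain and other-domain product page, using the placeholder URLs from the question:

<link rel="canonical" href="http://company.com/product/name.html" />

One caveat: a page carrying a cross-domain canonical is generally dropped from the index in favour of the target, so don't expect the subdomain copies themselves to keep ranking; the point is to consolidate signals on the TLD page instead.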
-
Duplicate content question
Hi there, I work for a theater news site. We have an issue where our system creates a chunk of duplicate content in Google's eyes, and we're not sure how best to solve it. When an editor produces a video, it simultaneously 1) creates a page with its own static URL (e.g. http://www.theatermania.com/video/mary-louise-parker-tommy-tune-laura-osnes-and-more_668.html); and 2) displays said video on a public index page (http://www.theatermania.com/videos/). Since the content is very similar, Google sees them as duplicates. What should we do about this? We were thinking that one solution would be to dynamically canonicalize the index page to the static page whenever a new video is posted, but would Google frown on this? Alternatively, should we simply nofollow the index page? Lastly, are there any solutions we may have missed entirely?
-
Could the hreflang tag solve duplicate content problems on the different versions?
I have run across a couple of articles recently suggesting that using the hreflang tag could solve any SEO problems associated with having duplicate content on the different country versions (.co.uk, .com, .ca, etc.). Here is an example: http://www.emarketeers.com/e-insight/how-to-use-hreflang-for-international-seo/ Over to you and your technical colleagues, I think…
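For reference, the annotations that article describes are reciprocal link elements placed in the head of every regional version, with each version listing all the versions including itself. A minimal sketch with placeholder URLs for a UK/US/Canada split:

<link rel="alternate" hreflang="en-gb" href="http://www.example.co.uk/" />
<link rel="alternate" hreflang="en-us" href="http://www.example.com/" />
<link rel="alternate" hreflang="en-ca" href="http://www.example.ca/" />

The caveat: hreflang tells Google the pages are deliberate regional variants so it can serve the right one per country, but it is not a canonical and won't help with true duplicates aimed at the same audience.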
-
How do I best handle Duplicate Content on an IIS site using 301 redirects?
The crawl report for a site indicates the existence of both www and non-www content, which I am aware is duplicate. However, only the www pages are indexed**, which is throwing me off. There are no 'noindex' tags on the non-www pages, nothing in robots.txt, and I can't find a sitemap. I believe a 301 redirect from the non-www pages is what's in order. Is this accurate? I believe the site is built using ASP.NET on IIS, as the pages end in .asp (not very familiar to me). There are multiple versions of the homepage, including 'index.html' and 'default.asp'. Meta refresh tags are being used to point to 'default.asp'. What has been done: 1. I set the preferred domain to 'www' in Google's Webmaster Tools, as most links already point to www. 2. The WordPress blog, which sits in a /blog subdirectory, has been set with rel="canonical" to point to the www version. What I have asked the programmer to do: 1. Add 301 redirects from the non-www pages to the www pages. 2. Set all versions of the homepage to redirect to www.site.org using 301 redirects as opposed to meta refresh tags. Have all bases been covered correctly? One more concern: I notice the canonical tags in the source code of the blog use a trailing slash. Will this create a problem of inconsistency? (And why is rel="canonical" the standard for WordPress SEO plugins while 301 redirects are preferred for SEO?) Thanks a million! **To clarify regarding the indexation of non-www pages: a search for 'site:site.org -inurl:www' returns only 7 pages without www, which are all blog pages without content (Code 200, not 404; maybe deleted or moved, which is perhaps another 301 redirect issue).
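For what it's worth, on IIS 7+ the non-www to www 301 is usually handled once in web.config via the URL Rewrite module rather than page by page. A minimal sketch, assuming that module is installed and using site.org as the placeholder domain from the question (older IIS installs would need ISAPI_Rewrite or code-level redirects instead):

<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <!-- 301 every request on the bare domain to the www host, preserving the path. -->
        <rule name="Redirect to www" stopProcessing="true">
          <match url="(.*)" />
          <conditions>
            <add input="{HTTP_HOST}" pattern="^site\.org$" />
          </conditions>
          <action type="Redirect" url="http://www.site.org/{R:1}" redirectType="Permanent" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>

On the last question: a 301 is preferred when a duplicate URL serves no purpose for visitors, while WordPress SEO plugins default to rel=canonical because they can only control page markup, not the server configuration.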
-
Duplicate Content on WordPress Because of Pagination
On my recent crawl, there were a great many duplicate content errors. The site is http://dailyfantasybaseball.org. The issue is: there's only one post per page. Therefore, because of WordPress's (or Genesis's) pagination, a page gets created for every post, thereby leaving basically every piece of content I write as a duplicate. I feel like the engines should be smart enough to figure out what's going on, but if not, I will get hammered. What should I do moving forward? Thanks!
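If the engines don't sort it out on their own, one common fix is to noindex the paginated archive copies while leaving the single-post pages indexable. A minimal, untested sketch for the theme's functions.php; is_paged() and the wp_head hook are standard WordPress, and PHP 5.3+ is assumed for the anonymous function:

// Noindex page 2 and beyond of any archive so only the canonical post pages get indexed.
add_action( 'wp_head', function () {
    if ( is_paged() ) {
        echo '<meta name="robots" content="noindex,follow">' . "\n";
    }
} );

Plugins like WordPress SEO by Yoast expose the same behaviour as a setting, if you'd rather not touch code.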
-
Cross-Domain Canonical and duplicate content
Hi Mozfans! I'm working on SEO for one of my new clients, a job site (I'll call it Site A). The thing is that the client has about three sites with the same jobs on them. I'm pointing out a duplicate content problem, only the jobs on the other sites must stay there, so the client doesn't want to remove them; there is another (non-ranking) reason why. Can I solve the duplicate content problem with a cross-domain canonical? The client wants to rank well with the site I'm working on (Site A). Thanks! Rand did a Whiteboard Friday about cross-domain canonical: http://www.seomoz.org/blog/cross-domain-canonical-the-new-301-whiteboard-friday
-
SEOmoz mistaking image pages for duplicate content
I'm getting duplicate content errors, but they're for pages with high-res images on them. Each page has a different high-res image on it, but SEOmoz keeps telling me it's duplicate content, even though the images are different (and named differently). Is this something I can ignore, or will Google see it the same way too?