All thin content removed and duplicate content replaced, but still no success?
-
Good morning,
Over the last three months I have gone about replacing and removing all the duplicate content (1,000+ pages) from our site top4office.co.uk.
It has now been just under two months since we made all the changes, and we are still not showing any improvements in the SERPs.
Can anyone tell me why we aren't making any progress or spot something we are not doing correctly?
Another problem is that although we have removed 3,000+ pages using the removal tool, searching site:top4office.co.uk still shows 2,800 pages indexed (before, it was 3,500).
Look forward to your responses!
-
Thanks for your responses. We are talking about over 3,000 pages of duplicate content, which we have now removed and replaced with genuinely relevant, unique and engaging content.
We completed all the content changes on 06/06/2013. I'm thinking of leaving it for a while and seeing whether our rankings improve within the next month or so. We may also consider moving the site to another domain, since it now features lots of high-quality content.
Thoughts?
-
I've had two sites with Panda problems. One had two copies of hundreds of pages in both .html and .pdf format (to control printing format). The other had a few hundred pages of .edu press releases republished verbatim at their request or with their permission.
Both of these sites had site-wide drops on Panda dates.
On one site we used rel=canonical on the .pdf documents, applied via .htaccess; on the site with the .edu press releases we used noindex/follow.
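For illustration, here's a minimal sketch of what that kind of .htaccess setup can look like, assuming Apache with mod_headers enabled. The file names and domain are placeholders rather than the actual documents from either site, and the noindex/follow could equally be a meta robots tag in the page HTML rather than a header:

<Files "white-paper.pdf">
  # Point the duplicate .pdf copy at its .html original via an HTTP Link header
  Header add Link "<http://www.example.com/white-paper.html>; rel=canonical"
</Files>

<FilesMatch "press-release-.*\.html$">
  # Keep republished press releases out of the index while still letting their links be followed
  Header set X-Robots-Tag "noindex, follow"
</FilesMatch>

The HTTP header route is what makes canonical workable for PDFs at all, since there is no HTML head in which to place a rel=canonical link tag.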
Both sites recovered to former rankings a few weeks after the changes were made.
If you had a genuine Panda problem, and only a Panda problem, then a couple of months might be about the amount of time needed to see a recovery.
-
That's hard to say. A recent history and link profile like yours won't give your site the authority it needs for index updates at the frequency you would like. It's also possible that a hole has been dug that you cannot pop out of simply by reversing the actions of your past SEO.
You really need a thorough survey of your site, its history, and its analytics to determine the extent of the current problem and the best path out of it. Absent that, shed what bad backlinks you can and develop a strategy to build visitor engagement with your brand.
-
The site has not received a manual penalty from Google.
However, traffic and rankings for generic keywords fell when the previous developer decided to copy all of the products directly from our other site, top4office.com.
The site was ranking pretty well in the past. Do you have any kind of ETA for when the updates will take effect?
-
Hi Apogee
It can certainly take several months for pages to drop from the index, so if you've removed the pages from the site and submitted the URLs through the removal tool in GWT, they'll eventually fall out.
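A side note on getting removed pages out of the index faster: once the pages are physically gone, serving a 410 rather than a 404 for those URLs tends to get them dropped a little quicker, since 410 explicitly says the page is gone for good. A rough .htaccess sketch, assuming Apache with mod_rewrite and a made-up URL pattern for the removed section:

RewriteEngine On
# Hypothetical pattern - answer "410 Gone" for the removed thin/duplicate pages
RewriteRule ^removed-duplicate-products/ - [G,L]

The [G] flag makes Apache respond with 410 Gone for anything matching the pattern, which complements the GWT removal tool rather than replacing it.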
Was the site penalized, and is that why you removed/replaced the dupe content? In other words, were you ranking well and then, all of a sudden, your rankings tumbled, or are you just now working to build up your rankings? This is an important distinction, because there are few examples of sites that received a Panda penalty (thin/duplicate content) coming back to life.
If you don't think you've been penalized and you're just working to optimize your site and pull it up in the rankings for the first time, consider how unique your content is and how you're communicating your unique value proposition to the visitor. Keep focusing on those things.
Also, your backlink profile looks a bit seedy--in fact, your problem could well be Penguin-related. If you were penalized and it was a Penguin penalty, you should be looking to clean up some of those links and working to build new ones from more thematically relevant sites.
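If some of those links can't be removed at the source, the other lever is Google's disavow tool: you upload a plain-text file listing the URLs or whole domains you want Google to ignore. A minimal sketch, with placeholder domains rather than anything actually linking to this site:

# Removal requested from the webmaster, no response received
domain:spammy-web-directory.example
# Disavow a single bad page rather than the whole domain
http://low-quality-articles.example/office-supplies-links.html

Bear in mind that disavowing is a last resort after genuine removal attempts, and it only addresses link-based (Penguin) issues, not content-based (Panda) ones.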
-
Removing duplicate content won't necessarily increase your search positioning. It will, however, give your site the foundations needed to start a (relevant, natural and organic) link-building campaign, which, if done correctly, should improve your positions in the SERPs.
You should see content as part of the foundations. Good-quality, unique content is usually needed in order to be rankable, but it doesn't necessarily make you rank.
Having good quality unique content will also minimise the chances of being hit by an algo update.
-
Related Questions
-
Handling duplicate content, whilst making both rank well
Hey MOZperts, I run a marketplace called Zibbet.com and we have 1000s of individual stores within our marketplace. We are about to launch a new initiative giving all sellers their own stand-alone websites. URL structure:
Marketplace URL: http://www.zibbet.com/pillowlink
Stand-alone site URL: http://pillowlink.zibbet.com (doesn't work yet)
Essentially, their stand-alone website is a duplicate of their marketplace store: same items (item title, description), same seller bios, same shop introduction content etc., but with a different layout. You can scroll down and see a preview of the different pages (if that helps you visualize what we're doing), here.
My questions: my desire is for both the seller's marketplace store and their stand-alone website to have good rankings in the SERPs. Is this possible? Do we need to add any tags (e.g. "rel=canonical") to one of these so that we're not penalized for duplicate content? If so, which one? Can we just change the metadata structure of the stand-alone websites to skirt around the duplicate content issue? Keen to hear your thoughts and any suggestions for how we can handle this best. Thanks in advance!
-
Ticket Industry E-commerce Duplicate Content Question
Hey everyone, how goes it? I've got a bunch of duplicate content issues flagged in my Moz report and I can't figure out why. We're a ticketing site, and the pages that are causing the duplicate content are for events that we no longer offer tickets to, but that we will eventually offer tickets to again. Check these examples out: http://www.charged.fm/mlb-all-star-game-tickets http://www.charged.fm/fiba-world-championship-tickets I realize the content is thin and that these pages are basically the same, but I understood that since the title tags are different, they shouldn't appear to the Goog as duplicate content. Could anyone offer me some insight or solutions to this? Should they be noindexed while the events aren't active? Thanks
-
Could the hreflang tag solve duplicate content problems on the different country versions?
I have run across a couple of articles recently suggesting that using the hreflang tag could solve any SEO problems associated with having duplicate content on the different versions (.co.uk, .com, .ca, etc.). Here is an example: http://www.emarketeers.com/e-insight/how-to-use-hreflang-for-international-seo/ Over to you and your technical colleagues, I think…
-
Redirecting thin content city pages to the state page, 404s or 301s?
I have a large number of thin content city-level pages (possibly 20,000+) that I recently removed from a site. Currently, I have it set up to send a 404 header when any of these removed city-level pages are accessed. But I'm not sending the visitor (or search engine) to a site-wide 404 page. Instead, I'm using PHP to redirect the visitor to the corresponding state-level page for that removed city-level page. Something like:

if ($city_page_was_removed) {  // stands in for "this city page should be removed" in the original question
    header("HTTP/1.0 404 Not Found");
    header("Location: http://example.com/state-level-page");
    exit();
}

Is it problematic to send a 404 header and still redirect to a category-level page like this? By doing this, I'm sending any visitors to removed pages to the next most relevant page. Does it make more sense to 301 all the removed city-level pages to the state-level page? Also, these removed city-level pages collectively have very little to no inbound links from other sites. I suspect that any inbound links to these removed pages are from low-quality scraper-type sites anyway. Thanks in advance!
-
Is this duplicate content?
My client has several articles and pages that have two different URLs. For example, /bc-blazes-construction-trail is the same article as /article.cfm?intDocID=22572. I was not sure whether this counts as duplicate content, or whether I should be putting "/article.cfm" into the robots.txt file. If anyone could help me out, that would be awesome! Thanks 🙂
-
Dropped ranking - Penguin penalty or duplicate content issue?
Just this weekend a page that had been ranking well for a competitive term fell completely out of the rankings. There are two possible causes, and I'm trying to figure out which it is so I can take action. I found out that I had accidentally put a canonical on another page that was for the same page as the one that dropped out of the rankings. If there are two pages with the same canonical tag but different content, will Google drop both of them from the index? The other possibility is that this is a result of the recent Penguin update. The page that dropped has a high amount of exact anchor text. As far as I can tell, there were no other pages with any penalties from the Penguin update. One last question: the page completely dropped from the search index. If this were a Penguin issue, would it have dropped out completely, or just been penalized with a drop in position? If this is a result of the conflicting canonical tags, should I just wait for it to reindex, or should I request a reconsideration of the page?
-
Having a hard time with duplicate page content
I'm having a hard time redirecting website.com/ to website.com. The crawl report shows both versions as duplicate content. Here is my .htaccess:

RewriteEngine On
RewriteBase /

# Rewrite bare domain to www (and strip index.php)
RewriteCond %{HTTP_HOST} ^mywebsite.com
RewriteRule ^(([^/]+/)*)index.php$ http://www.mywebsite.com/$1 [R=301,L]

# Internally map extension-less URLs to their .php files
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME}.php -f
RewriteRule ^(.*)$ $1.php [NC,L]

# 301 URLs with a trailing slash to the slash-less version (except on localhost)
RewriteCond %{HTTP_HOST} !^.localhost$ [NC]
RewriteRule ^(.+)/$ http://%{HTTP_HOST}$1 [R=301,L]

I added the last two lines after seeing a Q&A here, but I don't think it has helped.
-
How to deal with category browsing and duplicate content
On an ecommerce site there are typically a lot of pages that may appear to be duplications due to category browse results, where the only difference may be the sorting by price or the number of products per page. How best to deal with this? Add nofollow to the sorting links? Set canonical values that ignore these variables? Set canonical values that match the category home page? Is this even a possible problem with Panda or spiders in general?