2013 Panda Update Question
-
Hi everyone, I'm new here. So far I've had wonderful success SEO-wise, and none of the updates (Penguin or Panda) affected any of my sites, until this one.
For example, one site has 7 keywords I'm optimizing for. Out of those 7, all but 2 (and variations of those 2: one-word vs. long-tail) completely tanked. These keywords were all on pages 2 and 3. One of the two survivors never budged from page 2 (it's a brand keyword, so I was so happy to finally get it to page 2).
Now when I check rankings, the other terms show up in positions 200-400, but NOT for the URL I was optimizing (the category page); instead they rank for random product pages within that category.
The only thing I did differently with the 2 keywords that are still doing well was focus: we did more link building for those, but not an extreme amount. I never over-optimize.
My question is: how did 2 survive while the other 5 are still floating up and down? Last night I saw one go up 122 spots; today it's down 14. I'm really struggling with this.
Thank you
-
I just ran a diagnostic: no errors, no duplicate content, nothing.
-
I just did a quick check with a free plagiarism checker. The results show some outside blog posts using the same text, though not a 100% match.
Just wondering if having these removed will alleviate the situation, or if I need to do more?
-
Hey, what have you done so far?
Have you checked internally for duplicates? Have you used Copyscape to see if there is external duplication? (For the internal check, there's a quick script sketch at the end of this post.)
A single blog post should not cause a huge problem, I would suspect that this may be more widespread.
What CMS system are you using here?
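If you want to run that internal check yourself, here is a minimal sketch (assuming Python with the third-party requests and beautifulsoup4 packages, and a flat sitemap.xml at the site root; the domain is a placeholder):

# Rough internal duplicate check: pull URLs from the sitemap,
# normalize each page's visible text, and flag pages whose text
# hashes collide. Assumes a flat urlset sitemap (not a sitemap
# index); requests and beautifulsoup4 are third-party packages.
import hashlib
import re
import xml.etree.ElementTree as ET

import requests
from bs4 import BeautifulSoup

SITE = "https://www.example.com"  # placeholder: swap in your own domain

def sitemap_urls(site):
    """Return the <loc> entries from the site's sitemap.xml."""
    xml = requests.get(site + "/sitemap.xml", timeout=10).text
    root = ET.fromstring(xml)
    return [el.text.strip() for el in root.iter()
            if el.tag.endswith("loc") and el.text]

def page_fingerprint(url):
    """Hash the page's visible text so identical copy collides."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup(["script", "style"]):
        tag.decompose()  # drop non-visible content
    text = re.sub(r"\s+", " ", soup.get_text()).strip().lower()
    return hashlib.md5(text.encode("utf-8")).hexdigest()

seen = {}
for url in sitemap_urls(SITE):
    fp = page_fingerprint(url)
    if fp in seen:
        print("DUPLICATE:", url, "matches", seen[fp])
    else:
        seen[fp] = url

It only catches exact copies after whitespace normalization, but that is usually enough to surface category pages sharing the same boilerplate text.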
-
Thank you!
I am finding duplicate-content category pages, plus a blog article with the same content. Will having the blog article removed fix this, or does the content on the category pages need to be rewritten?
-
Hey, it's going to be near impossible to answer that question without examples, I'm afraid.
The update this week was a Panda update, so it should be primarily related to content and duplication. The very first thing I would check for is duplicate content issues on and off your site.
This would be a good read:
http://www.seomoz.org/blog/fat-pandas-and-thin-content
Then, maybe run your site through Copyscape to get some quick feedback on any external duplication issues.
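If you want a rough, free way to quantify how much two specific pages overlap (say, a category page and the blog post that reused its copy), here is a minimal sketch (assuming Python with the third-party requests and beautifulsoup4 packages; both URLs are placeholders):

# Quick-and-dirty similarity score between two pages' visible text.
# difflib is in the standard library; requests and beautifulsoup4
# are third-party packages.
import difflib
import re

import requests
from bs4 import BeautifulSoup

def visible_text(url):
    """Fetch a page and reduce it to normalized visible text."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup(["script", "style"]):
        tag.decompose()  # drop non-visible content
    return re.sub(r"\s+", " ", soup.get_text()).strip().lower()

category = visible_text("https://www.example.com/category/")  # placeholder
blog = visible_text("https://www.example.com/blog-post/")     # placeholder

ratio = difflib.SequenceMatcher(None, category, blog).ratio()
print("Text similarity: {:.0%}".format(ratio))

The score runs from 0 to 1; a high ratio means the two pages share most of their copy and should be treated as duplicates.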
If you have been hit, there will be a reason, so you need to start digging, get a handle on the issues, and put measures in place to resolve them.
Also, consider that this may be something to do with the work the client is doing on the site (more likely the content), or it could be down to totally external factors (scrapers stealing content, etc.).
Hope that gives you some direction!
Marcus