Please help me with your advice
-
Hi all,
A couple of years ago I started to build my business on an exact-match domain (EMD). The intention was to create a resource with rich, unique content.
After a year of hard work the site reached the top 10 in Google and started to generate a good amount of leads.
Then Google announced the EMD Update and the site lost 90% of its traffic (our SERP positions had been steady through the Panda updates):
“a new filter that tries to ensure that low-quality sites don’t rise high in Google’s search results simply because they have search terms in their domain names.”
But I don't consider my site a low-quality site; every page and every post is 100% unique and was created only to share knowledge with others.
The site has EXCELLENT content from an industry point of view.
Since the EMD Update I have read hundreds of articles and opinions about it, and by now I am confused and lost.
What should I do?
• Kill the site and start a new one
• Get more links (but what type of links, and how should I get them?)
• Keep hoping and praying
• Or do something else?
Please help me with your advice.
-
Thank you. I would appreciate it if you could give me 10 minutes of your time. I've sent the site to you by PM.
-
I have a post on the subject here - it's very long, because it's a complex subject:
http://www.seomoz.org/blog/duplicate-content-in-a-post-panda-world
We're not saying this is definitely the problem - just that you should be aware of how complicated it can get. Unfortunately, it's hard to tell without really looking at the site. A lot can happen to hurt a site's rankings, and the EMD update was just one piece of the puzzle.
-
But Panda, as far as I know, looks for duplicate content, and we are very careful about that on all our sites. The content is 100% unique, written by humans, and has never been spun.
So I am even more confused.
This is the article by Matt Cutts; I remember reading it.
It is dated October 3, 2012.
After that day our site started to lose traffic, not overnight but slowly, slowly, until
by mid-November 2012 it had lost about 65%. This is why I concluded from the beginning that it was the EMD update.
-
Ah! I was thinking it was the end of September because you said you were a casualty of EMD. If this happened in mid-November then it's definitely not EMD.
There were Panda updates on November 5 and November 21. If the drop doesn't coincide with those dates then it is not due to a major algorithm change. (By major I mean Panda/Penguin, as Google is constantly tweaking the algorithm.)
-
All of our top 10 positions dropped in mid-November 2012.
I said 90%, but I think that was an exaggeration; 65% for sure.
-
The thing is that if you lost 90% of your traffic at the end of September (i.e., Sept 27/28) then the issue is very likely either EMD or Panda. If you have a good site with 300 well-written unique pages then in my mind EMD is almost impossible. So, I would go investigating Panda issues. Duplicate and thin content are the top culprits, but there can be other factors.
There are other possibilities, though, including a change in URLs, DNS problems, hosting problems, malware issues, robots.txt problems, accidental noindexing, a competitor ramping up their SEO, etc.
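Two of those are quick to rule out yourself (the lines below are generic examples, not taken from your site). First, open yoursite.com/robots.txt and make sure a stray blanket block like this hasn't slipped in:

    User-agent: *
    Disallow: /

Second, view the source of a few key pages and check that a leftover staging tag like this isn't sitting in the <head>:

    <meta name="robots" content="noindex, nofollow">

Either of those alone can wipe out rankings, no algorithm update required.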
If the traffic drop was a little later, like October 5, then Penguin is a possibility. Penguin is related to over-optimized anchor text in your backlinks, among other things.
Sometimes, when a site is affected on a Panda date but doesn't seem to have Panda issues, it is possible that sites linking to your site were affected by Panda, and as such you have lost some of your link juice. But it is unlikely that 90% of your traffic would go because of this.
-
The site has only unique URLs; by "pages" I meant URLs too.
-
I would appreciate it if you could explain that in a bit more detail.
-
Keep in mind, too, that a lot of duplicate content is accidental. Google doesn't care about pages, per se - they care about unique URLs. So, if you have 300 unique pages, but something about your CMS translates that into 5,000 crawlable URLs, then you could definitely have problems.
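To make that concrete with a purely hypothetical example: a CMS can serve the same page under several crawlable URLs via sort parameters, session IDs, print views, and so on:

    http://example.com/widgets
    http://example.com/widgets?sort=price
    http://example.com/widgets?sessionid=a1b2c3
    http://example.com/widgets/print

To Google, those are four URLs with nearly identical content. A common fix, assuming the first URL is the one you want indexed, is a canonical tag in the <head> of every variant:

    <link rel="canonical" href="http://example.com/widgets" />

That consolidates the duplicate signals onto one URL without blocking the crawler.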
-
I really appreciate your knowledge, but the site has nothing to do with Panda, as it has 300 pages of unique, well-written content.
-
The number of unique pages doesn't really matter when it comes to Panda. You could have 300 unique pages, but if there are also 50 pages of copied content then this can trigger Panda.
But the other question I had was about thin content. An example of thin content would be a page that has, say, a product photo, a bunch of template text that is the same from page to page, a few ads, and then only one or two lines of text.
Another example of a thin page would be if you had a section of definitions and each definition had its own page; those could possibly be considered thin.
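If a section like that turns out to be the culprit, the usual options (sketched generically here, since we haven't seen the site) are to consolidate the definitions onto fewer, more substantial pages, or to keep the thin pages live but out of the index with a robots meta tag in each one's <head>:

    <meta name="robots" content="noindex, follow">

The "noindex, follow" combination removes the page from the index while still letting Google follow, and pass value through, its links.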
-
If this is really related to the EMD update, then I'd agree with Charles - starting over is a bad idea. My best guess is that the EMD update wasn't a penalty, per se - it was more like Google lowered the volume on EMDs. In other words, having one isn't bad now - it's just that it's not as good as it used to be. There's no way to fix that really (you can't turn the volume back up), but the risks of switching domains would probably far outweigh the benefits.
To back up Marie, though, a lot happened right around the EMD update, and it's really tough to diagnose. I'd definitely look at Panda factors, like "thin" content. Try to look at the site from Google's POV - what you view as unique doesn't matter, frankly. You could be spinning out URL-based duplicates, for example, and not realize it - that's more of a technical SEO issue (you're not doing anything devious, but the site may still be giving Google problems).
The other issue to consider is whether your EMD has caused you to really pile on exact-match anchor text, especially keyword-loaded anchor text. This could trigger Penguin or similar problems. This is often correlated with EMDs, even though it wouldn't necessarily be a result of the EMD update.
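For clarity, the difference looks something like this (hypothetical domain and anchors). An EMD tends to pile up exact-match links like the first one, whereas a natural profile mixes in branded and URL anchors like the other two:

    <a href="http://cheap-blue-widgets.example.com/">cheap blue widgets</a>
    <a href="http://cheap-blue-widgets.example.com/">Blue Widgets Co.</a>
    <a href="http://cheap-blue-widgets.example.com/">cheap-blue-widgets.example.com</a>

If the first pattern dominates your backlink profile, that's worth investigating regardless of the EMD update itself.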
If they've really just turned down the "volume", then you have to get other ranking factors in play - build more relevant, authoritative links, increase your social signals, etc. In other words, focus on aspects of SEO beyond simple on-page ranking factors.
-
Hi Marie, thanks for your answer. The site has over 300 unique pages.
-
When EMD hit on September 28, I asked people to send me domains that had been affected so that I could see if there were any patterns. I had over 100 domains sent to me, and the vast majority of them actually had Panda issues. Then, a few days after EMD, Google announced that they had also done a Panda refresh on September 27.
Of all the domains that I analyzed, I would say that only one was likely a true EMD candidate. This was a one-page site with very little content and several affiliate links. It previously was ranking well in a competitive niche. The only reason it was ranking well was its domain name. EMD was designed to take the ranking benefit away from sites that ONLY ranked because they had keywords in their domain name. It doesn't punish a site simply because there are keywords in the domain name.
You've mentioned that your pages are 100% unique. Do you have thin pages? If you have a section of your site with pages that carry very little content on them, then this can cause Panda to affect you. But there are other possible reasons as well.
-
Thanks, Charles, for your time.