Anyone else seeing increased duplication of domains since Penguin?
-
Hi, is it just me or are the Google SERPs showing more duplication of domains since the Penguin update? As an example, if I search for "business Christmas cards" on google.co.uk, then results 2, 3 and 17 are from the same domain. Similarly, results 4, 20, 21 and 22 are from the same domain. All results are "reasonable" in that they are designed to catch traffic for variations on this term, BUT I'm sure Google used to filter this duplication pre-Penguin. Am I imagining this increased duplication of domains? Gary
-
I just thought I would share an astonishingly good (or is that poor) example of the problem at the moment....
Do a search on www.google.co.uk for "wedding venues in buckinghamshire" (no quotes needed) and most of the results from page two onwards are coming from a single domain!
Simply terrible results, IMHO! How broken is that!
-
That's encouraging. Even if the results are a bit dodgy at the moment, I'm sure Google will sort this out as it creates such a poor experience.
-
Hi Gary, there is a lot of chat about this on the Google Webmaster Forum. It's happening all over. Some people are seeing the same domain dominate 8 out of the first 10 search results on page 1. Great if that's your domain; not so good for everyone else. Also not such a great user experience, in my opinion.
-
Thanks for this. Funny, I'm looking from the perspective of "breaking in" right now. It's just a bit of a shame because many of the duplicated pages are really only designed for search engines and are variations on a theme. Hopefully this is an unintended consequence of the latest updates and we will see it reversed. I don't think it improves the results. Gary
-
**Is it just me or are the Google SERPs showing more duplication of domains since the Penguin update?**
I noticed this too. I believe that Google is making it much more difficult to get two (or more) listings on the first page of the SERPs. However, the number you can get on other pages has really gone up.
I was reading about a study that looked at the first 1,000 positions in Google. Usually, only about 200 domains are present in the top 1,000. When I think about that, I wonder how a new website in a moderately competitive niche is supposed to break in.
-
Yeah, I have noticed the same. While searching on the keyword "sugar daddy oregon" on google.com, I noticed the SEO work I had done had secured the top 3 positions for seekingarrangement.com.
This only happened after those major updates.
Related Questions
-
Value of dormant domain
My client used to own a successful domain. They sold the business, and the domain was not used by the purchaser. My client has bought the business back and redirects the original STRONG domain to their new domain. How can I find out the current PageRank, traffic, etc. of the original domain? Mik
Technical SEO | mcorso0
-
Duplicate content : domain alias issue
Hello there! Let's say my client has 2 webshops (both have existed for a long time, so there are many backlinks and good authority on both):

- individuals.nl: for individuals (has 200 backlinks, let's say)
- pros.nl: exact same products, exact same content, but with different branding intended for professionals (has 100 backlinks, let's say)

So, both websites are 99% identical and it has to remain like that! Obviously, this creates duplicate content issues. Goal: I want individuals.nl to get all the ranking value (while pros.nl should remain accessible through direct access and appear for its own brand queries). Possible solutions:

- Implement canonical tags on pros.nl that point to individuals.nl. That way, individuals.nl will get all the ranking value, while pros.nl will still be reachable through direct access. However, individuals.nl will then replace pros.nl in the SERPs in the long term. The only thing I want is to keep pros.nl visible for its own brand queries -> that won't be possible through organic search results, so I'm just going to buy those "pros" queries through paid search!
- Put links on all pages of pros.nl to individuals.nl (but not the other way around), so that pros.nl will pass some ranking value to individuals.nl (but only a small part of the ranking value -> ideally, I would like to pass all link value to this domain).

Could someone advise me? (I know it sounds a bit complicated... but I don't have much choice!)
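For anyone wondering what the cross-domain canonical described above looks like in practice, it's a single tag in the `<head>` of each pros.nl page pointing at the matching individuals.nl URL (the domains are the ones from the question; the product path is made up for illustration):

```html
<!-- In the <head> of the pros.nl copy of a page,
     e.g. https://pros.nl/product/blue-widget -->
<link rel="canonical" href="https://individuals.nl/product/blue-widget" />
```

Bear in mind that Google treats a cross-domain canonical as a hint rather than a directive, so pros.nl pages may occasionally still appear where Google judges them the better result.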
Technical SEO | Netsociety0
-
Duplicate content
Hello mozzers, I have an unusual question. I've created a page that I am fully aware is near-100% duplicate content. It quotes the law, so it's not changeable. The page is very linkable in my niche. Is there a way I can build quality links to it that benefit my overall website's DA (I'm not bothered about the linkable page itself ranking) without risking Panda/duplicate content issues? Thanks, Peter
Technical SEO | peterm21
-
One server, two domains - robots.txt allow for one domain but not other?
Hello, I would like to create a single server with two domains pointing to it. Ex: domain1.com -> myserver.com/ and domain2.com -> myserver.com/subfolder. The goal is to create two separate sites on one server. I would like the second domain (/subfolder) to be fully indexed / SEO-friendly, with its robots.txt file allowing search bots to crawl. However, I would like to keep the first domain (server root) non-indexed, with its robots.txt file disallowing any bots / indexing. Does anyone have any suggestions for the best way to tackle this one? Thanks!
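Since robots.txt is fetched per hostname, one way to sketch this (assuming an nginx front end; the domain names and paths are the placeholders from the question) is to serve a different robots.txt depending on which domain was requested:

```nginx
# domain1.com -> server root: block all crawling
server {
    server_name domain1.com;
    root /var/www/myserver;

    location = /robots.txt {
        default_type text/plain;
        return 200 "User-agent: *\nDisallow: /\n";
    }
}

# domain2.com -> /subfolder: allow all crawling
server {
    server_name domain2.com;
    root /var/www/myserver/subfolder;

    location = /robots.txt {
        default_type text/plain;
        return 200 "User-agent: *\nDisallow:\n";
    }
}
```

Note that `Disallow: /` stops crawling but not necessarily indexing of URLs that are linked from elsewhere; adding an `X-Robots-Tag: noindex` response header on domain1.com is the stronger option if you need it fully out of the index.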
Technical SEO | Dave1000
-
Cross domain shared/duplicate content
Hi, I am working on two websites which share some of the same content and we can't use 301s to solve the problem; would you recommend using canonical tags? Thanks!
Technical SEO | J_Sinclair0
-
Duplication, pagination and the canonical
Hi all, and thank you in advance for your assistance. We have an issue of paginated pages being seen as duplicates by the Moz Pro crawlers. The paginated pages do contain duplicated content, but are not duplicates of each other. Rather, they pull through a summary of the product descriptions from other landing pages on the site. I was planning to use rel=canonical to deal with them; however, I am concerned, as the paginated pages are not identical to each other, but do feature their own sets of duplicate content! We have a similar issue with pages that are not paginated but feature tabs that alter the URL parameters like so: ?st=BlueWidgets ?st=RedSocks ?st=Offers. These are being seen as duplicates of the main URL, and again all feature duplicate content pulled from elsewhere on the site, but are not duplicates of each other. Would a canonical tag be suitable here? Many thanks
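For what it's worth, Google's recommendation at the time for true paginated series was rel="next"/rel="prev" links rather than canonicalizing every page to page one. A rough sketch (example.com and the parameter names are placeholders, not taken from the question):

```html
<!-- In the <head> of page 2 of a paginated series -->
<link rel="prev" href="https://example.com/widgets?page=1" />
<link rel="next" href="https://example.com/widgets?page=3" />

<!-- For tabbed parameter variants like ?st=..., a canonical to the main
     URL only makes sense if you're happy for the tab URLs themselves
     to drop out of the index -->
<link rel="canonical" href="https://example.com/widgets" />
```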
Technical SEO | .egg0
-
Is Buying Domains Good For SEO? Can I 301 redirect domains to an Original website?
I have a friend who purchased multiple domains related to their website. Each of these domains has the background of the original website and irrelevant content on it. Is it possible to redirect the various domains to certain pages on the original website? For example, if the website is www.shoes.com and they purchased domains such as www.leathermensshoes.com and a few others related to the website, is it SEO-friendly to 301 redirect the purchased domains to the original website?
Technical SEO | TSpike10
-
Redirecting root domains to sub domains
Mozzers: We have an instance where a client is looking to 301 www.example.com to www.example.com/shop. I know of several issues with this but wondered if anyone could chip in with any previous experiences of doing so, and what positive and negative outcomes came out of it. Issues I'm aware of:

- The root domain URL is the most linked page, and an HTTP 301 redirect only passes about 90% of the value, so you'll lose 10-15% of the link value of those links.
- Navigational queries (i.e. the "domain part" of "domain.tld") are less likely to produce Google sitelinks.
- Less deep crawling: Google crawls top-down, starting with the most linked page, which will most likely be your domain URL. As this no longer exists as a page, you waste the top level of crawling depth.
- robots.txt is only allowed on the root of the domain.

Your help as always is greatly appreciated. Sean
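On the robots.txt point in particular: if the redirect is limited to the exact root path, /robots.txt can still be served from the root as normal. A minimal sketch, assuming an nginx front end (www.example.com as in the question):

```nginx
server {
    server_name www.example.com;

    # 301 only the exact root URL; /robots.txt, /shop and
    # every other path are left untouched
    location = / {
        return 301 /shop;
    }
}
```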
Technical SEO | Yozzer0