How does Indeed.com make it to the top of every single search despite having aggregated or duplicate content?
-
How does Indeed.com make it to the top of every single search despite having duplicate content? Google says it prefers original content and will give preference to sites that publish original content, but that statement seems to be contradicted by Indeed.com: they aggregate content from other sites yet still rank higher than the original content providers' sites.
-
Hello Anirban,
The main reason large-scale websites like Indeed feature so prominently in the SERPs is that there are more ranking factors at work than just content. Panda was created to address duplicate content issues, and it is widely known that duplicate content can lead to penalties and reduced ranking potential, but duplicate content can be overshadowed by other ranking factors, such as a website's link profile and its authority relative to other websites in its niche.
For example, Wikipedia is widely cited by webmasters around the world, yet it features a lot of repetition and is an internal-linking nightmare where crawling and indexing are concerned. Even so, it is normal to find a Wikipedia page near the top of the results every time a "what is" query is made.
YouTube is in a similar situation. There are duplicated videos and duplicated content galore on that domain, but it is linked to so frequently that it simply cannot be beaten on domain authority and relevance.
While Indeed is probably affected by its duplicate content, that impact pales in comparison to the strength of its link profile and the relevance it has in its industry.
Hope this helps! Let me know if I can help with anything else.
Best regards,
Rob
-
Hi there,
Indeed.com is a massive site with strong domain authority (93/100), links, and social metrics, to name a few important factors. As a website that posts listings, duplicate content probably isn't as big an issue for it in the eyes of search engines as it would be for other types of sites. And though the content may be pulled from other places, it is user-generated in many cases, which could be treated differently from regular website content entered through a CMS. The same could be said for similar types of websites in other industries, like Zillow or Amazon.
I am sure others will weigh in but that should help clarify.
Related Questions
-
How to fix duplicate content for homepage and index.html
Hello, I know this probably gets asked quite a lot, but I haven't found a recent post about this in 2018 on Moz Q&A, so I thought I would check in and see what the best route/solution for this issue might be. I'm always really worried about making any (potentially bad/wrong) changes to the site, as it's my livelihood, so I'm hoping someone can point me in the right direction. Moz, SEMrush, and several other SEO tools are all reporting that I have duplicate content for my homepage and index.html (the same identical page). According to Moz, my homepage (without index.html) has PA 29 and index.html has PA 15. They are both showing status 200. I read that you can either do a 301 redirect or add rel=canonical. I currently have a 301 set up for my http-to-https redirect and don't have any rel=canonical added to the site/page. What is the best and safest way to get rid of the duplicate content and merge my non-index and index.html homepages together these days? I read that both the 301 and the canonical pass on link juice, but I don't know what the best route for me is given what I said above. Thank you for reading, any input is greatly appreciated!
On-Page Optimization | dreservices0 -
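Before choosing between the 301 and the rel=canonical, it helps to confirm exactly what the two URLs return today. Below is a minimal Python sketch of an outside-in check, assuming the requests and beautifulsoup4 packages are installed; the domain is a placeholder.

```python
# Minimal sketch: report how the homepage and /index.html respond today,
# i.e. status code, any redirect target, and any declared rel=canonical.
# The domain below is a placeholder.
import requests
from bs4 import BeautifulSoup

HOMEPAGE = "https://www.example.com/"
DUPLICATE = "https://www.example.com/index.html"

def inspect(url):
    resp = requests.get(url, allow_redirects=False, timeout=10)
    canonical = None
    if resp.status_code == 200:
        soup = BeautifulSoup(resp.text, "html.parser")
        link = soup.find("link", rel="canonical")
        if link and link.has_attr("href"):
            canonical = link["href"]
    return resp.status_code, resp.headers.get("Location"), canonical

for url in (HOMEPAGE, DUPLICATE):
    status, redirect_target, canonical = inspect(url)
    print(f"{url}\n  status: {status}\n  redirects to: {redirect_target}\n  canonical: {canonical}")
```

If /index.html comes back as a plain 200 with no canonical, either fix discussed in the question would consolidate the two URLs; if it already redirects or declares the homepage as canonical, the tools may simply not have recrawled yet.
-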
Duplicate Content for Men's and Women's Version of Site
So, we're a service where you can book different hairdressing services from a number of different salons (site being worked on). We're doing both a male and a female version of the site on the same domain, which users can select between on the homepage. The differences are largely cosmetic (allowing the designers to be more creative, have a bit of fun, and also have dedicated male grooming landing pages), but I was wondering about duplicate pages. While most of the pages on each version of the site will be unique (i.e. [male service] in [location] vs [female service] in [location], with the female version taking precedence when there are duplicates), what should we do about the likes of the "About" page? Pages like this would both be unique in wording but essentially offer the same information, and does it make sense to index two different "About" pages, even if the titles vary? My question is whether, for these duplicate pages, you would set the more popular one as the preferred version canonically, leave them both to be indexed, or noindex the lesser version entirely? Hope this makes sense, thanks!
On-Page Optimization | LeahHutcheon0 -
Duplicate Content when Using "visibility classes" in responsive design layouts - an SEO problem?
I have text in the right column of my responsive layout which shows up below the principal content on small devices. To do this I use visibility classes for DIVs: one DIV with a uniquely styled text block that is visible only on large screen sizes, and a copy of the same text in another DIV that shows up only on small devices, while the first DIV is hidden. Technically I have the same text twice on my page. Could this be detected as duplicate content or spam? I'm concerned because hidden on-page text (as with expandable/collapsible text blocks) is still read by bots, and in my case they will detect it twice. Does anybody have experience with this issue? Best, Holger
On-Page Optimization | inlinear0 -
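A crawler parses the delivered HTML, so both DIVs are visible to it regardless of which one the CSS hides. One way to see how often the repeated block actually occurs in the source is a quick fetch-and-count; a minimal Python sketch follows, assuming the requests package and using placeholder URL and snippet values.

```python
# Minimal sketch: count occurrences of a repeated text block in the raw HTML,
# which is what a crawler parses whether or not a DIV is hidden by CSS.
# URL and SNIPPET below are placeholders.
import re
import requests

URL = "https://www.example.com/some-page/"
SNIPPET = "the text block that appears in both DIVs"

html = requests.get(URL, timeout=10).text
normalized = re.sub(r"\s+", " ", html)  # collapse whitespace so line breaks don't hide matches
print(f"Snippet occurs {normalized.count(SNIPPET)} time(s) in the HTML source.")
```
-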
Duplicate Content on a Competitor's Site?
I've recently discovered large blocks of content on a competitor's site that have been copied and pasted from a client's site. From what I know, this will only hurt the competitor and not my client, since my client's version is the original. Is this true? Is there any risk to my client? Should we take action? Dino
On-Page Optimization | Dino640 -
Does schema.org assist with duplicate content concerns?
The issue of duplicate content has been well documented, and there are lots of articles suggesting that you noindex archive pages on WordPress-powered sites. Schema.org allows us to mark up our content, including marking a component's URL. So my question, simply: is noindexing archive (category/tag) pages still relevant when considering duplicate content? These pages are in essence a list of articles, each of which can be marked up as an article or blog posting, with the URL of the main article and all the other useful properties the schema gives us. Surely Google et al. are smart enough to recognise these article listings as gateways to the main content, thereby removing duplicate content concerns. Of course, whether or not doing this is a good idea will be subjective and based on individual circumstances - I'm just interested in whether the search engines can handle this appropriately.
On-Page Optimization | MarkCA0 -
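For illustration, the mark-up the question describes, an archive page declared as a list of entries whose URLs point at the main articles, could be generated along these lines. This is a minimal Python sketch using schema.org's ItemList and ListItem types with placeholder posts; it produces the JSON-LD but does not by itself answer the noindex question.

```python
# Minimal sketch: build ItemList JSON-LD for a category/archive page so each
# listed entry points at the canonical article URL. Posts below are placeholders.
import json

posts = [
    {"url": "https://www.example.com/first-article/", "name": "First Article"},
    {"url": "https://www.example.com/second-article/", "name": "Second Article"},
]

item_list = {
    "@context": "https://schema.org",
    "@type": "ItemList",
    "itemListElement": [
        {"@type": "ListItem", "position": i + 1, "url": post["url"], "name": post["name"]}
        for i, post in enumerate(posts)
    ],
}

# The output is meant to be embedded in the archive template inside a
# <script type="application/ld+json"> block.
print(json.dumps(item_list, indent=2))
```
-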
Is content aggregation good SEO?
I didn't see this topic specifically addressed here: what's the current thinking on using content aggregation for SEO purposes? I'll use flavors.me as an example. Flavors.me lets you set up a domain that pulls in content from a variety of services (Twitter, YouTube, Flickr, RSS, etc.). There's also a limited ability to publish unique content. So let's say that we've got MyDomain.com set up, and most of the content is being drawn in from other services. So there are blog posts from WordPress.com, videos from YouTube, a photo gallery from Flickr, etc. How would Google look at this scenario? Is MyDomain.com simply scraped content from the other (more authoritative) sources? Is the aggregated content perceived to "belong" to MyDomain.com or not? And most importantly, if you're aggregating a lot of content related to Topic X, will this content aggregation help MyDomain.com rank for Topic X? Looking forward to the community's thoughts. Thanks!
On-Page Optimization | GOODSIR0 -
Duplicate Content for Spanish & English Product
Hi there, Our company provides training courses and I am looking to provide the Spanish version of a course that we already provide in English. As it is an e-commerce site, our landing page for the English version gives the full description of the course and all related details. Once the course is purchased, a Flash-based course launches within a player window and the student begins the course. For the Spanish version of the course, my target customers are English-speaking supervisors purchasing the course for their Spanish-speaking workers. So the landing page will still be in English (just like the English version of the course) with the same basic description, with the only content differences on that page being the inclusion of the fact that this course is in Spanish and a few details around that. The majority of the content on these two separate landing pages will be exactly the same, as the description of the overall course is the same, just presented in a different language, so it needs to be two separate products. My fear is that Google will read this as duplicate content and I will be penalized for it. Is this a possibility, or will Google know why I set it up this way and not penalize me? If it is a possibility, how should I go about doing this correctly? Thanks!
On-Page Optimization | NiallTom0 -
Percentage of duplicate content allowable
Can you have ANY duplicate content on a page, or will the page get penalized by Google? For example, if you used a paragraph of Wikipedia content for the definition/description of a medical term but wrapped it in unique content, is that OK, or will that land you in the Google/Panda doghouse? If some level of duplicate content is allowable, is there a general rule-of-thumb ratio of unique to duplicate content? Thanks!
On-Page Optimization | sportstvjobs0
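There's no official unique-to-duplicate ratio that I'm aware of, but if you want to put a rough number on how much of a page overlaps a borrowed paragraph, shingle overlap is one simple measure. Below is a minimal Python sketch; the 5-word shingle size and both text values are placeholder assumptions for illustration.

```python
# Minimal sketch: estimate what share of a page's 5-word shingles also appear
# in a borrowed paragraph. Both text variables are placeholders.

def shingles(text, size=5):
    words = text.lower().split()
    if len(words) < size:
        return {" ".join(words)}
    return {" ".join(words[i:i + size]) for i in range(len(words) - size + 1)}

page_text = "full visible text of the page goes here ..."             # placeholder
borrowed_text = "the quoted definition or description goes here ..."  # placeholder

page_shingles = shingles(page_text)
overlap = len(page_shingles & shingles(borrowed_text)) / len(page_shingles)
print(f"Roughly {overlap:.0%} of the page's shingles also appear in the borrowed text.")
```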