Handling of Duplicate Content
-
I just recently signed up and joined the moz.com system.
The initial report for our web site shows we have lots of duplicate content. The site is real estate based, and we are loading IDX listings from other brokerages into our site.
Even though these listings look alike, they are not: each has its own photos, description and address. So why do they appear as duplicates? I assume they are all too closely related. They are primarily lots for sale, and it looks like lazy agents with 4 or 5 lots entered the same description for each one.
Unfortunately for us, part of the IDX agreement is that you cannot pick and choose which listings to load and you cannot change the content. You are either all in or you cannot use the system.
How should one manage duplicate content like this? Or should we ignore it?
Out of 1,500+ listings on our web site, the report shows 40 of them as duplicates.
-
Obviously Dirk is right, but you will still lose the opportunity to rank in search engines for the related key phrases, and if you have worked in the real estate industry before, you will have an idea of how difficult it is to rank and what the advantages of ranking for a particular term are.
In my opinion, on-page duplication kicks in when a page is 60 to 70% identical to another page on the website, and that is exactly what is happening in your case. I do accept that you cannot change the descriptions, but you can add a section to the page that explains more about the property: a custom box where you include your own written content.
I agree it is a lot of work at your end, but at the end of the day you will get a chance to rank well for those important key phrases, which can bring you a great number of conversions.
Just a thought!
-
Nice idea - I have already started this. Now I just have to add it to each listing. Thanks!!
-
You could point a canonical to the original source (in fact, that is the way Google prefers it). It's a great solution if it's you who is syndicating the content. However, if you did that, you would lose any opportunity to rank on that content.
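For reference, "pointing a canonical" just means placing a link element in the head of each listing page that references the original listing URL. Below is a rough sketch of how such a tag might be generated; the helper name and the REALTOR.ca URL are made-up examples, not from this thread.

```python
# Rough sketch (not from this thread): building the <link rel="canonical">
# element that would go in the <head> of a syndicated listing page.
# The helper name and the REALTOR.ca URL below are hypothetical examples.

def canonical_tag(source_url: str) -> str:
    """Return the canonical link element pointing at the original listing."""
    return f'<link rel="canonical" href="{source_url}">'

# e.g. a listing syndicated from REALTOR.ca
print(canonical_tag("https://www.realtor.ca/real-estate/12345678/example-listing"))
# -> <link rel="canonical" href="https://www.realtor.ca/real-estate/12345678/example-listing">
```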
Google's view (source: https://support.google.com/webmasters/answer/66359?hl=en):
"Duplicate content on a site is not grounds for action on that site unless it appears that the intent of the duplicate content is to be deceptive and manipulate search engine results. If your site suffers from duplicate content issues, and you don't follow the advice listed above, we do a good job of choosing a version of the content to show in our search results."
The big problem with duplicate content across different domains is that it's up to Google to decide which site is going to be displayed. This could be the site that is syndicating the content, but it could also be the site with the highest authority.
In your case, if possible, I would try to enrich the content you syndicate with content from other sources. Examples could be interesting stats on the neighbourhood, like average age, income, nearby schools, number of houses sold & average price, etc., or other types of content that might interest potential buyers. This way your content becomes more unique and probably more interesting (and engaging) for your visitors (and for Google).
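To make the idea concrete, here is a rough sketch of what that enrichment could look like. The data shapes, names and values are entirely hypothetical; the point is simply that the neighbourhood data is gathered by you and is therefore unique to your site.

```python
# Rough sketch (hypothetical data, not from this thread) of enriching a
# syndicated IDX listing with locally gathered neighbourhood stats, so the
# rendered page carries content no other site using the same feed will have.

listing = {                      # as received from the IDX feed (cannot be changed)
    "mls_id": "X1234567",
    "address": "123 Example Rd",
    "description": "Building lot for sale. Great opportunity.",
}

neighbourhood_stats = {          # gathered independently (census data, school boards, ...)
    "average_age": 41,
    "median_income": 68000,
    "nearby_schools": ["Example Public School", "Example High School"],
    "homes_sold_last_year": 112,
    "average_sale_price": 385000,
}

def extra_sections(listing: dict, stats: dict) -> list[str]:
    """Return site-unique paragraphs to render below the unchangeable IDX description."""
    return [
        f"About the neighbourhood around {listing['address']}:",
        f"Average resident age {stats['average_age']}, median household income ${stats['median_income']:,}.",
        f"Nearby schools: {', '.join(stats['nearby_schools'])}.",
        f"{stats['homes_sold_last_year']} homes sold last year at an average price of ${stats['average_sale_price']:,}.",
    ]

for paragraph in extra_sections(listing, neighbourhood_stats):
    print(paragraph)
```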
Hope this helps,
Dirk
-
Pretty much everyone has the same feed. Would it be wise to include the original source? Seeing as we are getting the data from REALTOR.ca, should we point the canonical to where the listing comes from? I am new to this stuff, so I am hoping that I am getting this right.
Thanks T
-
Hi,
This is a question that is asked quite often on Moz Q&A. Pages that have a big chunk of source code in common are sometimes considered duplicates, even if the content is quite different. Moz recently did a post on their tech blog about how they identify duplicates (it's quite technical, but still interesting to read: https://moz.com/devblog/near-duplicate-detection/).
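To give a feel for why this happens, here is a simplified, generic illustration of near-duplicate detection using word shingles and Jaccard similarity. This is a textbook approach, not Moz's actual implementation (the devblog post above describes what they really do); it just shows how two listings with identical descriptions look almost the same to a crawler even though the address differs.

```python
# Generic textbook illustration of near-duplicate detection, not Moz's algorithm.
# Pages are compared by the overlap of their word "shingles" (overlapping n-grams).

def shingles(text: str, size: int = 3) -> set[tuple[str, ...]]:
    """Break text into overlapping word n-grams ('shingles')."""
    words = text.lower().split()
    return {tuple(words[i:i + size]) for i in range(len(words) - size + 1)}

def similarity(a: str, b: str) -> float:
    """Jaccard similarity of the two shingle sets (1.0 = identical)."""
    sa, sb = shingles(a), shingles(b)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

# Two hypothetical listings: different lot numbers, identical agent description.
page_a = "Lot 4, Example Road. Level building lot close to lake, hydro at the road, great opportunity."
page_b = "Lot 5, Example Road. Level building lot close to lake, hydro at the road, great opportunity."

print(f"{similarity(page_a, page_b):.2f}")  # very high - flagged as near-duplicates
```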
If only the address & image are different but the description is identical, the page will probably be considered a duplicate by the Moz bot. If it's only 40 of 1,500 listings, I wouldn't worry too much about it, especially because you are unable to change the content anyway.
I would be more worried if other real estate companies used the same feed and hence provided exactly the same content on their sites - not only the 40 listings you mention, but the full set.
rgds
Dirk