Duplicate content
-
I have two sentences that I want to optimize two different pages for.
Sentence number one is:
travel to ibiza by boat
Sentence number two is:
travel to ibiza by ferry
My question is: can I have the same content on both pages except for the keywords, or will Google treat that as duplicate content and punish me? And if yes, where is the limit for duplicate content?
-
Yes, I agree with this!
-
You cannot take the same copy and reuse it for another topic just by swapping out the keywords.
-
Yes, I agree with ameliavargo: you could make just a single page that includes both keywords and potentially rank for both phrases. Boat and ferry are very similar, and I am not sure you could make two different pages each relevant enough to rank well for its own term. That could actually backfire.
I would definitely opt for a single optimized page that includes both words.
-
I may be missing the point here, but why do you need two pages at all? Google understands synonyms, and since a boat and a ferry are essentially the same thing, you could probably get away with just writing one page, especially as the message would be the same for both topics.
If you feel you need two pages, then definitely make them unique!
As for URLs: if you can, make sure your keyword is in the URL.
Hope this helps
-
I don't know how much control you have over your URL structure, but you could always create:
-domain.com/toibiza/ --> tips on how to travel to ibiza
-domain.com/toibiza/ferry/ --> content focused on the advantages of going to ibiza by ferry (with a link to book now, if you have one)
-domain.com/toibiza/boat/ --> content focused on the advantages of going by boat (with a link to book now, if you have one)
You can link to both the boat and ferry pages from the ibiza page, and to each other, presenting them as alternatives.
Creating three different texts of 200 words won't cost much with a freelancer, though you can always try to repurpose some content from elsewhere.
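To make the internal linking in that structure concrete, here is a minimal sketch of the cross-links between the three pages. The paths and anchor texts are just the hypothetical ones from the suggestion above, not anything prescribed:

```html
<!-- On domain.com/toibiza/ : link down to both transport options -->
<a href="/toibiza/ferry/">Travel to Ibiza by ferry</a>
<a href="/toibiza/boat/">Travel to Ibiza by boat</a>

<!-- On domain.com/toibiza/ferry/ : link back up to the hub page
     and across to the alternative -->
<a href="/toibiza/">How to travel to Ibiza</a>
<a href="/toibiza/boat/">Prefer to go by boat?</a>
```

The boat page would mirror the ferry page's links, so each page reinforces the others without duplicating their copy.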
-
Why not use - www.domain.com/your-key-word.html
-
Any difference or are you actually on the same thing once you hop on a boat/ferry?
-
Thank you, but can the URLs be similar?
Like this: www.domain.com/ferrytoibiza and www.domain.com/boattoibiza
And then make unique content on both pages?
-
Definitely invest in two unique pieces of copy. If there are thousands of pages and you are unable to get them all done at once, you could try variable-driven text; however, replacing one element and expecting Google to like it is a bit too much.
Related Questions
-
Duplicate Content from Multiple Sources Cross-Domain
Hi Moz Community, We have a client who is legitimately repurposing, or scraping, content from site A to site B. I looked into it, and Google recommends the cross-domain rel=canonical tag below: http://googlewebmastercentral.blogspot.com/2009/12/handling-legitimate-cross-domain.html The issue is that it is not a one-to-one situation. In fact, site B will have content from several of site A's pages all on one URL. Below is an example of what they are trying to accomplish. EX - www.siteB.com/apples-and-oranges is made up of content from www.siteA.com/apples & www.siteA.com/oranges So with that said, are we still in fear of getting hit for duplicate content? Should we add multiple rel=canonical tags to reflect both pages? What should our course of action be?
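For reference, in the simple one-to-one case the cross-domain canonical Google describes is a single link element in the head of the duplicate page (the URLs here are the hypothetical ones from the question). The tag is a one-duplicate-to-one-original hint, which is part of why the many-pages-merged-into-one scenario above does not map onto it cleanly:

```html
<!-- In the <head> of a page on site B that duplicates a single
     page from site A, pointing back at the original -->
<link rel="canonical" href="http://www.siteA.com/apples" />
```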
Image centric site and duplicate content issues
We have a site that has very little text; the main purpose of the site is to let users find inspiration through images. Thousands of images come to us each week to be processed by our editorial team, so as part of our process we select a subset of the best images and process those with titles, alt text, tags, etc. We still host the other images, and users can find them through galleries that link to the processed and unprocessed image pages. Due to the lack of information on the unprocessed images, we are having lots of duplicate content issues (the layout of all the image pages is the same, and there isn't any unique text to differentiate the pages; the only changing factor is the image itself on each page). Any suggestions on how to resolve this issue will be greatly appreciated.
Rel=canonical overkill on duplicate content?
Our site has many different health centers - many of which contain duplicate content since there is topic crossover between health centers. I am using rel canonical to deal with this. My question is this: Is there a tipping point for duplicate content where Google might begin to penalize a site even if it has the rel canonical tags in place on cloned content? As an extreme example, a site could have 10 pieces of original content, but could then clone and organize this content in 5 different directories across the site each with a new url. This would ultimately result in the site having more "cloned" content than original content. Is this at all problematic even if the rel canonical is in place on all cloned content? Thanks in advance for any replies. Eric
Dealing with duplicate content
A manufacturer's product website (product.com) has an associated direct online store (buyproduct.com). The online store has a lot of duplicate content: product detail pages and key article pages such as technical/scientific data are duplicated on both sites. What are some ways to lessen the duplicate content here? product.com ranks #1 for several key keywords, so any penalties can't be too bad, and buyproduct.com is moving its way up the SERPs for similar terms. Ideally I'd like to combine the sites into one, but that's not in the budget right away. Any thoughts?
How can something be duplicate content of itself?
Just got the new crawl report, and I have a recurring issue that comes back around every month or so, which is that a bunch of pages are reported as duplicate content for themselves. Literally the same URL: http://awesomewidgetworld.com/promotions.shtml is reporting that http://awesomewidgetworld.com/promotions.shtml is both a duplicate title, and duplicate content. Well, I would hope so! It's the same URL! Is this a crawl error? Is it a site error? Has anyone seen this before? Do I need to give more information? P.S. awesomewidgetworld is not the actual site name.
Duplicate Content Issue within the Categories Area
We are in the process of building out a new website; it has been built in Drupal. The scan report from the SEOmoz Crawl Diagnostics makes it look like I have a duplicate content issue. Example: we sell vinyl banners, so we have many different templates one can use from within our online Banner Builder tool. We have broken them down via categories. Issue: Duplicate Page Content. /categories/activities has 9 other URLs associated with this issue; I have many others, but this one will work as an example. Within this category we have multiple templates attached to the page. The templates do not each need their own page; however, we use them to pull the designs into the one activities landing page. I am wondering if I need to noindex, nofollow each of those individual templates and just get the main top-level category page indexed. Or is there a better way to do this to minimize the impact of Panda?
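If the decision is to keep the individual template pages out of the index while still letting crawlers follow the links on them, the usual mechanism is a robots meta tag in each template page's head. This is a generic sketch, not Drupal-specific configuration:

```html
<!-- In the <head> of each individual template page:
     keep the page out of the index, but still crawl its links -->
<meta name="robots" content="noindex, follow" />
```

In Drupal this would typically be applied to the template content type as a whole rather than hand-edited per page.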
Duplicate Content
Hi - We are due to launch a .com version of our site, with the ability to put prices into local currency, whereas our .co.uk site will be solely £. If the content on both the .com and .co.uk sites is the same (at product level mainly), will we be penalised? What is the best way to get around this?
Complex duplicate content question
We run a network of three local web sites covering three places in close proximity. Each site has a lot of unique content (mainly news), but there is a business directory that is shared across all three sites. My plan is that the search engines only index the businesses in the directory that are actually located in the place each site is focused on, i.e. listing pages for businesses in Alderley Edge are only indexed on alderleyedge.com, and businesses in Prestbury only get indexed on prestbury.com - but all businesses have a listing page on each site. What would be the most effective way to do this? I have been using rel canonical, but Google does not always seem to honour this. Would using meta noindex tags where appropriate be the way to go? Or would changing the URL structure to include the place name, and using robots.txt, be a better option? As an aside, my current URL structure is along the lines of: http://dev.alderleyedge.com/directory/listing/138/the-grill-on-the-edge Would changing this have any SEO benefit? Thanks Martin