Duplicate content due to credit card testing
-
I recently launched a site, http://www.footballtriviaquestions.co.uk, which uses PayPal. In order to test the PayPal functionality I set up a zapto.org domain via a dynamic DNS service that points a permanent hostname directly at the computer I built the website on.
It appears that Google has now indexed the zapto.org website.
Will this cause problems for my main website? The zapto.org site contains an almost exact duplicate of the content held on the main site.
I've looked in Google Webmaster Tools for the main website and it doesn't mention any duplicate content, but I'm currently not in the top 50 on Google for "football trivia questions", despite SEOmoz giving my home page an A rating. The page ranks at position 16 in Yahoo and Bing. This seems odd to me, although I do have very few backlinks pointing to my site.
If the duplicate content is likely to be causing me problems, what would be the best way to knock the zapto.org results out of Google?
-
Thanks for the info John, I've added the tags you suggest. Does anyone know how likely this is to have affected the main site's Google rankings?
Also, is there any way to know whether Google has removed the zapto site from its index? I've been running 'site:<sitename>.zapto.org' searches over the past few days and have seen no decrease in the number of pages indexed.
-
Yes, it will be seen as duplicate content: the two websites are identical.
Firstly, I would immediately add a rel="canonical" tag to each page indexed on the zapto.org website, pointing to the corresponding page on your main website. I would also add a noindex meta tag to the zapto pages.
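For example, each indexed staging page could carry something like this in its <head> (quiz.html is a hypothetical path used here for illustration; each page should point at its own main-site URL):

    <!-- on http://<sitename>.zapto.org/quiz.html -->
    <link rel="canonical" href="http://www.footballtriviaquestions.co.uk/quiz.html">
    <meta name="robots" content="noindex">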
You could also set up 301 redirects on the zapto website, pointing every page to its equivalent on your main website.
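If the development machine runs Apache with mod_rewrite enabled (an assumption about your setup, so test it first), a minimal .htaccess sketch along these lines would 301 every zapto request to the same path on the live domain:

    RewriteEngine On
    RewriteCond %{HTTP_HOST} \.zapto\.org$ [NC]
    RewriteRule ^(.*)$ http://www.footballtriviaquestions.co.uk/$1 [R=301,L]

A 301 also tells Google the duplicate URLs have permanently moved, so they should drop out of the index as they are recrawled.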
Lastly, you could take the zapto website down entirely; it's just a subdomain of zapto.org, or at least that is how it appears in the search results.
Hope that helps