I have a duplicate content problem
-
The web developer who built the website for my business, Premier Martial Arts Austin, disappeared and never set things up so that every URL begins with www. I now have a duplicate content problem and don't want to be penalized for it. I tried to set the preferred version in Webmaster Tools, but I can't get it to verify that I'm the website owner. Any idea what to do?
-
Thanks for the help!
-
Hey Steve,
If you are searching for a good CMS, take a look at Contao (www.contao.org), a German open-source CMS. I run it on a couple of sites, such as www.waescherei-suche.de. It's site-based and has everything out of the box (newsletter, calendar entries, user management, etc.). There are tons of plugins and upgrades for nearly every kind of purpose. If you run a small site with a few pages, this might be the thing you are looking for.
Sebastian
EDIT: It's available in English, too.
-
Thank you for helping me through this issue. I was worried I'd be facing a penalty for the duplicate content and am glad to get it corrected. This site uses Drupal as its CMS, and it's been a bear trying to change, add, or update anything. There are a number of problems with the site: it doesn't output the description tags even though I have them filled out, which you can confirm in View Source. Other changes I've wanted to make, like the Facebook button and the ALT tags, can't be made either, at least not by me.
I'm working on a new WordPress website to replace this one. Again, thank you for your help!
Sincerely,
Steve
-
Hi Steve.
Log into Google WMT and go to the Home page. From there press the "Add a Site" button. You will most likely see the non-www version of your site listed. Add the www version of your site.
Next, go through the process of verifying your site. When your site was originally set up, your website developer probably added the verification code to your web page or web server, so it may already be there. Try verifying without adding anything new to your site or server; if you receive an error, then follow Google's instructions.
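For reference, one of the verification options Google offers is a meta tag in the head section of your home page. A minimal sketch with a placeholder token (Google generates the real content value for you in Webmaster Tools):
<head>
  <!-- Placeholder: paste the exact content value Google shows you in WMT -->
  <meta name="google-site-verification" content="YOUR-TOKEN-FROM-GOOGLE" />
</head>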
Once your site is verified, you will then be able to go to Site Configuration > Settings > Set Preferred Domain.
Decide which version you prefer for your site and stick with it. It looks like you are already set up to use "www" so I would recommend using that URL style unless you had a specific reason to change it.
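Beyond the WMT setting, the cleanest long-term fix for www/non-www duplication is usually a sitewide 301 redirect, so only one version of each URL ever resolves. A minimal .htaccess sketch, assuming the site runs on Apache with mod_rewrite enabled (treat it as a starting point rather than a drop-in):
# Permanently (301) redirect every non-www request to the www version
RewriteEngine On
RewriteCond %{HTTP_HOST} ^pmaaustin\.com$ [NC]
RewriteRule ^(.*)$ http://www.pmaaustin.com/$1 [R=301,L]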
EDIT: I can't help offering a bit of feedback regarding your site. These are just suggestions, so feel free to disregard any you don't care for:
-
Change the "Facebook" block to the same color as Facebook pages. Your current green color blends in too much with everything else. It needs to stand out and be easy to find.
-
Update your copyright to 2011
-
Add a meta description tag to your pages. This tag won't help you rank better, but it is often visible to users in the search results and may influence whether they click through to your site.
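To illustrate, the description tag sits in the head section of each page and looks something like this (the wording below is only a placeholder; write a unique description for every page):
<meta name="description" content="Kids, teen, and adult martial arts classes at Premier Martial Arts Austin in Austin, TX." />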
-
Add ALT tags to your images, and try to align your image names with your keywords when possible. Presently they have names such as "teens" where "teen martial arts" might more accurately describe the image, and help you rank better.
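As a sketch (the filename and text below are made up for illustration, not taken from your site), a renamed image with an ALT attribute might look like:
<img src="/images/teen-martial-arts-class.jpg" alt="Teen martial arts class at Premier Martial Arts Austin" />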
You have other opportunities, but the above will help get you moving in the right direction. If you have a chance, I highly recommend reading the SEO Beginners Guide as it contains a lot of great information.
-
I would not worry too much about that info. You rank well for a term like "austin tx martial arts", so everything seems good.
On a side note: if you ever want to reuse content and check how similar two pages are, you could use a service like http://utext.rikuz.com/en/ to test it algorithmically.
Hope this answers your questions.
-
Thank you for your help, but according to the Crawl Diagnostics for this website's SEOmoz campaign, I have 16 pages of duplicate content. Should I disregard this info?
-
A site:pmaaustin.com search showed 19 pages indexed, all of them with www.
Calling a non-www version of a page returned:
HTTP/1.1 301 Moved Permanently
Date: Mon, 04 Jul 2011 05:16:49 GMT
Server: Apache
Location: http://www.pmaaustin.com/index.php?q=node/18
So everything looks good.
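For anyone who wants to repeat that check, a header-only request with a tool like curl (assuming it is installed) returns the same redirect:
curl -I "http://pmaaustin.com/index.php?q=node/18"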