Database-driven content producing false duplicate content errors
-
How do I stop the Moz crawler from creating false duplicate content errors? I have yet to submit my website to the Google crawler because I am waiting until I fix all my site optimization issues.
Example: contactus.aspx?propid=200, contactus.aspx?propid=201... these are the same page, just with some old URL parameters stuck on them. How do I get Moz and Google not to consider these duplicates? I have looked at http://moz.com/learn/seo/duplicate-content with respect to
rel="canonical"
and I think I am just confused.
Nick
-
All of you guys rock! I have never been involved in a community that has had the right answers every time... I used the rel="canonical" tag on all my static pages such as directions, policies, contact, etc., and it removed all the parameters, thereby eliminating them from standing out in the Moz crawl. I feel like an idiot for not knowing about this HTML tag and its importance. My Moz crawl now looks so, so much better.
By "old URL parameters" I just meant a few seconds old: the user is on property.aspx?property=1, and when they move to a static page such as contact, directions, or policy, we get another URL like contact.aspx?property=1. With 150 properties times 10 static pages, I basically just created 150 duplicate content errors for the contact page alone, because contact.aspx?property=1 through contact.aspx?property=150 are all the same page. I am sure this has killed my SEO. SO THAT PROBLEM IS NOW FIXED!!
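In case it helps anyone else, the tag I added looks something like this (contact.aspx shown; the other static pages get the same tag with their own URLs):
<link rel="canonical" href="http://www.destinationbigbear.com/contact.aspx" />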
NOW, to revisit what zenstorageunits said about URL rewriting: there are many different ways to do it in .NET, but Miketek, I would not have to create subdirectories because it is done in the code... they are more like virtual directories...
zenstorageunits, or anyone else for that matter: is it worth it for me to hire somebody to create a URL rewrite app that can make the following changes?
http://www.destinationbigbear.com/property_detail.aspx?propid=202 to
http://www.destinationbigbear.com/big-bear-cabin-rentals/a-true-cabin/details
and
http://www.destinationbigbear.com/property_photos.aspx?propid=202 to
http://www.destinationbigbear.com/big-bear-cabin-rentals/a-true-cabin/photos
See, every one of my 150 cabins has these pages: info, photos, calendar, video, reviews, rates... and they all have unique cabin names. So it is basically 150 cabins x 6 pages = 900 unique pages with unique content, but really only 6 pages dynamically changed by 150 cabins.
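To sketch what I imagine that rewrite would look like (assuming the IIS URL Rewrite module; the cabin= parameter is just my guess at how it might work, since the page would have to look up the propid from the cabin name):
<rewrite>
  <rules>
    <rule name="CabinDetailPages" stopProcessing="true">
      <!-- {R:1} captures the cabin name, e.g. "a-true-cabin" -->
      <match url="^big-bear-cabin-rentals/([^/]+)/details$" />
      <action type="Rewrite" url="/property_detail.aspx?cabin={R:1}" />
    </rule>
  </rules>
</rewrite>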
I have been able to dynamically change the page titles for every one of these 900 database-driven pages, such as
"Big-Bear-Cabin | A True Cabin Photos" or "Big-Bear-Cabin | A True Cabin Calendar", and so on.
-
Hi Nick,
I think you've gotten some good tips here - I'd agree with Prestashop that the preferred solution would be to find where these parameters are being included in links to this page and remove them.
Failing that, zenstorageunits's advice to use rel="canonical" would be my recommendation - or a 301 redirect from the URLs that include parameters back to the core URL would work.
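If you went the 301 route on IIS, a sketch might look like this (assuming the URL Rewrite module and a parameter named "property" as in your example):
<rule name="StripPropertyParam" stopProcessing="true">
  <match url="^contact\.aspx$" />
  <conditions>
    <!-- only fires when a property= parameter is present -->
    <add input="{QUERY_STRING}" pattern="(^|&amp;)property=" />
  </conditions>
  <action type="Redirect" url="/contact.aspx" appendQueryString="false" redirectType="Permanent" />
</rule>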
I wouldn't convert these parameters to subdirectories unless they are integral to the way your site works and pull up unique content. You called them "old parameters," so it sounds like they're not supposed to be there - probably not a case where you'd want to convert them to subdirectories.
Failing the above, you could use the Google Webmaster Tools "URL Parameters" interface to tell Googlebot to ignore these parameters.
Overall, your best course of action is to find and remove the links that include the parameters.
I'd also add that the Moz crawl report is highly sensitive to "duplicate content," and I often find it flags issues as high/medium priority that are not actually going to have a significant impact on the site. You have to take the crawl report with a grain of salt - while duplicate content can be a serious issue for some sites (ecommerce retailers with duplication issues across a wide product catalog, for example), in most cases it has minimal impact and isn't something I'd hold up your site launch for.
Best of Luck,
Mike -
I agree with zenstorageunits about using rel=canonical, but one thing I would like to point out is that Moz does not create false errors. It is a simple crawler, not like Google. Google will actually try to follow links that people have used before and that show up in your analytics files; Moz uses no logic like that, it just jumps from page to page. If it is picking up a page with a query string like that, then it is linked somewhere on your site. I would find those links and take them off.
-
You have a few options. One thing I would look into is URL rewriting, to change
contactus.aspx?propid=200
to
contactus/propid/200
Look at http://msdn.microsoft.com/en-us/library/ms972974.aspx for how to do that in IIS.
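As a rough sketch (this assumes IIS 7+ with the URL Rewrite module; the article above also covers approaches for older versions of IIS):
<rewrite>
  <rules>
    <rule name="ContactUsRewrite" stopProcessing="true">
      <!-- serves /contactus/propid/200 from the existing page -->
      <match url="^contactus/propid/(\d+)$" />
      <action type="Rewrite" url="/contactus.aspx?propid={R:1}" />
    </rule>
  </rules>
</rewrite>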
A better option, I think, if you need to keep the parameters the way they are, is to use the rel canonical tag. Look at this Moz article:
http://moz.com/blog/rel-confused-answers-to-your-rel-canonical-questions
but basically you would need to add something like this to your contact.aspx page (replace example.com with your website URL):
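<!-- placed inside the <head> of contact.aspx -->
<link rel="canonical" href="http://www.example.com/contact.aspx" />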
This suggests to web crawlers, like the Google or Moz crawler, that those pages should be associated with the contact.aspx page.
Related Questions
-
Duplicate content or an update?
Buying guide and product category page competing for the same keyword? Got a "nuts and bolts website" selling basic stuff. Imagine selling simple nuts, bolts, and washers (the little ring that goes in between) in different metals. Imagine a website with a very wide and deep line of these simple products. For long-tail keywords we rank well (example: 0.25 inch bolts). For the keyword "nuts bolts" our main category page used to rank well, low 1st page to second page, up against the big guys (Amazon, Walmart, Target, Costco, some drug store that may have a mixed pack of nuts and bolts, but still Google doesn't see the difference and lists 2 pages each for these guys). But then in mid-February there was an update, and suddenly our "Buying guide for nuts and bolts" started to rank higher and compete with our own product category page. That was never our intention. These two pages now compete for rankings on page 4. Clearly there were more words on the buying guide page, but no changes had been made to it for, well, months or years. To make up for it, some more words were added to the category page, but of course there are only so many ways you can phrase words about "nuts and bolts" without sounding a bit duplicated/re-written. So what do I do now? Clearly the product category page is the one we would like to rank highest, with the guide a close 2nd. Most customers don't need the buying guide, but it is good to have and great support, as we got lots of good comments from customers who read it. Made a link to the buying guide from the category page and vice versa. The category page has an embedded video. Moz lists the page authority for the category page as 16 and the buying guide as 1, but clearly G sees it differently. Already tried to change the meta title tag and description a little, but it is hard to do if the words "nuts bolts" are to appear in the description, or people won't know what to expect. Could just insert a "do not index" for the buying guide, but that's not a good long-term solution. Unfortunately I am out of imagination at this point. Any good suggestions? Thanks, Kim
Technical SEO | KimX
-
HTTPS Duplicate Content
My previous host was using shared SSL, and my site was also working over https, which I didn't notice previously. Now I have moved to a new server where I don't have any SSL, and my website does not work with the https version. The problem is that I have found Google has indexed one of my blogs, http://www.codefear.com, with the https version too. My blog traffic is continuously dropping, I think due to this duplicate content. Now there are two results, one with the http version and another with the https version. I searched over the internet and found 3 possible solutions:
1. No-index the https version
2. Use rel=canonical
3. Redirect the https versions with a 301 redirect
Now I don't know which solution is best for me, as the https version is no longer working. One more thing: I don't know how to implement any of these solutions. My blog is running on WordPress. Please help me overcome this problem, and after solving this duplicate issue, do I need to send a reconsideration request to Google? Thank you
Technical SEO | RaviAhuja
-
Duplicate Content Issue
SEOmoz is giving me a number of duplicate content warnings related to pages that have "email a friend" and/or "email me when back in stock" versions of a page. I thought I had those blocked via my robots.txt file, which contains the following:
Disallow: /EmailaFriend.asp
Disallow: /Email_Me_When_Back_In_Stock.asp
I had thought that the robots.txt file would solve this issue. Anyone have any ideas?
Technical SEO | WaterSkis.com
-
Duplicate content due to csref
Hi, When I go through my pages, I can see that a lot of my csref codes result in duplicate content when SEOmoz runs its analysis. Of course I get important information from my csref codes, but I'm quite uncertain how much they affect my SEO results. Does anyone have any insight into this? Should I be more cautious about using csref codes, or don't they create problems big enough for me to worry about?
Technical SEO | Petersen11
-
Duplicate content
I have two pages, where the second duplicates the content of the first. Example:
www.mysite.com/mypage
www.mysite.com/mysecondpage
If I insert a rel=canonical tag, am I still creating duplicate content?
Best regards,
Wendel
Technical SEO | peopleinteractive
-
Press Releases & Duplicate Content
How do you do press releases without duplicating the content? I need to post it on my website along with having it on PR websites. But isn't that considered bad for SEO since it's duplicate content?
Technical SEO | MercyCollege
-
Duplicate Page Content and Titles
A few weeks ago my error count went up for Duplicate Page Content and Titles - 4 errors in all. A week later the errors were gone... but now they are back. I made changes to the web.config over a month ago but nothing since. SEOmoz is telling me the duplicate content is http://www.antiquebanknotes.com/ and http://www.antiquebanknotes.com. Thanks for any advice! This is the relevant web.config:
<rewrite>
  <rules>
    <rule name="CanonicalHostNameRule1">
      <match url="(.*)" />
      <conditions>
        <add input="{HTTP_HOST}" pattern="^www.antiquebanknotes.com$" negate="true" />
      </conditions>
      <action type="Redirect" url="http://www.antiquebanknotes.com/{R:1}" />
    </rule>
    <rule name="Default Page" enabled="true" stopProcessing="true">
      <match url="^default.aspx$" />
      <conditions logicalGrouping="MatchAll">
        <add input="{REQUEST_METHOD}" pattern="GET" />
      </conditions>
      <action type="Redirect" url="/" />
    </rule>
  </rules>
</rewrite>
Technical SEO | Banknotes
-
50+ duplicate content pages - Do we remove them all or 301?
We are working on a site that has 50+ pages that all have duplicate content (1 for each state, pretty much). Should we 301 all 50 of the URLs to one URL, or should we just completely get rid of all the pages? Are there any steps to take when removing pages completely? (Submit a sitemap to Google Webmaster Tools, etc.) Thanks!
Technical SEO | Motava