Duplicate content error?
-
I am seeing an error for duplicate content for the following pages:
http://www.bluelinkerp.com/contact/
http://www.bluelinkerp.com/contact/index.asp
Doesn't the first URL just automatically redirect to the default page in that directory (index.asp)? Why is it showing up as separate duplicate pages?
-
@Streamline is right - as soon as the engines encounter both versions, they see it as two pages. It's no problem for human visitors, but it can create issues with duplicate URLs in the Google index. You can either 301-redirect the "index.asp" version back to the cleaner, root URL or use a canonical. ASP/.Net can be weird about 301s, so the canonical is probably easier.
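If you do want to try the 301 route, a minimal sketch in classic ASP would be something like the following, placed at the very top of index.asp (the exact target URL here is an assumption based on this thread):

```asp
<%
' Permanently redirect /contact/index.asp to the cleaner directory URL
Response.Status = "301 Moved Permanently"
Response.AddHeader "Location", "http://www.bluelinkerp.com/contact/"
Response.End
%>
```

Just make sure the redirect only fires when the page is requested as "index.asp", or you risk a redirect loop if IIS internally serves index.asp for requests to "/contact/".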
Generally, we suggest people canonical to the shorter/friendlier version, but the trouble here is that you're using the "/index.asp" version in your internal links. If you can change the internal links to the "/contact/" version, I'd prefer that, but if not, then set the canonical tag to the "/index.asp" version. The most important thing is consistency. If you link to one version but canonical to the other, Google could ignore your canonical tag. Put simply, your canonical URL isn't really canonical, in that case.
-
Thanks for the response! Is it best practice to specify the canonical URL as the "unspecific" link? Should I not rather specify the canonical URL as "http://www.bluelinkerp.com/contact/index.asp"?
-
They're two different URLs.
If the URL changes but the content stays the same then it's classed as duplicate content.
I feel your pain though - the number of duplicate pages I've ended up with just because copywriters like to capitalize their words...
-
There are several ways the search engines could have come across both versions of that page. If I had to guess, there are links to both URLs somewhere on the web, or even on your own site. It's a pretty common issue, but one that is easily resolved with the rel="canonical" tag.
Simply put the following code in the head of that page:
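For example (assuming you want the cleaner "/contact/" version to be the canonical URL, as suggested above):

```html
<link rel="canonical" href="http://www.bluelinkerp.com/contact/" />
```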
This tells the search engines to use your designated URL anytime they access that page.