Crawl Diagnostics Questions
-
SEOmoz is reporting that I have 50+ pages with a duplicate content issue based on this URL: http://www. f r e d aldous.co.uk/art-shop/art-supplies/art-canvas.html?manufacturer=178
But I have included this tag in the source: <link rel="canonical" href="http://www.f r e daldous.co.uk/art-shop/art-supplies/art-canvas.html" />
(I have purposefully added white space to the URLs in this message as I'm not sure about the rules for posting links here)
I thought this "canonical" tag prevented the duplicate content from being indexed?
Is the reporting by SEOmoz wrong, or is it just being overcautious?
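As a sanity check, it can help to confirm the canonical tag is actually present in the HTML the server sends. A minimal sketch using only Python's standard library (the markup and URL below are made-up examples, not your real page):

```python
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Records the href of the first <link rel="canonical"> seen."""
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and a.get("rel") == "canonical" and self.canonical is None:
            self.canonical = a.get("href")

# Feed it the raw source of one of the filtered (?manufacturer=...) pages
html = """<head>
<link rel="canonical" href="http://example.com/art-canvas.html" />
</head>"""
finder = CanonicalFinder()
finder.feed(html)
print(finder.canonical)  # prints the canonical href found in the markup
```

If this prints `None` for the live page, the tag is not reaching the crawler at all, which would be a separate problem from the one discussed below.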
-
Hi Niall,
This isn't a question of whether the canonical tag is properly applied; it's a case where two or more pages are so similar in code that they trip the SEOmoz duplicate content flags.
First of all, those pages look different to us humans. But the SEOmoz web app uses a similarity threshold of 95% of the HTML code, and that comparison takes everything on the page into account, both hidden and visible.
In this case it's counting all of the navigation and sidebar code as well, which is significant. The unique content, the part that actually matters, makes up less than 5% of the code.
Here's a tool you can use to check the similarity: http://www.duplicatecontent.net/
I ran the pages through a couple of tools, which showed 98% HTML similarity and 99% text similarity.
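The threshold idea itself is easy to reproduce. As a rough illustration only (not the exact algorithm the SEOmoz crawler or those tools use), Python's difflib gives a similarity ratio between two strings of markup, and two pages that share a heavy template score high even when their unique copy differs completely:

```python
import difflib

def similarity(html_a: str, html_b: str) -> float:
    """Return a 0-1 similarity ratio between two HTML documents."""
    # autojunk=False so heavily repeated characters in the template
    # are not discarded as "popular" noise by SequenceMatcher
    return difflib.SequenceMatcher(None, html_a, html_b, autojunk=False).ratio()

# Two made-up pages sharing one big navigation template
nav = "<nav>" + "<a href='/cat'>link</a>" * 50 + "</nav>"
page_a = nav + "<p>Art canvas by manufacturer 178</p>"
page_b = nav + "<p>Design-led gifts by manufacturer 436</p>"

ratio = similarity(page_a, page_b)
print(ratio)  # well above a 0.95 duplicate threshold
```

The unique paragraphs differ entirely, but the shared navigation dominates the byte count, which is exactly the situation on these pages.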
For perspective, take a look at Google's cached versions of one of these pages. This is how googlebot sees the page: http://webcache.googleusercontent.com/search?q=cache:mdybPKIjOxUJ:www.fredaldous.co.uk/craft-shop/general-crafts.html+http://www.fredaldous.co.uk/craft-shop/general-crafts.html&hl=en&gl=us&strip=1
That, as we say, is a lot of links!
Since Panda, when I see a site with this many navigation links, I usually advise restructuring the site architecture into more of a pyramid shape, so that the overall navigation on each page is reduced.
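To get a quick sense of how heavy a page's navigation is before and after such a restructure, you can simply count the anchor tags in the source. A small sketch along the same lines (the sample markup is made up):

```python
from html.parser import HTMLParser

class LinkCounter(HTMLParser):
    """Counts every <a> start tag in a page."""
    def __init__(self):
        super().__init__()
        self.links = 0

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links += 1

counter = LinkCounter()
counter.feed("<nav>" + "<a href='/c'>x</a>" * 300 + "</nav><p>unique copy</p>")
print(counter.links)  # 300
```

Run against the text-only cached version linked above, a number like this makes the "that is a lot of links" point concrete.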
Hope this helps! Best of luck with your SEO.
-
It claims that this is one of the duplicate URLs:
http://www.f r e daldous.co.uk/photo-gift/design-led-gifts.html?manufacturer=436
Now I am confused, as that page is nowhere near a duplicate of the URL I posted first.
Can anyone explain this?
-
Hello Niall,
It seems that you have inserted the rel="canonical" tag in the correct spot. I think the software is flagging these as potential duplicates, which errs on the side of caution. I don't want to make a premature determination without knowing which 50 pages are showing up as duplicates; a deeper look would let me give you a more accurate answer.