Crawl Diagnostics showing 20k+ duplicate content errors due to session IDs
-
I signed up for the trial version of SEOmoz today just to check it out, as I have decided I'm going to do my own SEO rather than outsource it (I've been let down a few times!). So far I like the look of things and have a feeling I am going to learn a lot and get results.
However, I have just stumbled on something. After SEOmoz does its crawl diagnostics run on the site (www.deviltronics.com), it shows 20,000+ errors. From what I can see, almost 99% of these are flagged as duplicate content errors caused by session IDs, so I am not sure what to do!
I have done a "site:www.deviltronics.com" search on Google and it certainly doesn't pick up the session IDs/duplicate content. So could this just be an issue with the SEOmoz bot? If so, how can I get SEOmoz to ignore these on the crawl?
Can I get my developer to add some code somewhere?
Help will be much appreciated. Asif
-
Hello Tom and Asif,
First of all, Tom, thanks for the excellent blog post re: Google Docs.
We are also using the Jshop platform for one of our sites, and I am not sure whether it is working correctly in terms of SEO. I just ran an SEOmoz crawl of the site and found that every single URL in the list has a rel=canonical on it, even the ones with session IDs.
Here is an example - the same page with and without the session ID:
www.strictlybeautiful.com/section.php/184/1/davines_shampoo/d112a41df89190c3a211ec14fdd705e9
www.strictlybeautiful.com/section.php/184/1/davines_shampoo
As Asif has pointed out, the Jshop people say they have programmed it so that Google cannot pick up the session IDs. Firstly, is that even possible? And if I assume that's not an issue, what about the fact that every single page on the site has a rel=canonical link on it?
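If I understand canonical tags correctly, the session-id version should carry something like the following in its <head>, pointing back at the clean URL (this is a sketch based on the two example URLs above, not markup copied from the site):
<link rel="canonical" href="http://www.strictlybeautiful.com/section.php/184/1/davines_shampoo" />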
Any help would be much appreciated.
-
Asif, here's the page with the information on the SEOmoz bot.
-
Thanks for the reply, Tom. I spoke to our developer and he told me that the website platform (Jshop) does not show session IDs to the search engines, so we are OK on that side. However, as it doesn't recognise the SEOmoz bot, it shows it the session IDs. Do you know where I can find info on the SEOmoz bot, so we can see what it identifies itself as and add it to the list of recognised spiders?
Thanks
-
Hi Asif!
Firstly - I'd suggest that as soon as possible you address the core problem: the use of session IDs in the URL. There are not many upsides to the approach and there are many downsides. The fact that it doesn't show up with the site: command doesn't mean it isn't having a negative impact.
In the meantime, you should add a rel=canonical tag to all the offending pages, pointing to the URL without the session ID. Secondly, you could use robots.txt to block the SEOmoz bot from crawling pages with session IDs, but that may affect the bot's ability to crawl the site if all the links it is presented with contain session IDs - which takes us back around to fixing the core problem.
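To make that concrete, here are rough sketches of both (the URL and parameter name below are placeholders, not taken from your site, so adjust them to match how Jshop actually builds your URLs). Each session-id variant would carry a canonical tag in its <head> pointing at the clean, session-free URL:
<link rel="canonical" href="http://www.deviltronics.com/your-product-page" />
And a robots.txt rule aimed at the SEOmoz crawler, which identifies itself as rogerbot, might look like the following, assuming the session ID is appended as a query parameter called sessid (if Jshop appends it to the URL path instead, the pattern would need to match that format, and wildcard support varies between crawlers, so test it before relying on it):
User-agent: rogerbot
Disallow: /*?sessid=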
Hope this helps a little!
Related Questions
-
Crawl Diagnostics 2261 Issues with Our Blog
I just recently signed up for Moz - so much information! I've done the walkthrough and will continue learning how to use the tools, but I need your help. Our first Moz crawl indicated 2,261 issues (447 404s, 803 duplicate content, 11 502s, etc.). I've reviewed all of the crawl issues and they are linked to our Yahoo-hosted WordPress blog. Our blog is over 9 years old. The only issue that I'm able to find is that our categories are not set up correctly. I've searched for WordPress assistance on this topic and can't find any issues with our current category setup. Every category link that I click returns "Nothing Found - Apologies, but no results were found for the requested archive. Perhaps searching will help find a related post." http://site.labellaflorachildrensboutique.com/blog/ Any assistance is greatly appreciated.
Moz Pro | Girlstuff
-
Crawl diagnostics up to date after Magento ecommerce site crawl?
Howdy Mozzers, I have a Magento ecommerce website and I was wondering if the data (errors/warnings) in the Crawl Diagnostics is up to date. My Magento website has 2,439 errors, mainly 1,325 duplicate page content and 1,111 duplicate page title issues. I already implemented the Yoast meta data plugin that should fix these issues; however, I still see the errors appearing in the Crawl Diagnostics. When I go to a URL listed in the Crawl Diagnostics, e.g. http://domain.com/babyroom/productname.html?dir=desc&targetaudience=64&order=name, and search the source code for 'canonical', I do see <link rel="canonical" href="http://domain.com/babyroom/productname.html" />. I also checked the Google SERP for http://domain.com/babyroom/productname.html?dir=desc&targetaudience=64&order=name and couldn't find the URL indexed in Google, so it basically means the Yoast meta plugin actually worked. So what I was wondering is why I still see these errors counted in the Crawl Diagnostics? My goal is to remove all the errors and bring them down to zero in the Crawl Diagnostics. I am also still struggling with the "overly-dynamic URL" (1,025) and "too many on-page links" (9,000+) warnings. I want to measure whether I can bring the warnings down after implementing an AJAX-based layered navigation, but if they're not updating in the Crawl Diagnostics I have no idea how to measure the success of eliminating them. Thanks for reading and hopefully you can all give me some feedback.
Moz Pro | videomarketingboys
-
Why is my crawl STILL in progress?
I'm a bit new here, but we've had a few crawls done already. They are always finished by Wednesday night. Our website is not large (by any means), but the crawl still says it's in progress now 3 days later. What's the deal here?!?
Moz Pro | Kibin
-
Crawl Diagnostics shows thousands of 302s from a single URL. I'm confused
Hi guys, I just ran my first campaign and the crawl diagnostics are showing some results I'm unfamiliar with. In the warnings section it shows 2,838 redirects - this is where I want to focus. When I click there it shows 5 redirects per page. When I click on page 2, or next page, or any page other than page 1 for that matter, this is where things get confusing: nothing shows. Downloading the CSV reveals that 2,834 of these all show the same values:
URL: http://www.mydomain.com/401/login.php
url: http://www.mydomain.com/401/login.php
referrer: http://www.mydomain.com/401/login.php
location_header: http://www.mydomain.com/401/login.php
I guess I'm just looking for an explanation as to why it's showing so many to the same page and what possible actions can be taken on my part to correct it (if needed). Thanks in advance
Moz Pro | sethwb
-
90% of the sites we design are in WordPress, and the report brings up "duplicate content" errors. I presume this is down to a canonical error?
We are looking at getting the Agency version of SEOmoz and are based in the UK. Could you please tell me the best way to correct this issue, as it appears to be a problem with all our clients' websites? An example would be www.fsgenergy.co.uk. Would you also be able to suggest the best SEO plugin to use with SEOmoz? Many thanks, Paul
Moz Pro | KloodLtd
-
Crawl Test produced only 1 page
Hi, I recently submitted a crawl for www.cirrato.com using the SEOmoz Crawl Test tool. I have a lot of pages, but the crawl result shows only 1 page, which is the front page and nothing else. Does anyone know what this could mean or what the problem is?
Moz Pro | yusufcirrato
-
Blogger Duplicate Content? and Canonical Tag
Hello: I previously asked this question, but I would love to get more perspectives on this issue. In Blogger, an archive page and label page(s) are created for each main post. Firstly, does Google - especially considering Blogger is their product - possibly see the archive and label pages, created in addition to the main post, as partial duplicate content? The other dilemma is that each of these instances - main post, archive, label(s) - claims to be the canonical. Does anyone have any insight or experience with this issue in Blogger, and with how Google treats the partial duplicates and the competing canonical claims to the same content (even though the archive and label pages are only partial)? I do not see anything in Blogger settings that allows altering this - in fact, the only choices in Blogger settings are 'Email Posting' and 'Permissions' (could it be that I cannot see the other setting options because I am a guest and not the blog owner?). Thanks so much everyone! PS - I was not able to add the blog as a campaign in SEOmoz Pro, which in and of itself is odd - and which I've never seen before - could this be part of the issue? Are Blogger free blogs not able to be crawled for some reason via SEOmoz Pro?
Moz Pro | holdtheonion
-
How long does a crawl take?
A crawl of my site started on the 8th July & is still going on - is there something wrong???
Moz Pro | Brian_Worger