Crawl Diagnostics 2261 Issues with Our Blog
-
I just recently signed up for Moz, and there is so much information. I've done the walkthrough and will continue learning how to use the tools, but I need your help.
Our first Moz crawl indicated 2,261 issues (447 404s, 803 duplicate content, 11 502s, etc.). I've reviewed all of the crawl issues and they are linked to our Yahoo-hosted WordPress blog. Our blog is over 9 years old. The only issue that I'm able to find is that our categories are not set up correctly. I've searched for WordPress assistance on this topic and can't find anything wrong with our current category setup. Every category link that I click returns "Nothing Found. Apologies, but no results were found for the requested archive. Perhaps searching will help find a related post."
http://site.labellaflorachildrensboutique.com/blog/
Any assistance is greatly appreciated.
-
Go Dan!
-
While what Matt and CleverPHD (Hi Paul!) have said is correct, here's your specific issue:
Your categories are loading with "ugly" permalinks like this: http://site.labellaflorachildrensboutique.com/blog/?cat=175 (that loads fine)
But you are linking to them from the bottom of posts with the "clean" URLs --> http://screencast.com/t/RIOtqVCrs
The fix is that category URLs need to load with "clean" URLs, and the ugly ones should redirect to the clean ones.
Possible fixes:
- Try updating WordPress (I see you're on a slightly older version)
- See if your .htaccess file has been modified (ask a developer or your hosting company for help with this, perhaps)
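For reference, the rewrite block WordPress normally writes to .htaccess for a blog living at /blog/ looks roughly like this (a sketch of the default rules; your actual file may differ):

# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /blog/
RewriteRule ^index\.php$ - [L]
# If the request is not an existing file or directory, hand it to WordPress
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /blog/index.php [L]
</IfModule>
# END WordPress

If that block is missing or has been mangled, re-saving your permalink settings under Settings -> Permalinks will usually regenerate it.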
Found another linking issue:
This link to Facebook in your left sidebar --> http://screencast.com/t/EqltiBpM is just coded incorrectly. Because the URL has no http:// in front of it, the browser treats it as relative and appends it to the current page URL, so you get a link like http://site.labellaflorachildrensboutique.com/blog/category/unique-baby-girl-gifts/www.facebook.com/LaBellaFloraChildrensBoutique instead of your Facebook page: http://www.facebook.com/LaBellaFloraChildrensBoutique
You can probably fix that Facebook link in Appearance -> Widgets.
That one issue causes about 200 of your broken URLs.
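To illustrate what's going on (a simplified sketch of the widget markup, not your exact code):

<!-- Broken: no protocol, so browsers treat the href as a path relative to the current page -->
<a href="www.facebook.com/LaBellaFloraChildrensBoutique">Facebook</a>

<!-- Fixed: a fully qualified URL points to Facebook no matter which page it appears on -->
<a href="http://www.facebook.com/LaBellaFloraChildrensBoutique">Facebook</a>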
-
One other thing I forgot: this video by Matt Cutts explains why Google might show a link even though the page was blocked by robots.txt:
https://www.youtube.com/watch?v=KBdEwpRQRD0
Google really tries not to forget URLs, and this video reminds us that Google uses links not just for ranking but also for discovery, so you really have to pay attention to how you link internally. This is especially important for large sites.
-
Awesome! Thanks for straightening it out.
-
Yes, the crawler will avoid the category pages if they are in robots.txt. It sounded from the question like this person was going to remove or change the category organization, so you would have to do something with the old URLs (301 or noindex), and that is why I would not use robots.txt in this case: blocking crawling would keep Google from ever seeing those directives.
If these category pages had always been blocked using robots.txt, then this whole conversation is moot, as the pages would never have gotten into the index. It is when unwanted pages that you want to get rid of have already gotten into the index that things get a little tricky, but still workable.
I have seen cases where pages on a site got into the index and were ranking, but they were the wrong pages, so the person just blocked them with robots.txt. Those URLs continued to rank and caused problems for the canonical pages that should have been ranking. We had to unblock them, let Google see the 301s and rank the new pages, then put the old URLs back into robots.txt to prevent them from getting back into the index.
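For example, a single old URL can be 301'd in .htaccess with a line like this (hypothetical paths, just to show the idea):

# Send the old, unwanted URL to the page that should be ranking instead
Redirect 301 /blog/old-category/ http://site.labellaflorachildrensboutique.com/blog/new-category/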
Cheers!
-
Oh yeah, that's a great point! I've found that the category pages rarely rank directly, but you'll definitely want to double-check before outright blocking crawlers.
Just to check my own understanding, CleverPhD, wouldn't crawlers avoid the category pages if they were disallowed by robots.txt (presuming they obey robots.txt), even if the links were still on the site?
-
One wrinkle: if the category pages are in Google and potentially ranking well, you may want to 301 them to consolidate them into a more appropriate page (if that makes sense for your site). Or, if you just want to get them out of the index, use a meta robots noindex tag on the page(s) to have them removed from the index, and only then block them in robots.txt.
Likewise, you have to remove the links on the site that point to the category pages, to prevent Google from recrawling and reindexing them.
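A sketch of that de-indexing tag, added to the <head> of each category page you want removed (it has to stay crawlable until Google has seen it):

<!-- Tells Google to drop this page from the index while still following its links -->
<meta name="robots" content="noindex, follow">

Only after the pages have actually dropped out of the index would you add the robots.txt block.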
-
Category pages actually turn up as duplicate content in Crawl Diagnostics really often. It just means that those categories are linked somewhere on your site, and the resulting category pages look almost exactly like all the others.
Generally, I recommend you use robots.txt to block crawlers from accessing pages in the category directory. Once that's done and your campaign has re-crawled your site, then you can see how much of the problem was resolved by that one change, and consider what to do to take care of the rest.
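A minimal rule for that would look something like this (assuming your category archives live under /blog/category/, as they appear to):

User-agent: *
# Keep crawlers out of the category archive pages
Disallow: /blog/category/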
Does that make sense?