Who wants to help go over my crawl diagnostics via Skype?
-
I have run a crawl diagnostic on my site and have 194 errors, most of them 404 errors in WordPress. I'm not sure why, but many of my pages had name changes (possibly a permalinks issue), and I have no idea how to fix it.
- I had 5 duplicate page titles, and 1 title missing or empty.
- 72 crawl notices found (2 permanent redirects, 17 blocked by robots.txt, 53 rel=canonical)
- 19 crawl warnings were found
Who wants to have some fun?
-
Alright. Having over 100 errors and warnings had me worried, but it sounds like it's not that big of a deal. I did notice some issues where it looks like my blog page and some other pages are not being indexed. Any insight on this?
-
Your site seems to be in decent shape, based on the Screaming Frog crawl I ran:
- You've got 2 internal 404 errors, probably due to erroneous links placed in the code.
- Your duplicate titles don't look like a problem; canonical and noindex tags seem to have sorted the issue.
- The page with a missing title is .php?Action=sitemap, which doesn't seem like the end of the world.
The spider follows robots.txt.
All in all, I wouldn't worry about the warnings.
Mark
-
http://starkseo.com/downloads/
My Skype is dfwnerdherd.
-
Instead of Skype, why don't you download the PDF, upload it to your server, and then share the link? I'd like to take a look.
-
Zack, what's your site? I can run it through Screaming Frog, take a look at the sources of the 404 errors, and export the data for you, so you'll be able to go in and fix the links.
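If you want to spot-check the flagged URLs yourself without a full crawler, here's a minimal Python sketch using only the standard library. The example.com URLs are placeholders, and the status fetcher is injectable so the 404 filter can be tried without network access:

```python
import urllib.error
import urllib.request

def http_status(url: str) -> int:
    """HEAD-request a URL and return its HTTP status code."""
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        # 4xx/5xx responses raise; the code is still what we want to record.
        return err.code

def broken_links(urls, status=http_status):
    """Return the subset of urls that respond with 404."""
    return [u for u in urls if status(u) == 404]

# Placeholder URLs; substitute the links flagged in your crawl report.
# print(broken_links(["https://example.com/", "https://example.com/old-permalink"]))
```

Feeding it the source URLs exported from a crawl gives you a quick list of links that still need fixing or redirecting.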
-
Hi Zack,
Have you checked out our Help Hub, where we have a lot of information about the Crawl Diagnostics tool and a video that walks you through some of the features?
Related Questions
-
Block Moz (or any other robot) from crawling pages with specific URLs
Hello! Moz reports that my site has around 380 pages with duplicate content. Most of them come from dynamically generated URLs with specific parameters. I have sorted this out for Google in Webmaster Tools (the new Google Search Console) by blocking the pages with these parameters. However, Moz is still reporting the same number of duplicate content pages, and to stop it I know I must use robots.txt. The trick is that I don't want to block every page, just the pages with specific parameters. Among these 380 pages there are other pages with no parameters (or with different parameters) that I need to take care of; basically, I need to clean this list so I can use the feature properly in the future. I have read through the Moz forums and found a few related topics, but there is no clear answer on how to block only pages with specific URLs. So I have done my research and come up with these lines for robots.txt:

User-agent: dotbot
Disallow: /*numberOfStars=0

User-agent: rogerbot
Disallow: /*numberOfStars=0

My questions: 1. Are the above lines correct, and would they block Moz (dotbot and rogerbot) from crawling only pages that have the numberOfStars=0 parameter in their URLs, leaving other pages intact? 2. Do I need an empty line between the two groups, i.e. between "Disallow: /*numberOfStars=0" and "User-agent: rogerbot" (or does it even matter)? I think this would help many people, as there is no clear answer on how to block crawling of only pages with specific URLs. Moreover, this should be valid for any robot out there. Thank you for your help!
Moz Pro | Blacktie
-
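How a wildcard-aware crawler would interpret a pattern like /*numberOfStars=0 can be sketched in Python. This is an illustration of the common wildcard extension to robots.txt, not dotbot's or rogerbot's actual implementation (Python's stdlib robotparser does not handle *, so the pattern is translated by hand):

```python
import re

def robots_pattern_matches(pattern: str, path: str) -> bool:
    """Return True if a robots.txt Disallow pattern matches a URL path.

    Implements the common wildcard extension: '*' matches any run of
    characters and a trailing '$' anchors the end; otherwise matching
    is prefix-based, as in the original robots.txt convention.
    """
    regex = re.escape(pattern).replace(r"\*", ".*")
    if regex.endswith(r"\$"):
        regex = regex[:-2] + "$"
    return re.match(regex, path) is not None

# Only URLs containing the parameter are blocked...
print(robots_pattern_matches("/*numberOfStars=0", "/hotels?numberOfStars=0"))  # True
# ...pages without it (or with a different value) stay crawlable.
print(robots_pattern_matches("/*numberOfStars=0", "/hotels?city=paris"))       # False
```

Under this interpretation the two Disallow lines in the question would indeed block only the numberOfStars=0 URLs; keeping a blank line between User-agent groups is the conventional, safest formatting.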
Special Characters in URL & Google Search Engine (Index & Crawl)
G'day everyone, I need help understanding how special characters (e.g. é, ë, ô in words) impact SEO. Does anyone have good insights or reference material on how the Google search engine treats special characters? How are page titles and meta descriptions with special characters indexed and crawled? What are the best practices for URLs (use of Unicode vs. HTML entity references, and when to use which)? Is there any disadvantage to using special characters? Do special characters in a URL have any impact on SEO performance and the user's search experience? Thanks heaps, Amy
Moz Pro | LabeliumUSA
-
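On the URL side of the question above, non-ASCII characters are percent-encoded as UTF-8 bytes on the wire; a short Python sketch shows what a crawler actually receives (the café path is just an example):

```python
from urllib.parse import quote, unquote

# Non-ASCII characters in a URL path are transmitted as percent-encoded
# UTF-8 bytes, so "é" travels as "%C3%A9"; browsers usually display the
# decoded form in the address bar.
path = quote("/café-ëô")
print(path)           # /caf%C3%A9-%C3%AB%C3%B4
print(unquote(path))  # /café-ëô
```

Both forms identify the same resource; what matters for consistency is linking to one encoding everywhere rather than mixing raw and encoded variants.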
How to do link building? Why does SEOmoz have no such tool, and why does it not crawl on a daily basis?
How can I do link building? Why does SEOmoz have no link-building tool? Also, why does it not crawl on a daily basis?
Moz Pro | mrgunii
-
New on SEOmoz and need help regarding Panda recovery.
Hello dear members, I am new not only to SEOmoz but to the SEO world as well. I have been learning and implementing SEO for the last 5-6 months and got my website onto page 1 of Google.com, and I was quite happy with all the results. I recently got AdSense approved on my site and was quite happy with the earnings as well. Unfortunately, the most recent Panda update has affected most of my SEO efforts from the last 5-6 months. My rankings are affected a lot, and so are my earnings from AdSense. My rankings are affected in this manner: a couple of keywords that were in the top 3 spots (page 1 of Google.com) are now at positions 7-8 (page 1 of Google.com), and 3-4 pages that were previously at spots 5-6 (page 1 of Google.com) are now at spots 15-16 (page 2 of Google.com). Now I want to ask: what tools from SEOmoz can help me regain my lost positions in Google? Secondly, what type of (backlink) strategy can help me regain the lost rankings? Best regards, Sam
Moz Pro | sampaul549
-
Crawl Diagnostics returning duplicate content based on session id
I'm just starting to dig into Crawl Diagnostics and it is returning quite a few errors. Primarily, the crawl is indicating duplicate content (page titles, meta tags, etc.) because of a session ID in the URL. I have set up a URL parameter in Google Webmaster Tools to help Google recognize this session ID. Is there any way to tell the SEOmoz spider the same thing? I'd like to get rid of these errors, since I've already handled them for the most part.
Moz Pro | csingsaas
-
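One site-side workaround for the session-ID variants described above (a sketch, not an SEOmoz feature) is to strip the session parameter when generating rel=canonical URLs, so every session variant points at one canonical page. The parameter name "sessionid" here is an assumption; use whatever your site appends:

```python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

# "sessionid" is a placeholder; list whichever parameters your site appends.
IGNORED_PARAMS = {"sessionid"}

def canonical_url(url: str) -> str:
    """Drop ignored query parameters so session variants share one canonical URL."""
    parts = urlsplit(url)
    query = [(k, v) for k, v in parse_qsl(parts.query) if k not in IGNORED_PARAMS]
    return urlunsplit((parts.scheme, parts.netloc, parts.path, urlencode(query), ""))

print(canonical_url("https://example.com/page?sessionid=abc123&color=red"))
# https://example.com/page?color=red
```

Emitting that cleaned URL in each page's rel=canonical tag lets any crawler, Roger included, collapse the session duplicates on its own.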
The "Crawl Diagnostics Summary" tells me I have duplicate content
The "Crawl Diagnostics Summary" reports duplicate content, but they are really the same pages with and without the www. What can I do to clear this error?
Moz Pro | arteweb2
-
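The standard fix for the www/non-www duplication described above is a site-wide 301 redirect to a single hostname, for example via an Apache .htaccess rule (a sketch assuming Apache with mod_rewrite enabled; example.com is a placeholder for your own domain):

```apache
RewriteEngine On
# Redirect the www hostname to the bare domain with a permanent 301.
RewriteCond %{HTTP_HOST} ^www\.example\.com$ [NC]
RewriteRule ^(.*)$ https://example.com/$1 [R=301,L]
```

Swap the condition and target to redirect the bare domain to www instead, depending on which version you want to keep; once one hostname answers for everything, the duplicate-content warnings should clear on the next crawl.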
Our Duplicate Content Crawled by SEOMoz Roger, but Not in Google Webmaster Tools
Hi guys, we're new here and I couldn't find the answer to my question, so here it goes: we had SEOmoz's Roger crawl all of our pages, and he came up with quite a few errors (duplicate content, duplicate page titles, long URLs). Per our CTO, and using Google Webmaster Tools, we informed Google not to index those duplicate content pages. The long-URL errors are redirected to SEF URLs. What we would like to know is whether Roger can tell that we have instructed Google not to index these pages. My concerns: 1. Should we still be worried if Roger is crawling those pages even though the errors are not showing up in our Webmaster Tools? 2. Is there a way to let Roger know, so they don't come up as errors in our SEOmoz tools? Thanks so much, e
Moz Pro | RichSteel
-
Unsubscribing from weekly crawl notifications never works
Hello! All of my campaigns have the 'Weekly crawl completed for campaign ...' box unticked under Campaign Settings, yet for all of them I still regularly receive an email with the subject 'New crawl completed for ...'. How do I stop this? Is there a bug here? Adam Bishop
Moz Pro | arbishop