Understanding the actions needed from a Crawl Report
-
I've just joined SEOmoz last week and haven't even received my first full crawl yet, but as you know, I do get the re-crawl report. It shows I have 50 301s and 20 rel canonicals. I'm still very confused about what I'm supposed to fix. All of the rel canonicals are my site's main pages, so I'm equally confused about what the canonical tag is doing and how to set up my site properly. I'm a technical person and can grasp most things fairly quickly, but on this one the light bulb is taking a little while longer to fire up.
If my question wasn't total gibberish and you can help shed some light, I would be forever grateful.
Thank you.
-
Thanks, Charles. I'm really happy with him.
-
Thanks Woj - it helps... a little :). SEO is definitely a journey.
On another note, I just read the post on your company website about your process of developing the Kwasi robot logo. Very interesting read; I enjoyed it.
-
The 301s are warnings and could be in place for a reason. You can also download a spreadsheet with all the crawl findings, which is really useful.
Generally: fix all the errors (in red), if any; fix warnings as required; and examine the notices.
For example, I have a site with 100+ canonicals, all fine, plus a couple of warnings (titles too long, but only over by 1 or 2 characters).
Hope that helps a little
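If you want to see exactly which URL a given page declares as its canonical (the tag the crawl report is counting), you can check it yourself. Here is a minimal Python sketch using only the standard library; the page markup and example.com URL are just hypothetical placeholders:

```python
from html.parser import HTMLParser

class CanonicalParser(HTMLParser):
    """Records the href of the first <link rel="canonical"> tag seen."""
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        if tag == "link" and self.canonical is None:
            attr_map = dict(attrs)
            if (attr_map.get("rel") or "").lower() == "canonical":
                self.canonical = attr_map.get("href")

def find_canonical(html):
    """Return the canonical URL declared in the page, or None."""
    parser = CanonicalParser()
    parser.feed(html)
    return parser.canonical

# Hypothetical page source, e.g. fetched with urllib.request
page = '<html><head><link rel="canonical" href="http://example.com/page"></head></html>'
print(find_canonical(page))  # -> http://example.com/page
```

If the canonical on each of your main pages points to that same page's own URL, it's a self-referencing canonical, which is normal and nothing to fix.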