Understanding the actions needed from a Crawl Report
-
I've just joined SEOmoz last week and haven't even received my first full crawl yet, but as you know, I do get the re-crawl report. It shows I have 50 301s and 20 rel canonicals. I'm still very confused as to what I'm supposed to fix. And all of the rel canonicals are my site's main pages, so I'm equally confused about what the canonical is doing and how to set up my site properly. I'm a technical person and can grasp most things fairly quickly, but on this one the light bulb is taking a little while longer to fire up.
If my question wasn't total gibberish and you can help shed some light, I would be forever grateful.
Thank you.
-
Thanks Charles, I'm really happy with him.
-
Thanks Woj - it helps... a little :). SEO is definitely a journey.
On another note, I just read the post on your company website regarding your process of developing the Kwasi robot logo - very interesting read, I enjoyed it.
-
The 301s are warnings and could be in place for a reason. You can also download a spreadsheet with all the crawl findings - it's really useful.
Generally: fix all the errors (in red), if any; fix warnings as required; and examine the notices.
For example, I have a site that has 100+ canonicals - all fine - and a couple of warnings (titles too long, but only over by 1 or 2 characters).
Hope that helps a little
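Once you have that crawl spreadsheet, a few lines of Python can help triage it by severity. This is only a sketch - the column names ("URL", "Issue", "Severity") are assumptions for illustration, so adjust them to match the actual export:

```python
import csv
import io

# Sample rows in the rough shape of a crawl export; the column names
# and values here are hypothetical, for illustration only.
sample = """URL,Issue,Severity
https://example.com/old-page,301 Redirect,Warning
https://example.com/,Rel Canonical,Notice
https://example.com/missing,404 Not Found,Error
"""

def group_by_severity(csv_text):
    """Bucket crawl findings so errors can be triaged first."""
    buckets = {"Error": [], "Warning": [], "Notice": []}
    for row in csv.DictReader(io.StringIO(csv_text)):
        buckets.setdefault(row["Severity"], []).append(row["URL"])
    return buckets

buckets = group_by_severity(sample)
print(buckets["Error"])  # fix these first, then review warnings and notices
```

The point is just to separate "must fix" (errors) from "review as needed" (warnings and notices), which mirrors the advice above.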
Related Questions
-
Google Search Console Still Reporting Errors After Fixes
Hello, I'm working on a website that was too bloated with content. We deleted many pages and set up redirects to newer pages. We also resolved an unreasonable number of 400 errors on the site, and I removed several ancient sitemaps listing content deleted years ago that Google was still crawling. According to Moz and Screaming Frog, these errors have been resolved. We've submitted the fixes for validation in GSC, but the validation repeatedly fails. What could be going on here? How can we resolve these errors in GSC?
Technical SEO | tif-swedensky
-
Do I need a separate robots.txt file for my shop subdomain?
Hello Mozzers! Apologies if this question has been asked before, but I couldn't find an answer, so here goes... Currently I have one robots.txt file hosted at https://www.mysitename.org.uk/robots.txt. We host our shop on a separate subdomain, https://shop.mysitename.org.uk. Do I need a separate robots.txt file for my subdomain? (Some Google searches are telling me yes and some no, and I've become awfully confused!)
Technical SEO | sjbridle
-
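For what it's worth, crawlers request robots.txt separately for each hostname, so https://shop.mysitename.org.uk would need its own file served at its root if you want the shop to have any directives at all. A minimal illustrative sketch - the Disallow path here is hypothetical, not a recommendation:

```
# Served at https://shop.mysitename.org.uk/robots.txt
User-agent: *
Disallow: /checkout/

Sitemap: https://shop.mysitename.org.uk/sitemap.xml
```

The file at www.mysitename.org.uk/robots.txt has no effect on the shop subdomain either way.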
Pagination when not needed
Hello Moz, Odd one for you today. I've got a site that has pagination (rel=next / prev), but it's not being used correctly. I'll give you some examples: let's assume it's a 5-page site with a home page, about us, etc. The home page has a rel="next" tag on it leading to the next tab (About Us), and this chain goes all the way down to the final page (Contact Us). Normally you use these tags for paginated series, e.g. pages 1-5, but how much harm do they do when used in the way above? I'm thinking site structure. Just to add, there is no view-all page either, though that would make no sense given how the tags are being used. Normally I would just remove them, but the client wants to know why, and I wanted to articulate it better than "because it's wrong." As always, Moz - thanks!
Technical SEO | GPainter
-
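For comparison, the conventional use of these tags is to link the members of a single paginated series together, not to chain unrelated pages. A sketch with hypothetical URLs (note that search engines have changed how much weight they give these hints over the years):

```html
<!-- On a middle page of a paginated series, e.g. /articles?page=2 -->
<link rel="prev" href="https://example.com/articles?page=1">
<link rel="next" href="https://example.com/articles?page=3">
```

Chaining Home → About → Contact this way tells crawlers these are sequential parts of one document series, which they are not.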
Blogspot domains - giving me a manual action
So some agency did horrendous article submissions en masse in '08/'09. Since then I have been tidying this up by manually getting the domains removed from our backlink profile. Some, however, I just cannot get rid of. The recent Penguin update obviously penalised me for this, so I disavowed the rest I could not remove and filed a reconsideration request. The reply from Google was that the site still violates guidelines, and it used 3 Blogspot domains (which no crawler I used had previously found) as examples. There is no one at Google to contact about this, and the sites are abandoned, so they just sit there doing damage. I will of course add these to the disavow file, but can I disavow the whole of blogspot.com? What if they are all in the disavow file but Google still uses them against us in the reconsideration request, and I cannot remove them because there is no one to contact at Google? Really appreciate the help - thanks. Two years of hell tidying up bad agency work!
Technical SEO | pauledwards
-
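On the disavow question: the domain: directive in a disavow file covers every URL on that host, and each Blogspot blog lives on its own subdomain, so listing the individual blogs is far narrower (and safer) than disavowing all of blogspot.com. A sketch with hypothetical hostnames:

```
# disavow.txt - each domain: line covers the whole host
domain:spammy-articles.blogspot.com
domain:old-article-dump.blogspot.com
```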
Do I need Redirects?
I've recently changed my old static website to a WordPress one. I'd like to know what to do (if anything) about my old links. For example, a page on my old site was www.iainmoran.com/corporate-magician.html - now that I'm using WordPress, the URL is www.iainmoran.com/corporate-magician/. My question is: do I need to set up redirects on these old pages (which no longer exist), or will Google eventually re-crawl my site and update the links itself? I'm using the Yoast SEO plugin for WP, and it creates a sitemap, which of course will have my new pages on it. But I don't want Google to penalise me for having broken links, etc. Many thanks, Iain.
Technical SEO | iainmoran
-
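For reference, a one-line permanent redirect per old page is a common way to handle this on Apache. A hedged sketch, assuming the old static site and the new WordPress site share the same domain and you can edit .htaccess:

```
# .htaccess - map the old static URL to the new WordPress permalink
Redirect 301 /corporate-magician.html /corporate-magician/
```

This both preserves any link equity the old URL had and stops visitors from hitting a 404.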
JavaScript - can search engines crawl it?
I have a couple of nested divs. I'd like to do an onclick="location.href='http://www.example.com';" within the outermost div so that all content within will link to one URL. Can the search engines crawl this? Thanks!
Technical SEO | Morris77
-
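As a point of comparison, a plain anchor is the one pattern crawlers reliably follow; a link assigned via an onclick handler on a div is not guaranteed to be discovered. One hedged alternative, since HTML5 permits block-level content inside an anchor:

```html
<a href="http://www.example.com">
  <div class="outer">
    <div class="inner">All of this content links to one URL.</div>
  </div>
</a>
```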
Moz Crawl Reporting Duplicate content on "template" styled pages
We have a lot of detail pages on our site that reference specific scholarships. Each page has a different title and description, and each has unique information covering the same data points. The pages are displayed in a similar structure so the data is easy for the user to read. My problem is that a lot of these pages are being reported as duplicate content when they certainly are not. Most of them are reported as duplicates when they have the same sponsor, and they may have the same contact information listed. These two are being reported as duplicates of each other. They share some data, but they are definitely different scholarships. http://www.collegexpress.com/scholarships/adelaide-mcclelland-garden-club-scholarship/9254/ http://www.collegexpress.com/scholarships/mary-wannamaker-witt-and-lee-hampton-witt-memorial-scholarship/10785/ Would it help to add a self-referencing canonical to each page? Any other suggestions would be great. Thanks
Technical SEO | GeorgeLaRochelle
-
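On the self-canonical idea, the tag itself is just a link element in the head of each page pointing at that page's own URL (this example uses the first URL from the question):

```html
<link rel="canonical" href="http://www.collegexpress.com/scholarships/adelaide-mcclelland-garden-club-scholarship/9254/">
```

It asserts each URL as its own preferred version, which can help when near-identical templates make pages look like duplicates.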
Site maintenance and crawling
Hey all, Rarely, but sometimes, we need to take down our site for server maintenance, upgrades, or various other system/network reasons. More often than not this downtime is avoidable and we can redirect or eliminate the client-side downtime. We have a "down for maintenance - be back soon" page that is client-facing, and outages are often no more than an hour tops. My question is: if the site is crawled by Bing/Google while it is down, what is the best way of ensuring the indexed links are not refreshed with this maintenance content (i.e. this is what the pages look like now, so this is what the search engine will index)? I was thinking of adding a crawl block (Disallow) to robots.txt for the period of downtime and removing it once we're back up, but will this potentially affect results as well?
Technical SEO | Daylan
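One pattern worth mentioning for planned downtime is answering every request with HTTP 503 plus a Retry-After header, which signals that the outage is temporary so crawlers come back later instead of indexing the maintenance page. A hypothetical Apache sketch (assumes mod_rewrite and mod_headers are enabled):

```
# During maintenance: serve everything as 503 with a retry hint
ErrorDocument 503 /maintenance.html
RewriteEngine On
RewriteCond %{REQUEST_URI} !=/maintenance.html
RewriteRule ^ - [R=503,L]
Header always set Retry-After "3600"
```

A robots.txt change, by contrast, can take time to be re-fetched and may not be lifted promptly when the site comes back.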