Understanding the actions needed from a Crawl Report
-
I've just joined SEOmoz last week and haven't even received my first full crawl yet, but as you know, I do get the re-crawl report. It shows I have 50 301s and 20 rel canonicals. I'm still very confused as to what I'm supposed to fix. And all of the rel canonicals are my site's main pages, so I'm equally confused as to what the canonical is doing and how to properly set up my site. I'm a technical person and can grasp most things fairly quickly, but on this one the light bulb is taking a little while longer to fire up.
If my question wasn't total gibberish and you can help shed some light, I would be forever grateful.
Thank you.
-
Thanks Charles, I'm really happy with him.
-
Thanks Woj - it helps... a little :). SEO is definitely a journey...
On another note, I just read the post on your company website regarding your process of developing the Kwasi robot logo - very interesting read, I enjoyed it.
-
The 301s are warnings and could be in place for a reason. You can also download a spreadsheet with all the crawl findings, which is really useful.
Generally: fix all the errors (in red), if any; fix warnings as required; and examine the notices.
For example, I have a site that has 100+ canonicals - all fine - and a couple of warnings (titles too long, but only over by 1 or 2 characters).
Hope that helps a little.
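To make the canonical piece concrete: a rel=canonical tag is just a hint in a page's head telling search engines which URL is the preferred version of that page, so seeing one on each of your main pages is normal. A minimal sketch (example.com and the paths are placeholders):

```html
<!-- Placed in the <head> of every variant of a page
     (e.g. /widgets, /widgets?sort=price, /widgets?ref=nav),
     all pointing at the single preferred URL: -->
<link rel="canonical" href="https://www.example.com/widgets/" />
```

As the answer above suggests, canonicals listed in the crawl report are usually notices that the tag is present, not problems to fix.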
Related Questions
-
Google not crawling the website since 22nd October
Hi, this is Suresh. I made changes to my website and I see that Google has been unable to crawl it since 22nd October. It is not even showing any content when I use cache:www.vonexpy.com. Can anybody help me understand why Google is unable to crawl my website? Is there a technical issue with it? The website is www.vonexpy.com. Thanks in advance.
Technical SEO | sureshchowdary
-
Ignore these external links reported in GWT?
Taking a long, Ace-Ventura-like breath here. This question is loaded. Here we go:

- No manual actions reported against my client's site in GWT.
- HOWEVER, a link: operator search for external links to my client's website shows NO links in results. That seems like a very bad omen to me.
- OSE shows 13 linking domains, but not the one listed in the next bullet point.
- The issue: in Google Webmaster Tools, I noticed 1,000+ external links to my client's website all coming from riddim-donmagazine.com (there are a small handful of other domains listed, but this one stuck out for the large quantity of links coming from it).
- Those external links all point to two URLs on my client's website.
- I have no knowledge of any campaigns run by the client that would use this other domain (or any schemes, for that matter).
- It appears that riddim-donmagazine.com has been suspended by HostGator.
- All of the links were first discovered last year (dates vary, but basically August through December 2013).
- There have not been any newly discovered links from this website reported by GWT since those 2013 dates.
- All of the external links are /?-based. Example: http://riddim-donmagazine.com/?author=1&paged=31
- If I run the link in the preceding bullet point (or any others from riddim-donmagazine.com) through http://www.webconfs.com/http-header-check.php, those external links return a 302 status.

My best guess is that at one time the client was running an advertising program and this website may have been on that network. One of the external links points to an ad page on the client's website.
(web.archive.org confirms this is a WordPress site and that its coverage of Bronx news could trigger an ad for my client or make it related to my client's website when it comes to demographics.) Believe me, this externally linked domain is only a small problem in comparison with the rest of my client's issues (mainly they've changed domains, then they changed website vendors, etc.), but I did want to ask about that one externally linked domain. Whew. Thanks in advance for insights/thoughts/advice!
Technical SEO | EEE3
-
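The header check in the question above can also be reproduced locally. A minimal Python sketch (stdlib only) that fetches a URL with a single request, so any 301/302 is reported rather than silently followed; the commented URL is the one from the question:

```python
import http.client
from urllib.parse import urlsplit

def redirect_status(url, timeout=10):
    """Fetch `url` with one GET (http.client never follows redirects)
    and return (status_code, Location header or None)."""
    parts = urlsplit(url)
    conn_cls = (http.client.HTTPSConnection if parts.scheme == "https"
                else http.client.HTTPConnection)
    conn = conn_cls(parts.netloc, timeout=timeout)
    path = parts.path or "/"
    if parts.query:
        path += "?" + parts.query
    conn.request("GET", path, headers={"User-Agent": "status-check"})
    resp = conn.getresponse()
    status, location = resp.status, resp.getheader("Location")
    conn.close()
    return status, location

# e.g. redirect_status("http://riddim-donmagazine.com/?author=1&paged=31")
# would report the 302 and the URL it points at.
```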
Sitemap as Referrer in Crawl Error Report
I have just downloaded the SEOmoz crawl error report, and I have a number of pages listed which all show FALSE. The only common denominator is the referrer: the sitemap. I can't find anything wrong. Should I be worried that this is appearing in the error report?
Technical SEO | ChristinaRadisic
-
Need some help: PageRank N/A
Hi all! We have a small e-commerce site, www.ruggedpcstore.com, built on WordPress. The home page has a PR of 3, our About page is a PR 0, and all other pages are PR N/A. We've been pulling our hair out trying to figure out what's wrong. Any ideas?
Technical SEO | CsmBill
-
HTTP vs HTTPS and Google crawling and indexing?
Is it true that HTTPS pages are not crawled and indexed by Google and other search engines as well as HTTP pages are?
Technical SEO | sherohass
-
Need advice on having customer stores running on my subdomain
We have an online store product and we're working on the SEO for our new domain (foo.com in this example). Our customers have the ability to change the domain of their store, but many of them will likely stick with the subdomains we give them (store1.foo.com). We could soon have thousands of stores using our subdomains. Each of these stores will have a very small link at the bottom to our own domain, but other than this, the content is completely user-generated and not under our control. Are there risks/problems associated with this type of strategy? If so, could we perhaps avoid them by using robots.txt to block the entire site until they change to their own domain? TIA, Sean
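The robots.txt idea in the question would look something like this. Because robots.txt applies per host, each store subdomain can serve its own blocking file until the customer moves to their own domain (a sketch; store1.foo.com is the example subdomain from the question):

```
# Served at http://store1.foo.com/robots.txt
# while the store still lives on the shared subdomain
User-agent: *
Disallow: /
```

Once a store moves to its own domain, that host simply serves a normal, permissive robots.txt instead.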
Technical SEO | schof
-
Understanding page and subdomain metrics in OSE
When using OSE to look at the **Total External Links** of a website's homepage, I don't understand why the page and subdomain metrics are so different. For example, privacy.net has 1,992 external links at the page level and 55,371 at the subdomain level. What subdomain? www redirects to privacy.net. And they have 56,982 at the root domain level - does that mean they have around 55k deep links, or what?
Technical SEO | meterdei
-
On-Page Report Card, rel canonical
My site has rel canonical tags set up. The developers say they are set up correctly, and looking at the source code myself, it looks (to my untutored eyes) to be set up correctly. However, for every page I have checked, the On-Page Report Card says the tag doesn't point to the right page. I'd really like to change all my B's to A's, but I simply can't see what the issue is.
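One way to see for yourself which URL a page's canonical tag actually points to, independent of any report card, is a minimal stdlib Python sketch (the sample HTML in the usage note is hypothetical):

```python
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Collect the href of the first <link rel="canonical"> tag."""
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        if tag == "link" and self.canonical is None:
            attrs = dict(attrs)
            if attrs.get("rel") == "canonical":
                self.canonical = attrs.get("href")

def find_canonical(html):
    """Return the canonical URL declared in an HTML document, or None."""
    parser = CanonicalFinder()
    parser.feed(html)
    return parser.canonical
```

Compare the returned href against the exact URL being graded: a mismatch as small as http vs https, www vs non-www, or a missing trailing slash can make a checker report that the canonical points to the wrong page.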
Technical SEO | Breakout