Google Webmaster Tools vs SEOmoz Crawl Diagnostics
-
Hi guys, I was just looking over my weekly report and crawl diagnostics. What I've noticed is that the data gathered by SEOmoz is different from Google Webmaster Tools diagnostics. The number of errors — in particular duplicate page titles, duplicate content, and pages not found — is much higher than what Google Webmaster Tools reports. I'm a bit confused and don't know which data is more accurate. Please help!
-
I had this on the last crawl. There was no mention of it in the one before, and I haven't changed anything in between, so it could be a glitch?
-
I've been having similar issues, but more specifically, SEOmoz is reporting duplicate content that I can't even locate manually. I'm hoping to see something different with the next crawl, but it has been pretty confusing over the last couple of crawls.
-
I always make sure to manually check things on a regular basis as well as using tools. Tools are great, but like Steve said, this is always going to be the case with software, especially in an industry as unregulated as ours.
Try to use the tools as a means of directing your attention to items that need it, rather than trusting them implicitly and using them as a checklist.
And again, Steve is correct in that Google's Webmaster Tools (as great as it is) can be a little dated. It will sometimes show errors that were cleaned up a while ago. And keep in mind that SEOmoz tools are created within the context of SEO interests: Google will just show technical errors, while SEO tools will show those plus other warnings, based on experience, about things that will impact your rankings.
-
This will always be the case... there are different sources and different interpretations of what constitutes an error. I wouldn't go thinking that just because GWT is Google's it's the most accurate, though. Google has a lot of other business to be thinking about and is unlikely to be too concerned if everything isn't perfect in its free GWT product, so I doubt it takes priority over their other services. Whereas Moz, of course, is more focused on that particular area. I would just combine the two and iron out as much as you can from each... you'll never get them to match up, though.
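For what it's worth, the "combine the two" step can be as simple as set operations over the two exported URL lists. A minimal sketch — the URLs are invented, and the assumption that each tool's export reduces to one affected URL per row is mine:

```python
# Hypothetical URL lists, as you might load them from the GWT and
# SEOmoz crawl-error CSV exports (one affected URL per row).
gwt_errors = {"/old-page", "/dup-title-a", "/missing"}
moz_errors = {"/dup-title-a", "/dup-title-b", "/missing"}

both = gwt_errors & moz_errors      # flagged by both: fix these first
only_moz = moz_errors - gwt_errors  # often template/SEO-level warnings
only_gwt = gwt_errors - moz_errors  # may be stale GWT data

print(sorted(both))      # ['/dup-title-a', '/missing']
print(sorted(only_moz))  # ['/dup-title-b']
```

Working from the intersection first is just a prioritization heuristic: a URL both tools complain about is the least likely to be a tool quirk.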
Related Questions
-
Can Google Crawl This Page?
I'm going to have to post the page in question, which I'd rather not do, but I have permission from the client to do so.

Question: a recruitment client of mine had their website built on a proprietary platform by a so-called recruitment specialist agency. Unfortunately the site is not performing well in the organic listings. I believe the culprit is this page and others like it: http://www.prospect-health.com/Jobs/?st=0&o3=973&s=1&o4=1215&sortdir=desc&displayinstance=Advanced Search_Site1&pagesize=50000&page=1&o1=255&sortby=CreationDate&o2=260&ij=0

Basically, as soon as you deviate from the top-level pages you land on pages that have database-query URLs like this one. My take on it is that Google cannot crawl these pages and is therefore having trouble picking up all of the job listings. I have taken some measures to combat this, and obviously we have an XML sitemap in place, but it seems the pages that Google finds via the XML feed are not performing because there is no obvious flow of 'link juice' to them.

There are a number of latest jobs listed on top-level pages like this one: http://www.prospect-health.com/optometry-jobs and when they are picked up they perform OK in the SERPs, which is the biggest clue to the problem outlined above.

The agency in question has an SEO department who dispute the problem, and their proposed solution is to create more content and build more links (genius!). Just looking for some clarification from you guys, if you don't mind.
Technical SEO | | shr1090 -
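One common way to tame database-query URLs like the one above is to whitelist the few parameters that actually change the content and canonicalize everything else away (via rel="canonical" or server-side redirects). A rough stdlib sketch of the URL-normalization part — the parameter whitelist here is purely hypothetical, and which parameters really matter is a judgment call for whoever knows the platform:

```python
from urllib.parse import urlsplit, parse_qsl, urlencode, urlunsplit

# Parameters assumed to change page content; everything else
# (sort order, page size, display options, ...) is dropped.
CONTENT_PARAMS = {"page", "st"}  # hypothetical whitelist

def canonical_url(url: str) -> str:
    """Return a canonical form of a database-query URL by keeping
    only whitelisted parameters, sorted for stability."""
    parts = urlsplit(url)
    kept = sorted((k, v) for k, v in parse_qsl(parts.query)
                  if k in CONTENT_PARAMS)
    return urlunsplit((parts.scheme, parts.netloc, parts.path,
                       urlencode(kept), ""))

url = ("http://www.prospect-health.com/Jobs/?st=0&o3=973&s=1"
       "&sortdir=desc&pagesize=50000&page=1&sortby=CreationDate")
print(canonical_url(url))
# → http://www.prospect-health.com/Jobs/?page=1&st=0
```

Collapsing the sort/display variants onto one canonical URL per result set is what lets link equity pool on a single crawlable page instead of being spread across thousands of parameter permutations.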
Google Site Search
I'm considering implementing the Google Site Search bar on my site.
Technical SEO | | JonsonSwartz
I think I'll probably choose the version without the ads (I'll pay for it). Does anyone use Google Site Search and can tell me whether it's a good thing? Does it affect SEO in any way? Thank you -
403s vs 404s
Hey all, I recently launched a new site on S3, and old pages that I haven't been able to redirect yet are showing up as 403s instead of 404s. Is a 403 worse than a 404? They're both just basically dead ends, right? (I have read the status code guides, yes.)
Technical SEO | | danny.wood1 -
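On the S3 side specifically: as I understand it, S3 answers 403 rather than 404 for a missing key when the anonymous requester lacks list permission on the bucket, so the fix is usually a bucket-permissions change rather than anything in the pages themselves. For auditing the old URLs once that's sorted, a small stdlib sketch (which URLs you feed it is up to you):

```python
import urllib.error
import urllib.request

def status_of(url: str) -> int:
    """Return the HTTP status code for a URL via a HEAD request."""
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code

# Both 403 and 404 are dead ends for a visitor, but crawlers treat
# them differently: 404/410 signal "gone, drop it from the index",
# while 403 ("forbidden") is ambiguous about whether content exists.
DROP_FROM_INDEX = {404, 410}

def should_be_404(code: int) -> bool:
    """True if a removed page is answering with something other than
    the 404/410 a crawler expects for deleted content."""
    return code not in DROP_FROM_INDEX
```

Running `status_of` over the list of retired URLs and flagging anything where `should_be_404` is true gives a quick to-fix list for the remaining redirects.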
How to rank in Google Places
Normally I don't have a problem with local SEO (I'm more of a multi-channel sort of online marketing guy), but this one has got me scratching my head. Look at https://www.google.co.uk/search?q=wedding+venues+in+essex There are two websites there (Fennes and Quendon Park) that both have a much more powerful DA but don't appear in Google Places (Google+ Business, or whatever it's labelled as). Why are websites such as Boreham House ranking top in the map listings? Quendon Park has a Google Places listing, it's full of content, and the NAP all matches up. It's a stronger website, and Boreham House isn't any closer to the centroid than Quendon Park. This one has just got me struggling.
Technical SEO | | jasonwdexter0 -
SEOMoz Crawler vs Googlebot Question
I read somewhere that SEOmoz's crawler marks a page in its Crawl Diagnostics as duplicate content if it doesn't have more than 5% unique content. (I can't find that statistic anywhere on SEOmoz to confirm, though.) We are an eCommerce site, so many of our pages share the same sidebar, header, and footer links. The pages flagged by SEOmoz as duplicates have these same links, but they have unique URLs and category names. Because they're not actual duplicates of each other, canonical tags aren't the answer. Also, because inventory might automatically come back in stock, we can't use 301 redirects on these "duplicate" pages. It seems like it's the sidebar, header, and footer links that are causing these pages to be flagged as duplicates. Does the SEOmoz crawler mimic the way Googlebot works? Also, is Googlebot smart enough not to count the sidebar and header/footer links when looking for duplicate content?
Technical SEO | | ElDude0 -
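The 5% figure in the question is unconfirmed, but you can approximate that kind of check yourself: compare two flagged pages and measure how much of each is unique once the shared template dominates. A rough sketch using stdlib `difflib` — the sample texts are invented, and real tools almost certainly use a more sophisticated similarity measure than this:

```python
from difflib import SequenceMatcher

def unique_ratio(page_a: str, page_b: str) -> float:
    """Fraction of the text that does NOT match between two pages
    (1.0 = completely unique, 0.0 = identical)."""
    return 1.0 - SequenceMatcher(None, page_a, page_b).ratio()

template = "Home | Shop | Cart | ... footer links, sidebar links ..."
page_a = template + " Blue widget, 3-inch, steel."
page_b = template + " Red widget, 5-inch, brass."

# The shared header/footer/sidebar dominates, so the pages score as
# near-duplicates even though the product copy itself differs.
print(round(unique_ratio(page_a, page_b), 2))
```

If your real pages score like this pair, the question for the tool (and for Googlebot) is exactly the one asked above: does it compare whole pages, or extract the main content block first?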
Does anyone know if SEOmoz has updated its site crawl?
I believe my site www.breeze-air.com was hit by Penguin. I found that I had unnatural anchor text and was able to remove around 1,200 of the 1,900 links SEOmoz found. SEOmoz still shows those anchors, but when I check the links they're not there. I removed them 3-4 weeks ago. Any ideas?
Technical SEO | | eoberlender0 -
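One way to confirm the tool is just showing stale data is to re-fetch the linking pages yourself and check that the anchor really is gone. A stdlib-only sketch of that check — you'd feed it the downloaded HTML of each linking page; the sample HTML below is invented:

```python
from html.parser import HTMLParser

class LinkFinder(HTMLParser):
    """Collect the href values of all <a> tags in a page."""
    def __init__(self):
        super().__init__()
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.hrefs.append(value)

def still_links_to(page_html: str, domain: str) -> bool:
    """True if any anchor on the page points at the given domain."""
    finder = LinkFinder()
    finder.feed(page_html)
    return any(domain in href for href in finder.hrefs)

html = '<p>old post <a href="http://www.breeze-air.com/x">anchor</a></p>'
print(still_links_to(html, "breeze-air.com"))  # True: link still present
```

If the check comes back False for the pages SEOmoz still lists, the links are gone and you're just waiting on the next crawl to catch up.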
Bing Webmaster Tools
Hi fellas, I wanted to know: once you verify Bing Webmaster Tools (via XML file) for a dev site, do you have to do the verification process again for the final site? I thought I needed to do the verification again, but once I added the final website (which has an almost identical URL) to the Webmaster Tools account, it seemed that I didn't have to verify it. I am a bit confused. Thank you for clarifying.
Technical SEO | | Ideas-Money-Art0 -
.us domains vs .com - What does Google Think?
Suppose I had two domains, carloans.us & carloans.com, with exactly the same link profiles and content (not duplicate, but you know what I mean). Would Google favour the .com domain? In my experience, yes. But I might be wrong?
Technical SEO | | Tom-R
Same with other not-so-standard domains like .biz, etc. Am I right to believe that Google can prefer the more common domain extensions?