Crawl Diagnostics
-
Hello Experts,
Today I was analyzing one of my websites with Moz. The issue overview reports 212 total issues, 37 of them high priority, and they all trace back to the same URL: http://blogname.blogspot.com/search?updated-max=2013-10-30T17:59:00%2B05:30&max-results=4&reverse-paginate=true. Can anyone help me find where this URL comes from and remove all of the high-priority errors?
Also, the website gets an A grade on-page, so why is it not performing well in the search engines?
-
Thanks to all of you for solving my queries and questions.
-
Hi there!
Thanks for your time in reaching out to us. Sorry to hear about your troubles with our tool. Without knowing anything about your campaign, it looks like you are trying to find where this URL shows up in the pages on your site. To see the source of those errors, you'll need to download your full Crawl Diagnostics CSV and open it up in something like Excel. In the first column, search for the URL of the page you are looking for. When you find the correct row, look in the last column, labeled referrer. This tells you the referral URL of the page where our crawlers first found the target URL. You can then visit this URL to find the source of your errors.
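If the CSV is large, the same lookup can be scripted instead of done by hand in Excel. A minimal sketch in Python, assuming the export is saved as crawl_diagnostics.csv and that its columns are headed "URL" and "referrer" (the exact header names in your export may differ, so adjust accordingly):

```python
import csv

def find_referrers(csv_path, target_url):
    """Return the referrer of every row whose URL column matches target_url."""
    referrers = []
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if row.get("URL") == target_url:
                referrers.append(row.get("referrer"))
    return referrers

# Stand-in file mimicking a Crawl Diagnostics export, so the sketch is runnable.
with open("crawl_diagnostics.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["URL", "http status", "referrer"])
    writer.writerow(["http://example.com/search?max-results=4", "404",
                     "http://example.com/page-a"])
    writer.writerow(["http://example.com/other", "200",
                     "http://example.com/page-b"])

# Print every page on which the crawler first found the problem URL.
print(find_referrers("crawl_diagnostics.csv",
                     "http://example.com/search?max-results=4"))
```

Each URL printed is a page you can visit to find the link that is generating the error.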
For more information, you can check out our Help Hub resource at http://moz.com/help/guides/search-overview/crawl-diagnostics, or let us know any questions specific to your account at http://moz.com/help/contact.
Hope that's helpful! Please let me know if you have any questions.
Best,
Peter
Moz Help Team
-
Hey There,
I just took a look at the campaign on your account and I am only seeing three high priority errors for duplicate content, which are for pages that do seem to be duplicates of each other. If you are running into an issue with a crawl test or a campaign on a different account, it would be helpful if you could email the full information to us at help@moz.com so we can investigate further.
As for your on-page grade, having a well optimized page does not guarantee that your page will do well in the search engines for that keyword. For example, there could be hundreds of other pages that are well optimized for the same keyword that have higher authority links, more links, and more relevant content according to the search engines, which would then be prioritized above your site in the ranking results. A great way to research those types of metrics would be our Keyword Analysis tool (http://moz.com/researchtools/keyword-difficulty). This tool will show you the top 10 results for a keyword and the metrics that can indicate why those pages are ranking well. If you run a full report, you can also include your own URL to compare against the pages in the actual rankings.
I hope this helps!
Chiaryn
-
I downloaded that CSV report as well, but I can't find this URL in my website's code or anywhere else.
-
When you are looking at Crawl Diagnostics, there is an option to export to CSV. When you do that you can see the referring URLs for this problem URL. That's a good place to start.
Related Questions
-
Can't Crawl Site - but deducting crawls.
Why am I being deducted crawls if MOZ keeps telling me that it can't crawl my site?
Getting Started | BloggyMoms
-
When I crawl my site On Moz it says it can't access the robots.txt file, but crawl is fine on SEM Rush - Anyone know any reason for this?
Hi guys, when I try to run a site crawl on Moz it returns an error saying that it has failed due to a problem with the robots.txt file. However, my site can be crawled by SEMrush with no mention of robots.txt issues. My developer has looked into it and insists there is no problem with my robots.txt, and I've tried the Moz crawl at least six times over an eight-week period. Has anyone ever seen such a large discrepancy between Moz and SEMrush, or have any ideas why Moz has this issue with my site? TIA everyone
Getting Started | Webreviewadmin
-
Does MOZ pick up every issue in one crawl?
Hi, does Moz pick up every error/warning in one crawl, or does it take numerous crawls? Many thanks, Lee
Getting Started | lbagley
-
Standard Syntax in robots.txt doesn't prevent Moz bot from crawling
A client is getting many false-positive site crawl errors for things like duplicate titles and duplicate content on pages that include /tag/ in the URL. An example is https://needquest.com/place_tag/autism-spectrum-disorder/page/4/. To resolve this we have set up a disallow statement in the robots.txt file that says:
Disallow: /page/
For some reason this appears not to work, as the site crawl errors continue to list pages like this. Does anyone understand why that would be, and what we need to do to properly disallow crawling these pages?
Getting Started | btreloar
-
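One likely explanation, offered as an observation rather than a confirmed diagnosis of this site: robots.txt Disallow rules match from the beginning of the URL path, so Disallow: /page/ only blocks URLs whose path starts with /page/, not nested paths like /place_tag/autism-spectrum-disorder/page/4/. Crawlers that honor the * wildcard extension (Googlebot does, and Moz's crawler documents wildcard support) would need a pattern like:

```text
User-agent: *
# Matches /page/ anywhere after the first path segment,
# e.g. /place_tag/autism-spectrum-disorder/page/4/
Disallow: /*/page/
```

Note that wildcard support is an extension to the original robots.txt convention, so behavior can vary between crawlers.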
Does anyone know where I can find the Moz video explaining how to use the Crawl Diagnostics feature? Thanks!
I am starting to use the crawl diagnostics (specifically duplicate content), and I know there was a very helpful tutorial video I saw earlier, but I can't seem to find it now.
Getting Started | John-Francis
-
Crawl test
Can anyone give me an idea how to use the Moz crawl test results? I'm a little confused about how to read them. I have a lot of "no's"... I think this is good?
Getting Started | sdwellers
-
Crawl Diagnostics Help
Hi there, where can I find my campaign's crawl diagnostics? I need to find where this information can be found, along with the specific issues. Is this possible? I can't seem to find this info. Regards, Ana
Getting Started | Starsia20000
-
How to get moz to crawl a staging domain that is blocked by robots.txt
Is it possible to get Moz to do a crawl report on a domain blocked by robots.txt and actually display all errors, instead of only one saying the domain was blocked in robots.txt? Is there anything I can add to robots.txt to let Moz do the crawl report but still hinder Google from crawling a staging domain?
Getting Started | classifiedtech
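The thread ends without an answer, but robots.txt itself supports this kind of split: a crawler obeys the most specific User-agent group that matches it, so you can allow one crawler while blocking the rest. A sketch for a staging domain, assuming Moz's crawler identifies as rogerbot (check the crawler's own documentation for its current user-agent token):

```text
# Moz's crawler: no restrictions (an empty Disallow allows everything).
User-agent: rogerbot
Disallow:

# Everyone else, including Googlebot: blocked from the entire staging site.
User-agent: *
Disallow: /
```

Note robots.txt is only a request, not access control; for a staging site, HTTP authentication or an IP allowlist is the more reliable way to keep it out of search engines.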