Crawl Diagnostics
-
My site was crawled last night and 10,000 errors were found, caused by a robots.txt change implemented last week between Moz crawls. This is obviously very bad, so we corrected it this morning. We do not want to wait until next Monday (6 days) to see whether the fix has worked. How do we force a Moz crawl now?
Thanks
-
It's a dotnetblogengine.com blog; it's open source, but I'm not sure where to start.
-
Why so many duplicates? As it's a blog, I suspect it's something to do with tags and/or categories.
Instead of trying to hide the problem using the robots.txt file, can you tackle the root cause directly?
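If you want to confirm whether tags and categories are the culprit, one option is to group the flagged URLs from the crawl export by path and see which sections account for most of the duplicates. A rough sketch, assuming you can export the affected URLs to a CSV (the file name and the "URL" column here are assumptions about that export):

import csv
from collections import Counter
from urllib.parse import urlparse

# Tally the first two path segments of each flagged URL so that sections like
# /Blog/tag/ or /Blog/category/ stand out if they are producing the duplicates.
# "crawl_export.csv" and the "URL" column name are assumptions about the export.
counts = Counter()
with open("crawl_export.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        segments = [s for s in urlparse(row["URL"]).path.split("/") if s]
        counts["/" + "/".join(segments[:2])] += 1

for section, n in counts.most_common(10):
    print(f"{n:6d}  {section}")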
-
Hi,
As Chris says, I don't think there is a way to force a refresh of your campaign crawls, but the crawl test tool should give you an indication of whether the relevant pages are still producing duplicate content issues or whether the fix is reducing them.
That being said, I don't think that robots.txt is generally the best way to approach duplicate content issues. Check out this guide for best practices. It is also worth noting that duplicate content issues can often be solved by simply removing or adjusting the various differently formatted links that are producing them in the first place (though this depends a lot on which CMS you are using and what the root cause of the duplicate content is).
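As a quick spot check between crawls, you could also fetch a handful of the affected pages yourself and compare their title tags, which at least covers the duplicate page title warnings. A rough sketch using only the Python standard library; the URLs are placeholders for pages flagged in your crawl report:

import re
import urllib.request
from collections import defaultdict

# Placeholder URLs; swap in pages flagged for duplicate titles in the crawl report.
urls = [
    "http://www.example.com/Blog/post/some-post",
    "http://www.example.com/Blog/category/news",
    "http://www.example.com/Blog/tag/news",
]

pages_by_title = defaultdict(list)
for url in urls:
    with urllib.request.urlopen(url, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    match = re.search(r"<title>(.*?)</title>", html, re.IGNORECASE | re.DOTALL)
    title = match.group(1).strip() if match else "(no title)"
    pages_by_title[title].append(url)

for title, pages in pages_by_title.items():
    if len(pages) > 1:
        print(f"Duplicate title {title!r} appears on {len(pages)} pages:")
        for page in pages:
            print("   ", page)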
-
Thanks
There are 9,000 duplicate content and duplicate page title errors caused by my blog. I have added
User-agent: *
Allow: /Blog/post/
Disallow: /Blog
to the robots.txt to allow just the main site and the blog posts.
Is this a good way to fix it?
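For what it's worth, the rules can be sanity-checked with Python's built-in robots.txt parser to confirm which URLs they block and allow before the next crawl. A small sketch; the sample URLs are hypothetical stand-ins for a real post, a tag archive, and a top-level page:

import urllib.robotparser

# The rules from above, each directive on its own line as robots.txt requires.
rules = """\
User-agent: *
Allow: /Blog/post/
Disallow: /Blog
"""

parser = urllib.robotparser.RobotFileParser()
parser.parse(rules.splitlines())

# Python applies the rules top-down (first match wins); Googlebot uses the most
# specific (longest) match, which gives the same result for this rule set.
for url in [
    "http://www.example.com/Blog/post/my-article",   # should be allowed
    "http://www.example.com/Blog/tag/news",          # should be blocked
    "http://www.example.com/about",                  # unaffected, allowed
]:
    allowed = parser.can_fetch("*", url)
    print(f"{'ALLOWED' if allowed else 'BLOCKED':8} {url}")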
-
I'm pretty sure you're not able to force a refresh of your campaign stats between your normal weekly crawls. This tool will crawl the site, but it doesn't refresh your campaign. Specifically, what errors were found that you're trying to get rid of?
-
Hi,
I think this will do what you are after: http://pro.moz.com/tools/crawl-test. It is limited to 3,000 pages but should give you an idea of whether the fix is working as you expect.
-
Thanks, but I require a Moz crawl first.
-
"Submit URL to Index," which allows you to submit new and updated URLs that Google themselves say they "will usually crawl that day"
-
Related Questions
-
Is there a tool to crawl website meta tags to locate any in an incorrect language?
Example: I'm interested in crawling a regional site for Brazil to find any meta tags that are still in English, with the goal of fixing any localization issues.
-
Still Can't Crawl My Site
I've removed all blocks but two from our .htaccess. They are for amazonaws.com, to block Amazon from crawling us. I did a Fetch as Google on our robots.txt in our Webmaster Tools, with success. The SEOMoz crawler hits our site and gets a 403. I've looked in our blocked request logs, and Amazon is the only one in there. What is going on here?
-
How can I see the URLs affected in the SEOmoz crawl when notices increase?
Hi, when SEOmoz crawled my site, my notices increased by 255. How can I see only these affected URLs? Thanks, Sarah
-
Campaign web crawl has failed the last 4 times
I have 4 websites set up in my Pro dashboard. The only site that isn't getting crawled is an HTTPS site. It has worked for over a year, but the past 4 crawls (an entire month now) have returned only one page crawled. Is there something going on with the crawler? I really need to be able to see these stats. Has anyone else experienced this issue?
-
Crawling One Page
I set up a profile for a site with many pages, opting to set it up as a root directory. When SEOMoz crawled, it only found one page. Any ideas why this would be? Thanks!
-
A suggestion to help with linkscape crawling and data processing
Since you guys are understandably struggling with crawling and processing the sheer number of URLs and links, I came up with this idea: in a similar way to how SETI@home works (is that still a thing? Google says yes: http://setiathome.ssl.berkeley.edu/), could SEOmoz use distributed computing amongst SEOmoz users to help with the data processing? Would people be happy to offer up their idle processor time and (optionally) internet connections to get more accurate, broader data? Are there enough users of the data to make distributed computing worthwhile? Perhaps those who crunched the most data each month could receive Moz points or a free month of Pro. I have submitted this as a suggestion here: http://seomoz.zendesk.com/entries/20458998-crowd-source-linkscape-data-processing-and-crawling-in-a-similar-way-to-seti-home
-
Errors on my Crawl Diagnostics
I have 51 errors in my Crawl Diagnostics tool. 46 are 4xx Client Errors. Those 4xx errors are links to products (or categories) that we are not selling any more, so they are inactive on the website, but Google still has the links. How can I tell Google not to index them? Could those errors (and warnings) be harming my rankings? They went down from position 1 to 4 for the most important keywords. Thanks,
-
Ruling out subfolders in pro tool crawl
Is there a way to "rule out" a subfolder in the Pro dashboard site crawl? We're working on a site that has 500,000+ pages in the forums, but it's the CMS pages we're optimizing, and we don't want to spend the 10k limit on forum pages.