Crawl Diagnostics
-
My site was crawled last night and Moz found 10,000 errors, due to a robots.txt change implemented last week in between Moz crawls. This is obviously very bad, so we corrected it this morning. We do not want to wait until next Monday (6 days) to see whether the fix has worked. How do we force a Moz crawl now?
Thanks
-
It's a dotnetblogengine.com blog. It's open source, but I'm not sure where to start.
-
Why so many duplicates? As it's a blog, I suspect it's something to do with tags and/or categories.
Instead of trying to hide the problem using the robots.txt file, can you tackle the root cause directly?
-
Hi,
As Chris says, I don't think there is a way to force a refresh of your campaign crawls, but the crawl test tool should be able to give you an indication of whether the relevant pages are still producing duplicate content issues or the fix seems to be reducing them.
That being said, I don't think that robots.txt is the best way to approach duplicate content issues in general. Check out this guide for best practices. It is also worth noting that duplicate content issues can often be solved by simply removing or adjusting the various differently formatted links that are producing them in the first place (though this depends a lot on which CMS you are using and on the root cause of the duplicate content).
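As an illustration of the root-cause approach: if tag or category pages are duplicating your posts, a canonical link in the head of each duplicate page tells search engines which URL is the preferred one. A generic sketch (example.com and the post path are placeholders; how you add the tag depends on your CMS):

<link rel="canonical" href="http://www.example.com/Blog/post/my-post" />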
-
Thanks
9,000 duplicate content and duplicate page title errors, caused by my blog. I have added
User-agent: *
Allow: /Blog/post/
Disallow: /Blog
to the robots.txt to allow just the main site and the blog posts.
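For reference, my understanding of how these directives read (assuming a crawler like Googlebot, which resolves Allow/Disallow conflicts in favor of the most specific, i.e. longest, matching rule; other crawlers may evaluate rules differently):

User-agent: *
# individual blog posts stay crawlable (the more specific rule wins)
Allow: /Blog/post/
# everything else under /Blog (tag, category, archive pages) is blocked
Disallow: /Blog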
Is this a good way to fix it?
-
I'm pretty sure that you're not able to force a refresh of your campaign stats in between your normal weekly crawls. This tool will crawl the site, but it doesn't refresh your campaign. Specifically, what errors were found that you're trying to get rid of?
-
Hi,
I think this will do what you are after: http://pro.moz.com/tools/crawl-test. It is limited to 3,000 pages, but it should give you an idea of whether the fix is working as you expect.
-
Thanks, but I require a Moz crawl first.
-
"Submit URL to Index," which allows you to submit new and updated URLs that Google themselves say they "will usually crawl that day"
Related Questions
-
What to do with a site of >50,000 pages vs. crawl limit?
What happens if you have a site in your Moz Pro campaign that has more than 50,000 pages? Would it be better to choose a sub-folder of the site to get a thorough look at that sub-folder?

I have a few different large government websites that I'm tracking to see how they are faring in rankings and SEO. They are not my own websites. I want to see how these agencies are doing compared to what the public searches for on technical topics and social issues that the agencies manage. I'm an academic looking at science communication.

I am in the process of re-setting up my campaigns to get better data than I have been getting -- I am a newbie to SEO, and the campaigns I slapped together a few months ago need to be set up better: all on the same day, making sure I've set each to include www or not for what ranks, refining my keywords, etc. I am stumped on what to do about the agency websites being really huge, and what all the options are to get good data in light of the 50,000-page crawl limit.

Here is an example of what I mean. To see how the EPA is doing in searches related to air quality, ideally I'd track all of the EPA's web presence. www.epa.gov has 560,000 pages -- if I put www.epa.gov into a campaign, what happens, given that the site has so many more pages than the 50,000 crawl limit? What do I miss out on? Can I "trust" what I get?

www.epa.gov/air has only 1,450 pages, so if I choose this for what I track in a campaign, the crawl will cover that sub-folder completely, and I am getting a complete picture of this air-focused sub-folder... but (1) I'll miss out on air-related pages in other sub-folders of www.epa.gov, and (2) it seems like I have so much of the 50,000-page crawl limit that I'm not using and could be using. (However, maybe that's not quite true -- I'd also be tracking other sites as competitors, e.g. non-profits that advocate on air quality and industry air quality sites, and maybe those competitors count towards the 50,000-page crawl limit and would get me up to it? How do the competitors you choose figure into the crawl limit?)

Any opinions on which I should do in general in this kind of situation? The small sub-folder vs. the full humongous site -- or is there some other way to go here that I'm not thinking of?
-
Meta Tag Descriptions not being found in Moz Crawls
Hey guys, I have been managing a few websites and have input them into Moz for crawl reports, etc. For a while I have noticed that we were getting a huge number of errors for missing meta tags -- numbering in the 200s. The sites were in place before I got here, and on a lot of the older posts no one had even attempted to include tags, page links, or anything else.

As they are all WordPress sites and they all already had the Yoast WordPress SEO plug-in installed, I decided I would go through each post and media file one at a time and update their meta tags via the plug-in. I did this personally, so I know that I added and saved each one; however, the Moz crawl reports continue to show that we are missing roughly 200 meta tags. I've seen a huge drop-off in 404 errors and the like since I went through and double-checked everything on the sites, yet the meta tag errors persist.

Is it the case that Moz is not recognizing the tags when it crawls because I used the Yoast plug-in? Or would you say that the plug-in is the issue and I should find another way to add meta tags to the pages and posts on the site? My main concern is that if Moz is having issues crawling the sites, is Google also seeing the same thing? The URLs include:
sundancevacationsblog.com
sundancevacationsnews.com
sundancevacationscharities.com
Any help would be appreciated!
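For reference, the meta description I'd expect the plug-in to output looks something like this in each page's head (a generic illustration; the exact markup depends on the Yoast version and settings):

<meta name="description" content="A short summary of the page content." />

If the tag is present when I view the page source but Moz still reports it missing, that would point to the crawler rather than the plug-in.
-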
1 page crawled - again
Just had to let you know that it happened again. So right now we are at 2 out of the last 4 crawls. Uptime here is 99.8% for the last 30 days, with a small downtime due to an update process on 18/5, from around 2:30 to 4:30 GMT. In relation to: http://moz.com/community/q/1-page-crawled-and-other-errors
-
Why does SEOMoz only crawl 1 page of my site?
My site is: www.thetravelingdutchman.com. It has quite a few pages, but for some reason SEOMoz only crawls one. Please advise. Thanks, Jasper
-
Crawl Diagnostics: Display problem in Excel
Hi Mozers, I've just finished watching the Crawl Diagnostics webinar, and when I try to export one of my campaigns into CSV format I have a display problem in Microsoft Excel: all the headings end up in the "A" column, so I can't do anything with the data. I can't sort or organize it; it's totally unreadable. What can I do? Thank you for your answers. Jonathan
-
Crawler has stopped crawling my website
Hi all, I recently changed my site www.ipbskinning.com to ipbskinning.com. For some reason my weekly scan states that no pages have been crawled. I tried making a new campaign with the sub-domain ipbskinning.com, but the starter crawl still returns nothing. Am I doing something wrong? :S Thanks all!
-
Crawl Diagnostics Report Lacks Information
When I look at the crawl diagnostics, SEOMoz tells me there are 404 errors. This is understandable, because some pages were removed. What this report doesn't tell me is how those pages were discovered. This is a very important piece of information, because it would tell me there are links pointing to those pages, either internal or external. I believe the internal links have been removed. If the report told me how it found each link, I would be able to take immediate action. Without that information, I have to do a lot of investigation. And when you have a million pages, that isn't easy.

Some possibilities:
- The crawler remembered the page from the previous crawl.
- There was a link from an index page - i.e. it is in the database still.
- There was an individual link from another story - so now there are broken links.
- Ditto, but it is on a static index page.
- The link was from an external source - I need to make a redirect.

Am I missing something, or is this a feature the SEOMoz crawler doesn't have yet? What can I do (other than check all my pages) to discover this?
-
"Issue: Duplicate Page Content " in Crawl Diagnostics - but these pages are noindex
Hello guys, our site is nearly perfect, according to the SEOmoz campaign overview. But it shows me 5,200 errors: more than 2,500 pages with duplicate content, plus more than 2,500 duplicate page titles. All of these pages are pages for editing profiles, so I set them to "noindex, follow" with meta robots. It works pretty well: these pages aren't indexed in the search engines. But why do the SEOmoz tools list them as errors? Is there a good reason for it? Or is this just a little bug in the toolset?

The URLs which are listed as duplicated are:
http://www.rimondo.com/horse-edit/?id=1007 (edit the IDs to see more...)
http://www.rimondo.com/movie-edit/?id=10653 (edit the IDs to see more...)

The crawling picture is still running, so maybe the errors will go away in some time...? Kind regards
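For reference, the meta robots setting described above looks like this in the head of each edit page (a generic snippet of the standard tag):

<meta name="robots" content="noindex, follow" />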