Crawl Diagnostics
-
My site was crawled last night and 10,000 errors were found, due to a robots.txt change implemented last week between Moz crawls. This is obviously very bad, so we corrected it this morning. We do not want to wait until next Monday (6 days) to see if the fix has worked. How do we force a Moz crawl now?
Thanks
-
It's a dotnetblogengine.com blog. It's open source, but I'm not sure where to start.
-
Why so many duplicates? As it's a blog I suspect it's something to do with tags and/or categories.
Instead of trying to hide the problem using the robots.txt file, can you tackle the root cause directly?
-
Hi,
As Chris says, I don't think there is a way to force a refresh of your campaign crawls, but that crawl test tool should be able to give you an indication of whether the relevant pages are still producing duplicate content issues or whether the fix seems to be reducing them.
That being said, I don't think that robots.txt is the best way to approach duplicate content issues generally. Check out this guide for best practices. It is also worth noting that duplicate content issues can often be solved by simply removing or adjusting the various differently formatted links that are producing them in the first place (though this depends a lot on which CMS you are using and what the root cause of the duplicate content is).
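To illustrate that last point, here is a minimal Python sketch (all URLs and parameter names are hypothetical) of how cosmetically different links to the same page collapse into one URL once normalized. A crawler that does not normalize them sees several pages with identical titles and flags duplicates:

```python
# Sketch: how differently formatted links to one page look like duplicates.
# The URLs and the tracking-parameter names below are hypothetical.
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def normalize(url, drop_params=("ctm", "utm_source")):
    """Normalize a URL so cosmetic variants compare equal."""
    parts = urlsplit(url)
    # Drop tracking-style query parameters that don't change the content.
    query = [(k, v) for k, v in parse_qsl(parts.query)
             if k not in drop_params]
    # Treat trailing-slash variants as the same path.
    path = parts.path.rstrip("/") or "/"
    return urlunsplit((parts.scheme, parts.netloc.lower(),
                       path, urlencode(query), ""))

variants = [
    "http://www.example.com/about",
    "http://www.example.com/about/",
    "http://www.Example.com/about?ctm=home",
]
# All three collapse to a single canonical form.
print({normalize(u) for u in variants})
```

Fixing the templates so only one form of each link is emitted (or adding a rel=canonical tag) removes the duplicates at the source instead of hiding them from crawlers.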
-
Thanks
9,000 duplicate content and duplicate page title errors caused by my blog. I have added
User-agent: *
Allow: /Blog/post/
Disallow: /Blog
to the robots.txt to allow only the main site and the blog posts.
Is this a good way to fix it?
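If you want to sanity-check those rules before the next crawl, a minimal sketch using Python's standard-library robots.txt parser (the example blog paths are hypothetical):

```python
# Sketch: verify the robots.txt rules locally before the next crawl.
# The example article/category paths are hypothetical.
from urllib.robotparser import RobotFileParser

rules = """
User-agent: *
Allow: /Blog/post/
Disallow: /Blog
""".splitlines()

rp = RobotFileParser()
rp.parse(rules)

# Blog posts should stay crawlable...
print(rp.can_fetch("*", "/Blog/post/my-article"))  # True
# ...while tag/category listings under /Blog are blocked.
print(rp.can_fetch("*", "/Blog/category/news"))    # False
```

Note that blocking pages in robots.txt only stops crawlers from fetching them; it does not consolidate any link value the duplicate URLs have accumulated, which is one reason canonicalization is usually preferred.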
-
I'm pretty sure that you're not able to force a refresh of your campaign stats between your normal weekly crawls. The crawl test tool will crawl the site, but it doesn't refresh your campaign. Specifically, what errors were found that you're trying to get rid of?
-
Hi,
I think this will do what you are after: http://pro.moz.com/tools/crawl-test. It's limited to 3,000 pages, but it should give you an idea of whether the fix is working as you expect.
-
Thanks but I require a Moz crawl first.
-
There is also "Submit URL to Index," which allows you to submit new and updated URLs that Google themselves say they "will usually crawl that day."
Related Questions
-
How to crawl specific subfolders
I tried to create a campaign to crawl the subfolders of my site, but it stops at just one folder. Basically, what I want to do is crawl everything after folder1: www.domain.com/web/folder1/* I tried to create two campaigns: Subfolder Campaign 1: www.domain.com/web/folder1/* Subfolder Campaign 2: www.domain.com/web/folder1/ In both cases, it did not crawl any folders after the last /. Can you help me?
Moz Pro | gofluent
-
Crawl Diagnostics Summary Problem
We added a robots.txt file to our website, and there are pages blocked by it. However, the Crawl Diagnostics Summary page shows no pages blocked by robots.txt. Why?
Moz Pro | iskq
-
Crawl diagnostics incorrectly reporting duplicate page titles
Hi guys, I have a question regarding the duplicate page titles being reported in my crawl diagnostics. It appears that the URL parameter "?ctm" is causing the crawler to think that duplicate pages exist. In GWT, we've specified to use the representative URL when that parameter is used. It appears to be working, since when I search site:http://www.causes.com/about?ctm=home, I am served a single search result for www.causes.com/about. That begs the question: why is the SEOMoz crawler reporting duplicate page titles when Google isn't (they don't appear under the HTML improvements for duplicate page titles)? A canonical URL is not used for this page, so I'm assuming that may be one reason why. The only other thing I can think of is that Google's crawler is simply "smarter" than the Moz crawler (no offense, you guys put out an awesome product!). Any help is greatly appreciated, and I'm looking forward to being an active participant in the Q&A community! Cheers, Brad
Moz Pro | brad_dubs
-
Site Redesign Launch - How Can I crawl for immediate review
Just redesigned my site and want to have a crawl done to check for errors or any items which need to be cleaned up. Does anyone know how I can do this, as SEOmoz only crawls once per week? Thanks!
Moz Pro | creativemobseo
-
Did anyone else see "Rel Canonical" drop to zero after their latest SEOmoz crawl?
In the Crawl Diagnostics section of the SEOmoz reports, we get errors in red, warnings in yellow, and notices in blue. After my latest crawl, I saw the "Rel Canonical" count go from about 300 down to 0. Obviously, this isn't right, so I'm wondering if this is a bug that everyone is experiencing.
Moz Pro | UnderRugSwept
-
Crawl reports, date/time error found
Hello! I need to filter out the crawl errors found before a certain date/time, but the date and time the errors were discovered are all the same. It looks more like the time the report was generated. Fix?
Moz Pro | AJPro
-
How to crawl the whole domain?
Hi, I have an e-commerce website with more than 4,600 products. I expected SEOmoz to check all URLs, but I don't know why this doesn't happen. The campaign name is Artigos para festa and it should scan the whole domain festaexpress.com, but it crawled only 100 pages. I even tried to create a new campaign named Festa Express - Root Domain to check if it would scan, but I had the same problem: it crawled only 199 pages. Hope to have a solution. Thanks,
Eduardo
Moz Pro | EduardoCoen