How to get SEOmoz to re-crawl a site?
-
I had a lot of duplicate content issues and have fixed all the other warnings. I want to check the site again.
-
The crawling is only for PRO members (though that does include members on a free trial). If you want to crawl more than once a week, check out our crawl test tool at http://pro.seomoz.org/tools/crawl-test.
-
What if you're not a full member?
-
By the way, Adam, one thing that tripped me up, and evidently lots of other SEOmozers as well, is that it will report duplicate content and duplicate titles if URLs of the form http://domain.com/xyz.html are not 301 redirecting to http://www.domain.com/xyz.html.
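If you want to verify that redirect yourself before the next weekly crawl, here is a minimal sketch (assuming Python with the `requests` library; the URL is just the placeholder pattern from above, not a real page) that checks whether the non-www URL answers with a 301 pointing at the www version:

```python
# Spot-check www canonicalization: fetch the non-www URL without following
# redirects, then look at the status code and Location header.
import requests

url = "http://domain.com/xyz.html"  # placeholder; use a real page from your site

response = requests.get(url, allow_redirects=False, timeout=10)
print(url, response.status_code, response.headers.get("Location"))
# A 301 with a Location of http://www.domain.com/xyz.html is what you want;
# a 200 here means both versions resolve and can be flagged as duplicates.
```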
-
It crawls the site on a weekly basis if you are a full member.
-
Related Questions
-
What to do with a site of >50,000 pages vs. crawl limit?
What happens if you have a site in your Moz Pro campaign that has more than 50,000 pages? Would it be better to choose a sub-folder of the site to get a thorough look at that sub-folder?

I have a few different large government websites that I'm tracking to see how they are faring in rankings and SEO. They are not my own websites. I want to see how these agencies are doing compared to what the public searches for on technical topics and social issues that the agencies manage. I'm an academic looking at science communication. I am in the process of re-setting up my campaigns to get better data than I have been getting -- I am a newbie to SEO, and the campaigns I slapped together a few months ago need to be set up better: all started on the same day, set to include www or not for what ranks, keywords refined, etc. I am stumped on what to do about the agency websites being really huge, and what all the options are to get good data in light of the 50,000-page crawl limit.

Here is an example of what I mean. To see how EPA is doing in searches related to air quality, ideally I'd track all of EPA's web presence. www.epa.gov has 560,000 pages -- if I put in www.epa.gov for a campaign, what happens with the site having so many more pages than the 50,000 crawl limit? What do I miss out on? Can I "trust" what I get?

www.epa.gov/air has only 1450 pages, so if I choose this for what I track in a campaign, the crawl will cover that sub-folder completely and I'll get a complete picture of this air-focused sub-folder... but (1) I'll miss out on air-related pages in other sub-folders of www.epa.gov, and (2) it seems like I have so much of the 50,000-page crawl limit that I'm not using and could be using. (However, maybe that's not quite true -- I'd also be tracking other sites as competitors, e.g. non-profits that advocate on air quality and industry air quality sites, and maybe those competitors count towards the 50,000-page crawl limit and would get me up to it? How do the competitors you choose figure into the crawl limit?)

Any opinions on which I should do in general in this kind of situation? The small sub-folder vs. the full humongous site, or is there some other way to go here that I'm not thinking of?
Moz Pro | | scienceisrad0 -
Unable to view crawl test
After doing a crawl test I get a download report. It then downloads in CSV form, and when I go to view it there is a corruption error or just a load of gibberish characters. Can I not see the report on-site?
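If the download itself completed, the gibberish is often just the spreadsheet application guessing the wrong encoding for a UTF-8 CSV -- that is an assumption, not something confirmed in this thread. A minimal sketch (Python standard library only, with a placeholder filename for the downloaded report) that reads the export with an explicit encoding:

```python
# Preview the first few rows of the downloaded crawl-test CSV, reading it
# as UTF-8 ("utf-8-sig" also strips a byte-order mark if one is present).
import csv

with open("crawl-test.csv", newline="", encoding="utf-8-sig") as f:  # placeholder filename
    for i, row in enumerate(csv.reader(f)):
        print(row)
        if i >= 4:  # only show the first five rows
            break
```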
Moz Pro | | hantaah0 -
Moz Crawl Test: WordPress sites with and without /feed and /trackback entries?
I have multiple WP websites, and on some of them the Moz Crawl Test shows an entry for every blog post but also entries for /feed and /trackback for that single blog post. For example: www...com/someArticle, www...com/someArticle/feed, www...com/someArticle/trackback.

1. Can anyone explain why the crawl test is picking up the /feed and /trackback items? Is it simply because they are 301 redirects to the original post (www...com/someArticle)?
2. What setting(s) in WordPress are making this information appear? Or is it just that the site(s) that have the /feed and /trackback are displaying "normal" behavior for a WP site with a lot of trackbacks and feed entries?
3. Should /feed and /trackback, as well as /author, be blocked in robots.txt?

Thanks in advance for your advice and input!
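One way to answer question (1) for your own site is to fetch one of those /feed and /trackback URLs and look at what actually comes back. A minimal sketch, assuming Python with the `requests` library and a placeholder post URL:

```python
# Check what a post's /feed and /trackback variants actually return.
# allow_redirects=False so a 301/302 shows up directly in the status code.
import requests

base = "http://www.example.com/someArticle"  # placeholder; use a real post URL
for url in (base, base + "/feed", base + "/trackback"):
    r = requests.get(url, allow_redirects=False, timeout=10)
    print(url, r.status_code, r.headers.get("Content-Type"),
          r.headers.get("Location"))
# A 200 with an RSS/XML content type means the feed is a separate crawlable
# page rather than a redirect back to the post.
```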
Moz Pro | | Titan5520 -
Open Site Explorer detects links from a site that redirects to it? How is this possible?
I was checking external links to a site, scte-brasilien.de/, and was wondering why there are so many links pointing from another domain (reisen-nach-brasilien.com/) although it redirects to scte-brasilien.de/. When checking the redirecting domain, OSE knows it is redirecting: "The URL you've entered redirects to another URL. We're showing results for scte-brasilien.de/ since it is likely to have more accurate link metrics. See data for reisen-nach-brasilien.com/ instead?" How is this possible?
Best regards,
Holger
Moz Pro | | inlinear0 -
Video SERPs and SEOmoz ranking
It seems that keywords of mine that lead to video results in the SERPs are not being ranked correctly by SEOmoz! How come, when I rank #3 on a certain keyword with a video result, SEOmoz displays me as #15 (which is the second of my pages in the SERP)? What is the reason for not counting the video result in the ranking tool?
Moz Pro | | alsvik0 -
Third crawl of my sites back to 250 pages
Hi all, I've been waiting a few days for the third crawl of my sites, but SEOmoz only crawled 277 pages. The following phrase appeared on my crawl report: "Pages Crawled: 277 | Limit: 250". My last two crawls had a limit of about 10K. Any idea? Kind regards, Simon.
Moz Pro | | Aureka0 -
SEOmoz crawler Unicode bug
For the last couple of weeks, SEOmoz crawls my homepage only and gets 4xx errors for most of the URLs. The crawler has no issues with the English URLs, only with the Unicode (Hebrew) ones. This is what I see in the CSV export for the crawl (one sample): http://www.funstuff.co.il/׳ž׳¡׳™׳‘׳×-׳¨׳•׳•׳§׳•׳× 404 text/html; charset=utf-8. You can see that the URL is gibberish. Please help.
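For what it's worth, one thing to compare against is the percent-encoded (UTF-8) form of the Hebrew path, which is what a well-behaved crawler should be requesting. A minimal sketch using only the Python standard library, with a made-up Hebrew slug rather than the real one from the site:

```python
# Show the UTF-8 percent-encoded form of a Hebrew URL path and confirm it
# round-trips back to readable Hebrew.
from urllib.parse import quote, unquote

slug = "דוגמה-בעברית"   # hypothetical slug ("example in Hebrew")
encoded = quote(slug)    # quote() percent-encodes using UTF-8 by default
print("http://www.funstuff.co.il/" + encoded)
print(unquote(encoded) == slug)  # True: the encoding is reversible
```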
Moz Pro | | AsafY0 -
Open Site Explorer Question!
Hi, I have performed a search on a root domain and the page authority is higher than the domain authority. I would have thought they would be the same, or at least the other way around!
Moz Pro | | activitysuper0