Can I prevent some pages from being crawled by the SEOMoz spider without affecting the Google spider?
-
Well, basically that's the question. I have more than 10,000 pages on the website, and I'm not interested in having reports for many of them, but I still want to get SEO visits on them, so I want Google to keep crawling them easily...
Thanks!
-
Hey Martijn,
Thanks!
What if I want to prevent rogerbot from crawling a range of pages, say /000001/, /000002/ ... /200000/?
Thanks!
-
Hi Matt,
Absolutely, you can do this by adding a block to your robots.txt file targeted at the user agent rogerbot, which is what SEOMoz uses for its spider. For example:
User-agent: rogerbot
Disallow: */anythingyouwanttoexcludeforroger/*
Hope this helps!
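To sanity-check rules like this before relying on them, Python's standard-library robots.txt parser can confirm that rogerbot is blocked while Googlebot is unaffected. This is a minimal sketch: the /products/ prefix and example.com URLs are hypothetical stand-ins, not paths from this thread, and it assumes the pages you want to exclude share a common directory prefix.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: block only rogerbot from /products/,
# leave every other crawler (including Googlebot) unrestricted.
robots_txt = """\
User-agent: rogerbot
Disallow: /products/

User-agent: *
Disallow:
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# rogerbot is denied the excluded range of pages...
print(parser.can_fetch("rogerbot", "https://example.com/products/000001/"))   # → False
# ...but Googlebot can still fetch them.
print(parser.can_fetch("Googlebot", "https://example.com/products/000001/"))  # → True
```

Note that a numbered range like /000001/ ... /200000/ can't be expressed directly in robots.txt; grouping those pages under one directory (or a shared prefix) and disallowing that prefix for rogerbot is the practical approach.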