Crawling password-protected sites such as dev or staging areas to check them before going live?
-
Hi
I've instructed clients to password-protect dev areas so they don't get crawled and indexed, but how do we set up the Moz crawl software so we can crawl these sites for a final check for any issues before going live?
Is there an option I haven't seen to add logins/passwords so the crawl software can access them?
Cheers,
Dan
-
OK, thanks Chiaryn.
Is "rogerbot" the exact name of the Moz crawler to allow in robots.txt, or are there any other characters etc.?
Also, is it not the case that Google can still crawl/index the site once the password is removed, even if it's blocked by robots.txt? I think I read a few comments somewhere on Moz that this can still happen somehow.
Please advise ASAP.
Many Thanks
Dan
-
Hey Dan,
Unfortunately, our crawler is not able to access password-protected content on your site. If you create a staging subdomain that is not password protected, you could use the robots.txt file to allow rogerbot and block other crawlers. However, our crawler will not crawl anything that a normal search engine crawler could not reach, so we cannot crawl password-protected pages.
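As a rough example, a staging robots.txt along these lines would allow only rogerbot and ask all other crawlers to stay out (an empty Disallow line means "allow everything" for that user-agent):

```
User-agent: rogerbot
Disallow:

User-agent: *
Disallow: /
```

Keep in mind that robots.txt is only a request: well-behaved crawlers honor it, but it is not access control, and it won't stop a misbehaving bot.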
I hope this helps.
Chiaryn
-
I don't suppose either of you is able to help at all with this related question:
http://moz.com/community/q/site-crawl-errors-download-list-of-all-urls
-
Hi Andy
Screaming Frog does have a password access feature, for your info; I have just tried it.
All Best
Dan
-
Thanks Matt
I have got Screaming Frog and can confirm that it has a password access feature, but I really want Moz to be able to access the site too; I would have thought they should have this option somewhere. Are you saying Moz crawls give more info than SF (re the "Moz-level" analysis)?
Dev sites are better protected by a password than by robots.txt, aren't they?
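For anyone reading later: server-level password protection is usually only a few lines of config. On Apache, for instance (just one common setup; the file path and realm name below are placeholders), HTTP Basic auth for a dev area looks like this in .htaccess or the vhost config:

```
AuthType Basic
AuthName "Dev area"
AuthUserFile /path/to/.htpasswd
Require valid-user
```

The credentials file itself is created with something like `htpasswd -c /path/to/.htpasswd username`. Unlike robots.txt, this actually blocks access rather than just requesting that crawlers stay away.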
Cheers
Dan
-
Hi Dan
I was about to ask the exact same question, so will keep an eye out for an answer.
I hope it is possible, but I couldn't work it out.
-
I don't know if there's a way to do this in Moz, but you could always get Screaming Frog and tell it to ignore robots.txt; that will definitely crawl it. You can check titles, descriptions, canonicals, H1s, etc. that way. It doesn't give the Moz-level analysis, but it's a start that definitely works. You can also see if you have parameter issues that way.
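If you only need a quick spot check of those tags, a short script can do it too. This is just a sketch using Python's standard library, run here against a made-up inline sample page; for a real password-protected staging site you would fetch the HTML first (e.g. with an HTTP Basic auth handler or `requests` with `auth=(user, password)`) and feed that to the parser:

```python
from html.parser import HTMLParser


class SEOTagParser(HTMLParser):
    """Collects the title, meta description, canonical link, and H1s from a page."""

    def __init__(self):
        super().__init__()
        self.title = ""
        self.description = ""
        self.canonical = ""
        self.h1s = []
        self._in_title = False
        self._in_h1 = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self._in_title = True
        elif tag == "h1":
            self._in_h1 = True
            self.h1s.append("")  # start collecting text for this H1
        elif tag == "meta" and attrs.get("name", "").lower() == "description":
            self.description = attrs.get("content", "")
        elif tag == "link" and attrs.get("rel", "").lower() == "canonical":
            self.canonical = attrs.get("href", "")

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False
        elif tag == "h1":
            self._in_h1 = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data
        elif self._in_h1 and self.h1s:
            self.h1s[-1] += data


# Hypothetical staging page standing in for fetched HTML.
sample = """<html><head>
<title>Staging Home</title>
<meta name="description" content="A staging page.">
<link rel="canonical" href="https://example.com/">
</head><body><h1>Welcome</h1></body></html>"""

parser = SEOTagParser()
parser.feed(sample)
print(parser.title, parser.description, parser.canonical, parser.h1s)
```

It won't give you link metrics or anything Moz-specific, but it's enough to confirm the basics are in place before the password comes off.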