Setting up Moz to run on Staging
-
Hi Moz,
We would like to set up Moz to run on our Staging server. This would be extremely valuable, as it would alert us to new SEO issues/risks in a controlled and secure environment that is not exposed to production.
Our internal team has recommended setting up a reverse proxy server that would validate Moz's crawler (Rogerbot), either via its HTTP User-Agent header or via its IP address, and allow it to crawl our Staging environment.
Is this something that we can set up with Moz? Are there other ideas to enable Moz to crawl our Staging server?
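For what it's worth, the reverse-proxy idea could look something like this nginx sketch. The hostname, the upstream address, and the exact User-Agent match are all assumptions here, and User-Agent headers can be spoofed, so treat this as routing convenience rather than real access control:

```nginx
# Hypothetical sketch: only requests whose User-Agent contains "rogerbot"
# (Moz's crawler) are proxied through to the staging backend.
# The map block belongs in the http context, above the server block.
map $http_user_agent $allow_crawler {
    default        0;
    "~*rogerbot"   1;
}

server {
    listen 443 ssl;
    server_name staging.example.com;        # assumed staging hostname

    location / {
        # Reject anything that does not identify as rogerbot.
        if ($allow_crawler = 0) {
            return 403;
        }
        proxy_pass http://127.0.0.1:8080;   # assumed staging upstream
    }
}
```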
-
I know this is old - but I was also wondering this same thing.
Our specific use case is that we would love to be able to run an on-page grader via Moz on our staging environment before going live, to make sure we are not degrading on-page SEO for any specific keywords when changing things around for new designs.
-
No worries!
You can add directives to your site's robots.txt file as such:

User-agent: rogerbot
Disallow:

User-agent: *
Disallow: /

This will block all bots from accessing your site aside from rogerbot.
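You can sanity-check that allow/deny intent with Python's built-in robots.txt parser before deploying it — a quick sketch (the staging URL is made up):

```python
from urllib import robotparser

# The staging robots.txt: allow rogerbot everything, deny all other bots.
rules = """\
User-agent: rogerbot
Disallow:

User-agent: *
Disallow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# rogerbot is allowed; any other crawler is blocked.
print(rp.can_fetch("rogerbot", "https://staging.example.com/"))    # True
print(rp.can_fetch("Googlebot", "https://staging.example.com/"))   # False
```

Note that robots.txt is advisory only: well-behaved crawlers honor it, but it does not actually prevent access, which is why the reverse-proxy question above is still worth asking.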
There may be other methods as well, which I am personally not aware of. Perhaps other users on the forum may be able to provide tips on what you can potentially try out.
Let me know if you have any other questions!
Eli
-
Hi Eli,
Thanks for the fast response.
Creating a public-facing 'staging' instance does create a certain degree of risk, as it can potentially be discovered and indexed.
Are other clients currently using this method? If so, what best practices do you recommend we follow, if we were to go this route, to ensure that our 'Staging' site is crawled ONLY by Rogerbot and not by any other bot or service?
-
Hey!
Thanks for reaching out to us!
In order to track a new site you would indeed need to create a new campaign. You can do that here: https://analytics.moz.com/campaigns/new. Each campaign tracks the exact URL entered when it's created.
Unfortunately, you would not be able to validate us by IP address, as we do not use a static IP address or range of IP addresses; we have designed our crawler to take a dynamic approach. This means we use thousands of dynamic IP addresses, which change each time we run a crawl. We believe that this approach gives us the best dynamic view of the web!
However, one alternative I can suggest would be to identify our crawler by its user agent: rogerbot. You can read more about rogerbot in our guide. It's also worth pointing out that your site would need to be publicly available for our crawler to reach and crawl it.
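Server-side, that user-agent check is essentially a substring match — a minimal sketch (the example header values are made up, and user agents can be spoofed, so this is identification, not authentication):

```python
def is_rogerbot(user_agent):
    """Return True if the request's User-Agent identifies as Moz's rogerbot."""
    return "rogerbot" in (user_agent or "").lower()

# Hypothetical header values, shaped like typical crawler user agents:
print(is_rogerbot("rogerbot/1.2 (https://moz.com/help)"))        # True
print(is_rogerbot("Mozilla/5.0 (compatible; Googlebot/2.1)"))    # False
print(is_rogerbot(None))                                         # False
```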
I hope this helps, please do let me know if you would like more clarification or have any other queries.
You're welcome to reach out to help@moz.com if it's easier!
Eli
Related Questions
-
What's the best way to search keywords for YouTube using Moz Keyword Explorer?
I want to optimize my YouTube channel using identified keywords, but I'm concerned that the keywords I'm identifying work well for SERPs but might not be how people search on YouTube. How do I distinguish my keywords to be targeted for YouTube?
Moz Pro | Dustless0
-
How often does Moz data update when leveraging it through Supermetrics for Data Studio?
Good afternoon. A pretty straightforward question: how often does Moz data update when leveraging it through Supermetrics for Data Studio? I noticed that the "time last crawled" metric had some URLs not crawled in over 2 months... wondering if there's a way to speed it up on Moz's end.
Moz Pro | LoweProfero0
-
New to Moz, need some probably basic answers about Keywords, Linking, Competitors and General SEO
Hi, So I have quite a lot of data collected about my site now, regarding keyword research, page crawling and competitor research etc. But I find myself second-guessing what I have done and what to do next. I have done basic research for as many relevant keywords as I could think of for my site, including branded and non-branded terms.

If the main keywords for my niche are very competitive, shall I start doing more research for long-tail keywords and only try to rank for them? Does it matter how many keywords I am doing research for? Does it matter how many keywords I try to optimise for on each webpage? Are the branded keywords I am researching skewing my results? They are all ranked #1, but nearly all of the non-branded keywords are much further down the list.

Once I have decided which keywords are worth trying to rank for on each page, are the techniques to actually rank more highly for them Title, H1 tag, Description, Meta Data, fresh content and using the keywords on the page? Or are there more techniques I haven't heard of?

Under Keyword Rankings, I noticed that some of my keywords are directing to specific pages - for example, "Cavity Waxes" directs to the URL ending in .com/cavity-waxes. How do you assign the keywords I'm researching to specific URLs, or does Moz do it automatically? As most of my keywords seem to be unassigned to any URL, is that because they are not ranking highly enough?

How do I best use the data collected through Moz? Good practices? Techniques? Tips and tricks? What is the best practice for finding potential link partners and asking them for mutual linking? Techniques for finding partners that are likely to link with us, but still provide link juice.

I must apologise for this long-winded set of questions, but these are troubling me! Any help would be greatly appreciated, Kind regards, Max Johnson
Moz Pro | BiltHamber10
-
Moz crawl duplicate pages issues
Hi. According to the Moz crawl of my website, I have in the region of 800 pages which are considered internal duplicates. I'm a little puzzled by this, even more so as some of the pages it lists as being duplicates of one another are not. For example, the Moz crawler considers page B to be a duplicate of page A in the URLs below. Not sure on the live link policy, so I've put a space in the URLs to 'unlive' them.

Page A http:// nuchic.co.uk/index.php/jeans/straight-jeans.html?manufacturer=3751
Page B http:// nuchic.co.uk/index.php/catalog/category/view/s/accessories/id/92/?cat=97&manufacturer=3603

One is a filter page for Curvety Jeans and the other a filter page for Charles Clinkard Accessories. The page titles are different and the page content is different, so I've no idea why these would be considered duplicates. Thin, maybe, but not duplicate. Likewise, pages B and C are considered duplicates of page A in the following:

Page A http:// nuchic.co.uk/index.php/bags.html?dir=desc&manufacturer=4050&order=price
Page B http:// nuchic.co.uk/index.php/catalog/category/view/s/purses/id/98/?manufacturer=4001
Page C http:// nuchic.co.uk/index.php/coats/waistcoats.html?manufacturer=4053

Again, these are product filter pages which the crawler would have found using the site's filtering system, but again I cannot find what makes pages B and C duplicates of A. Page A is a filtered result for Great Plains Bags (filtered from the general bags collection), Page B is the filtered results for Chic Look Purses from the Purses section, and Page C is the filtered results for Apricot Waistcoats from the Waistcoat section.

I'm keen to fix the duplicate content errors on the site before it goes properly live at the end of this month - that's why anyone kind enough to check the links will see a few design issues with the site - however, in order to fix the problem I first need to work out what it is, and I can't in this case.
Can anyone else see how these pages could be considered duplicates of each other, please? Checking I've not gone mad!! Thanks, Carl
Moz Pro | daedriccarl0
-
What does MozTrust mean?
Hi guys. The Moz toolbar shows me that the 'mT' (MozTrust) of my website's index page is 7.07. Is that good?
Moz Pro | vahidafshari450
-
Moz email is freezing Microsoft Outlook
When I try and open an email message from a Moz Q&A response, it hangs Microsoft Outlook for more than 30 seconds. Is anyone else having this problem?
Moz Pro | ChristopherGlaeser0
-
SEO Moz Tools - "too many on-page links" result driving me nuts
A while back I remember Rand and I having a conversation about how many links should be on a page. Up until that point I had followed the NO MORE THAN 100 links on a page rule - which is what the Moz tools are telling me now in the campaigns I have running. But then, during a seminar we were both holding, this 100-link rule question came up, and Rand commented that it was probably old hat now, as the search engines can crawl a much greater number of links on a page. I was encouraged by his answer, especially where ecommerce websites are concerned.

But the Moz tool is driving me nuts telling me that this 100-link rule is still something to be adhered to. It is especially frustrating when we are discussing ecommerce sites with editable mega menus. Examples to support this question are www.bohemiadesign.co.uk and www.flowersbuydelivery.co.uk, which are 2 ecommerce sites I am aware of that use such editable mega menus and give a link count greater than 100 - and I am sure there are many more sites like this, Amazon for example. So, how much notice do we take of this warning in the Moz tools telling me about excessive numbers of links on the pages it lists as needing corrected?
Moz Pro | ICTADVIS0
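For a rough second opinion on that warning, you can count a page's links yourself before trusting any tool's threshold — a small sketch using Python's standard-library HTML parser (the sample HTML is made up):

```python
from html.parser import HTMLParser

class LinkCounter(HTMLParser):
    """Counts <a> tags carrying an href - roughly what the 100-link rule tallies."""
    def __init__(self):
        super().__init__()
        self.count = 0

    def handle_starttag(self, tag, attrs):
        if tag == "a" and any(name == "href" and value for name, value in attrs):
            self.count += 1

def count_links(html):
    counter = LinkCounter()
    counter.feed(html)
    return counter.count

# Made-up sample: two real links plus a named anchor that should not count.
sample = '<a href="/jeans">Jeans</a> <a name="top">top</a> <a href="/bags">Bags</a>'
print(count_links(sample))  # 2
```

Running this against a saved copy of a mega-menu page gives you a concrete number to weigh against the warning.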