Moz Q&A is closed.
After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. While we're not completely removing the content - many posts will still be viewable - we have locked both new posts and new replies. More details here.
Moz Pro vs. Moz Analytics
-
The beta version has been out for a while. I'm curious to know what people think of Moz Analytics.
Personally, I'm having a hard time making the switch from Pro to Moz Analytics because I'm so used to the navigation and layout of Pro.
What does everyone else think of the new Moz Analytics? Advantages? Disadvantages?
-
This is a great question and a topic we are really interested in hearing more about at Moz. We understand that changing workflows can be painful and challenging, and are dedicated to making Moz Analytics a worthy successor to Pro.
If you have any specific items that you find frustrating, missing, or hard to find, we'd love it if you'd respond in the thread, or even email me directly (adam -at- moz.com). Also let us know if there are things we could do to help, whether it be resources, education, or product changes.
Thanks!
-
Thanks! I will definitely go check it out!
-
Hey guys
I asked the same question about a week or so ago and received some really good feedback.
Thanks
Brick technology
-
I hope not. Didn't want to say it, but I'm kinda hating analytics :S I wonder if I would remain as a member if the pro version is ever removed.
-
Federico,
I do the same thing! I have been wondering if one day that option to switch back will eventually go away...
-
I feel the same way! Even though I'm taken to Analytics by default, I then hit "Back to Pro" not only because I'm used to it, but also because I find it easier to read. I really don't want to spend 10 minutes waiting for pages and graphs to load, and the Pro version gives me just that: the info I need, instantly.
Related Questions
-
Should I set blog category/tag pages as "noindex"? If so, how do I prevent "meta noindex" Moz crawl errors for those pages?
From what I can tell, SEO experts recommend setting blog category and tag pages (i.e. "http://site.com/blog/tag/some-product") as "noindex, follow" in order to keep the page quality of indexable pages high. However, I just received a slew of critical crawl warnings from Moz for having these pages set to "noindex." Should the pages be indexed? If not, why am I receiving critical crawl warnings from Moz, and how do I prevent this?
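One option worth sketching here (an editor's suggestion, not official Moz guidance): keep the pages noindexed for search engines, but stop Moz's crawler from fetching them at all, so it never sees the noindex directive to warn about. Assuming the tag and category pages live under /blog/tag/ and /blog/category/ as in the example URL, a robots.txt block aimed only at rogerbot (Moz's crawler, mentioned elsewhere in this thread list) would look like:

```
User-agent: rogerbot
Disallow: /blog/tag/
Disallow: /blog/category/
```

Google would still crawl these pages and honor the "noindex, follow" directive; only Moz's reporting is affected.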
Moz Pro | | NichGunn0 -
Comparing New vs. Old Keyword Difficulty Scores
We've had a few questions regarding the new Keyword Difficulty score used in Keyword Explorer, and how it compares to the old score in our stand-alone Keyword Difficulty tool. Specifically, people want to know why some scores are much lower using the new tool. There's a general discussion of the math behind the tool in this post: Keyword Research in 2016: Going Beyond Guesswork

One of the problems we had with the original Keyword Difficulty score is that, because it's based on our Page Authority (PA) score and PA tends toward the middle of the 0-100 range, Difficulty got a bit bunched up. A Difficulty score in the low-to-mid 20s (via the old tool) is actually very low. So, we set out to re-scale the new tool to broaden that score and use more of the 0-100 range. We hoped this would allow more granularity and better comparisons.

While the logic is sound, we're concerned that we may have been too aggressive in this re-scaling, given recent feedback. So, we're going to be analyzing a large set of keywords (anonymously, of course) that people have run through the tool to see if too many Difficulty scores seem too low. If they do, we'll make some adjustments to the math.

In the meantime, please be aware that low scores may appear lower in the new tool and very high scores may appear higher. We wanted to address some of the limitations in V1 and feedback over the years, so the old and new scores really can't be compared directly in a meaningful way. We're sorry for any confusion that has caused, and we will re-evaluate if necessary.
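Moz hasn't published the exact formula here, but the effect described above (spreading a score that bunched up in the middle of the range across more of 0-100) can be illustrated with a simple linear re-scale. This is purely an editor's sketch with invented band edges, not the actual Moz math:

```python
def rescale(score, old_lo=20.0, old_hi=80.0):
    """Map a score bunched into [old_lo, old_hi] linearly onto 0-100.

    The band edges here are made-up illustration values, NOT Moz's.
    """
    score = min(max(score, old_lo), old_hi)  # clamp into the old band
    return (score - old_lo) * 100.0 / (old_hi - old_lo)

# A "low-to-mid 20s" score on the old bunched scale comes out much lower...
assert rescale(25) < 10
# ...while an already-high old score stretches higher, as the post describes.
assert rescale(75) > 75
```

This is why the two scores can't be compared number-for-number: the same underlying competitiveness lands at different points on the two scales.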
Moz Pro | | Dr-Pete3 -
MoZ vs SEMRush - Keyword Difficulty and Keyword With Low Competition.
Hi, My question is very focussed regarding these two tools. Is it correct to understand that Moz tells you keyword difficulty but not which keywords are easy to compete for, and that SEMrush tells you which keywords are easy to compete for but not which keywords are difficult? I mean, each one (Moz and SEMrush) misses the part the other covers. Hope someone will enlighten me on the point. Best
Moz Pro | | Sequelmed0 -
Block Moz (or any other robot) from crawling pages with specific URLs
Hello! Moz reports that my site has around 380 duplicate page content issues. Most of them come from dynamically generated URLs that have some specific parameters. I have sorted this out for Google in Webmaster Tools (the new Google Search Console) by blocking the pages with these parameters. However, Moz is still reporting the same number of duplicate content pages and, to stop it, I know I must use robots.txt.

The trick is that I don't want to block every page, just the pages with specific parameters. I want to do this because among these 380 pages there are some other pages with no parameters (or different parameters) that I need to take care of. Basically, I need to clean this list to be able to use the feature properly in the future. I have read through the Moz forums and found a few topics related to this, but there is no clear answer on how to block crawling of only pages with specific URLs. Therefore, I have done my research and come up with these lines for robots.txt:

User-agent: dotbot
Disallow: /*numberOfStars=0

User-agent: rogerbot
Disallow: /*numberOfStars=0

My questions: 1. Are the above lines correct, and would they block Moz (dotbot and rogerbot) from crawling only pages that have the numberOfStars=0 parameter in their URLs, leaving other pages intact? 2. Do I need to have an empty line between the two groups (I mean between "Disallow: /*numberOfStars=0" and "User-agent: rogerbot"), or does it even matter? I think this would help many people, as there is no clear answer on how to block crawling of only pages with specific URLs. Moreover, this should be valid for any robot out there. Thank you for your help!

Moz Pro | | Blacktie0 -
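An editor's aside on the wildcard question above: Python's built-in urllib.robotparser does plain prefix matching and does not understand the * wildcard, so here is a small hand-rolled sketch of how crawlers that support wildcard Disallow rules would match that pattern. The matching semantics (prefix match, * as "any characters", $ as end anchor) follow Google's published robots.txt rules; whether dotbot and rogerbot implement exactly these semantics is an assumption here.

```python
import re

def disallow_matches(pattern: str, path: str) -> bool:
    """True if a robots.txt Disallow pattern (with * and $ wildcards)
    matches the given URL path, using prefix-match semantics."""
    regex = re.escape(pattern).replace(r"\*", ".*")
    if regex.endswith(r"\$"):
        regex = regex[:-2] + "$"  # $ anchors the pattern at the end of the URL
    return re.match(regex, path) is not None

# Pages carrying the parameter are blocked...
assert disallow_matches("/*numberOfStars=0", "/hotels?numberOfStars=0")
# ...while pages without it stay crawlable.
assert not disallow_matches("/*numberOfStars=0", "/hotels?numberOfStars=4")
```

On question 2: the blank line matters in the other direction. Blank lines separate rule groups, so the rules for dotbot and rogerbot should each sit directly under their own User-agent line, with a blank line between the two groups.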
What to do with a site of >50,000 pages vs. crawl limit?
What happens if you have a site in your Moz Pro campaign that has more than 50,000 pages? Would it be better to choose a sub-folder of the site to get a thorough look at that sub-folder?

I have a few different large government websites that I'm tracking to see how they are faring in rankings and SEO. They are not my own websites. I want to see how these agencies are doing compared to what the public searches for on technical topics and social issues that the agencies manage. I'm an academic looking at science communication. I am in the process of re-setting up my campaigns to get better data than I have been getting -- I am a newbie to SEO and the campaigns I slapped together a few months ago need to be set up better, such as all on the same day, making sure I've set it to include www or not for what ranks, refining my keywords, etc. I am stumped on what to do about the agency websites being really huge, and what all the options are to get good data in light of the 50,000-page crawl limit.

Here is an example of what I mean: To see how EPA is doing in searches related to air quality, ideally I'd track all of EPA's web presence. www.epa.gov has 560,000 pages -- if I put in www.epa.gov for a campaign, what happens with the site having so many more pages than the 50,000 crawl limit? What do I miss out on? Can I "trust" what I get? www.epa.gov/air has only 1450 pages, so if I choose this for what I track in a campaign, the crawl will cover that sub-folder completely, and I am getting a complete picture of this air-focused sub-folder ... but (1) I'll miss out on air-related pages in other sub-folders of www.epa.gov, and (2) it seems like I have so much of the 50,000-page crawl limit that I'm not using and could be using. (However, maybe that's not quite true -- I'd also be tracking other sites as competitors, e.g. non-profits that advocate on air quality and industry air quality sites, and maybe those competitors count towards the 50,000-page crawl limit and would get me up to the limit? How do the competitors you choose figure into the crawl limit?)

Any opinions on what I should do in general in this kind of situation? The small sub-folder vs. the full humongous site vs. is there some other way to go here that I'm not thinking of?
Moz Pro | | scienceisrad0 -
Screaming frog, Xenu, Moz giving wrong results
Hello guys and gals, This is a very odd one: I have a client's website, and most of the crawlers I'm using are giving me weird/wrong results. For now, let's focus on Screaming Frog: when I crawl the site, it will list e.g. meta titles as missing (not all of them, though), yet on the site itself the titles are not missing, and Google seems to be indexing the site fine. The robots.txt is not affecting the site (I've also tried changing the user agent). The other odd thing is SF gives a 200 code but, as the status, tells me "connection refused" even though it's giving me data. I'm unable to share the client's site. Has anyone else seen this very odd issue? Any solutions for it? Many thanks in advance for any help,
Moz Pro | | GPainter0 -
Problem to log into moz
Moz logs me out of my account every time, and then I cannot log back in. It shows my name on the left side as if I am logged in, but when I want to go to the community, suddenly I am not logged in. It often shows a 502 error. It first happened on Firefox; then I managed to log in on Chrome, and now I have had to log in using private browsing.
Moz Pro | | Rebeca12 -
Percentage of good links vs. bad
Hi Does anyone know the best way of telling good links from bad links using the SEOmoz tools? I bought some directory links to two or three pages on my site a few years back. They were all very obviously spammy because of the anchor text, and I didn't have a high enough ratio of good links to counteract them. I read somewhere that if more than 10% of the links to a page have the same (or similar) anchor text, it's obvious that you're on the bad list.
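The 10% figure mentioned above is a community rule of thumb rather than an official metric, but checking your own anchor-text distribution is straightforward once you've exported your link anchors from a backlink tool. A rough sketch, assuming the anchor texts are already in a plain Python list:

```python
from collections import Counter

def anchor_text_shares(anchors):
    """Return each anchor text's share of all links, highest first."""
    counts = Counter(a.strip().lower() for a in anchors)
    total = sum(counts.values())
    return [(text, n / total) for text, n in counts.most_common()]

# Hypothetical exported anchors for one page:
anchors = ["cheap widgets"] * 6 + ["Example Co"] * 3 + ["example.com"]
shares = anchor_text_shares(anchors)
# "cheap widgets" dominates at 60% of links -- well over the 10% rule of thumb.
assert shares[0] == ("cheap widgets", 0.6)
```

Any anchor text whose share is far above the rest of the profile is a candidate for cleanup, regardless of where the exact threshold sits.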
Moz Pro | | nsjadmin0