Hi Siddharth,
It looks like you already contacted us directly at help@seomoz.org and Abe completed the cancellation and refund for you, so I am going to resolve this question.
Please let us know if you need help with anything else.
Chiaryn
Hi Jimmy,
Thanks for writing in with a great question.
In regard to the "noindex" meta tag, our crawler will obey that tag as soon as we find it in the code, but we will also crawl any other source code up until we hit the tag, so pages with the "noindex" tag will still show up in the crawl; we just don't crawl any information past that tag. One of the notices we include is "Blocked by meta robots," and for the truthbook.com campaign, we show over 2,000 pages under that notice.
For example, on the page http://truthbook.com/quotes/index.cfm?month=5&day=14&year=2010, there are six lines of code, including the title, that we would crawl before hitting the "noindex" directive. Google's crawler is much more sophisticated than ours, so they are better at handling the meta robots "noindex" tag.
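To illustrate what I mean by crawling up to the tag, here is a rough Python sketch (a hypothetical illustration, not our actual crawler code) that records what is seen before a "noindex" directive and ignores everything after it:

```python
from html.parser import HTMLParser

class NoindexAwareParser(HTMLParser):
    """Records tags seen before a meta robots "noindex" directive."""

    def __init__(self):
        super().__init__()
        self.tags_before_noindex = []
        self.noindex_found = False

    def handle_starttag(self, tag, attrs):
        if self.noindex_found:
            return  # everything past the directive is ignored
        attr = dict(attrs)
        if (tag == "meta"
                and attr.get("name", "").lower() == "robots"
                and "noindex" in attr.get("content", "").lower()):
            self.noindex_found = True
            return
        self.tags_before_noindex.append(tag)

page = """<html><head><title>Quotes of the Day</title>
<meta name="robots" content="noindex, follow">
</head><body><p>Page body.</p></body></html>"""

parser = NoindexAwareParser()
parser.feed(page)
# The title was seen before the directive; the body was not.
```

So the page still "shows up" because the title and other lines before the tag were seen, even though nothing after the tag is kept.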
As for http://truthbook.com/contemplative_prayer/, we do respect the "*" wildcard directive in the robots.txt file and we are not crawling that page. I checked your full CSV report and there is no record of us crawling any pages with /contemplative_prayer/ in the URL (http://screencast.com/t/hMFuQnc9v1S), so we are correctly respecting the disallow directives in the robots.txt file.
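If it helps, here is a simplified sketch of how a wildcard Disallow rule can be matched against a URL (hypothetical rules for illustration, not our crawler's actual matching code):

```python
import re
from urllib.parse import urlparse

def rule_matches(rule, url):
    """Robots.txt-style prefix match where "*" matches any run of characters."""
    path = urlparse(url).path or "/"
    pattern = ".*".join(re.escape(part) for part in rule.split("*"))
    return re.match(pattern, path) is not None

# Hypothetical Disallow lines from a robots.txt file.
disallow_rules = ["/contemplative_prayer/", "/private/*.pdf"]

def is_disallowed(url):
    return any(rule_matches(rule, url) for rule in disallow_rules)
```

A rule without a wildcard is just a prefix match, while "*" inside a rule matches any run of characters in the path.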
Also, if you would ever like to reach out to the Help Team directly in the future, you can email us from the Help Hub here: http://www.seomoz.org/help, but we are happy to answer questions in the Q&A forum, as well.
I hope this helps. Please let me know if you have any other questions.
Chiaryn
Hey There,
Thanks for writing in with a great question! Capitalization definitely shouldn't matter to our crawler or to Google, so I looked into this keyword with one of our devs, and it looks like the rankings for "jobmesse" were collected about 16 hours before the rankings for "Jobmesse," so we believe this is actually a natural fluctuation in the SERP for this keyword. When I manually checked the current rankings for both versions of the keyword, I saw that your site currently ranks at #11 and the keyword has a difficulty score of 56%, so it is definitely within reason to see the rankings fluctuate in a matter of hours.
I hope this helps. Please let me know if you have any other questions.
Chiaryn
Hey Bond,
Thanks for writing in with a great question. Both the campaign data and the data in OSE are pulled from the same Mozscape index, but OSE defaults to page metrics, so you will want to make sure that you are comparing domain metrics to domain metrics between the campaign and OSE. When I checked the domain metrics between your campaign and OSE, the data did match up correctly. I may be able to explain the comparison better through screenshots, but I wasn't sure if you wanted to share your campaign information publicly.
The only thing I am seeing that doesn't show data correctly is actually the second competitor in your campaign because it is set up as ww.competitor2.co.uk, rather than www.competitor2.co.uk, which skews the subdomain metrics in the campaign link analysis. If you update the subdomain in your campaign settings, you should start getting better data for that competitor in the rankings and link analysis.
If you would like me to send you the screenshots to explain how the campaign compares to OSE privately, please let me know and I will email that to you right away.
I hope this helps!
Chiaryn
Hi Michael,
Thanks for writing in and sorry for the confusion. Unfortunately, there is an issue with your brand rules that is causing the connection to Google Analytics to fail. You currently aren't tracking any keywords that match your brand rules, so the request to pull data for those rules fails and, after a number of failures, the connection to GA also fails. You can correct this issue by adding the brand rules as keywords and reconnecting your GA account. We will then pull in all of the historical data for the Analytics account and you should no longer see the error message on your campaign.
I hope this helps. Please let me know if you have any other questions.
Chiaryn
Hey Jay,
I checked two of the pages:
http://www.downpour.com/catalogsearch/advanced/byNarrator/narrator/Seth+Green/?bioid=5554 and http://www.downpour.com/catalogsearch/advanced/byAuthor/author/Solomon+Northup/?bioid=11758 against each other in a duplicate content checker (http://www.webconfs.com/similar-page-checker.php) and they returned a similarity percentage of 67%, which we definitely shouldn't be showing as duplicate. (We consider pages at 90% or more to be dupes.)
I went to check on your crawl to see if it might be a bug, and it looks like the number of duplicate content errors went down a lot with the crawl that took place today and none of these pages are included as duplicates, so it may have been a temporary bug. If you see these pages counted as duplicates again, please let us know so that we can look into it further.
Hopefully, this helps!
Chiaryn
So glad I could help clear things up! :]
Each domain is only counted once, so a domain could have 3, 10, or 100 links to your site, but it would still only count as one linking root domain.
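As a quick illustration (just a sketch with made-up URLs, not our actual metrics code), counting linking root domains boils down to deduplicating by domain:

```python
from urllib.parse import urlparse

def count_linking_root_domains(linking_urls):
    """Each root domain counts once, no matter how many links it sends."""
    def root_domain(url):
        host = urlparse(url).hostname or ""
        # Naive: keep the last two labels (ignores multi-part TLDs like .co.uk).
        return ".".join(host.split(".")[-2:])
    return len({root_domain(url) for url in linking_urls})

links = [
    "http://example.com/page-1",
    "http://example.com/page-2",
    "http://blog.example.com/post",
    "http://other-site.com/review",
]
```

Here four links collapse to two linking root domains, because the three example.com links (including the one from the blog subdomain) share a root domain.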
Hi Jem,
Sorry for the confusion! External Links actually refers to links from other domains that point to that page. For example, if the page www.example.com/tools had three links pointing to it, where one came from example.com, one came from presentation.com, and one came from description.com, it would have one internal link (from example.com) and two external links (from presentation.com and description.com). We do not count how many links the page has pointing to other domains in the metrics you are seeing. To determine the authority of the page, it is much more important to know the links pointing into the page rather than the links pointing from the page to other sites.
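To make the example concrete, here is a small sketch (a hypothetical helper, not our actual metrics code) that classifies those three inbound links:

```python
from urllib.parse import urlparse

def classify_inbound_links(target_domain, linking_urls):
    """Split links pointing at a page into internal (same domain) and external."""
    internal, external = [], []
    for url in linking_urls:
        host = urlparse(url).hostname or ""
        if host.startswith("www."):
            host = host[4:]
        (internal if host == target_domain else external).append(url)
    return internal, external

inbound = [
    "http://example.com/nav",          # internal: same domain as the target page
    "http://presentation.com/post",    # external
    "http://description.com/review",   # external
]
internal, external = classify_inbound_links("example.com", inbound)
```

Only where the links come *from* matters here; links pointing out of the page never enter the count.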
Here is a great resource if you need more information about external links: http://www.seomoz.org/learn-seo/external-link
I hope this clears things up. Please let me know if you have any other questions.
Hi Erik,
Great question. Unfortunately, you can't add tags quite as easily as you can add a list of keywords; you would need to enter the tags manually. I'm really sorry about that!
The best way to add a group of keywords with a tag from a CSV would be to sort the CSV by tag and copy all of the keywords that fall under each tag. Next, input the keywords into the Ad Keywords box and add that specific tag manually. That should cut down on the number of times you have to add tags, though I'm afraid it isn't as easy as importing the tag from the CSV directly.
I hope this helps. Please let me know if you have any other questions.
Chiaryn
Hi Marcos,
Thanks for writing in with a great question. You should be able to include the full URL of the product you would like to track by setting it up as a subfolder as long as the final subsection of the URL isn't set up as a file. For example, you can set up a subfolder campaign for the URL http://www.modcloth.com/shop/dresses/outright-amity-dress but not for the URL http://www.infinitecat.com/infinite/cat33.html
I hope this helps. Please let me know if you have any other questions.
Chiaryn
Hi Trenton,
Thanks for writing in. Unfortunately, we aren't able to provide search volume data for keywords within our app at this time, but we are looking for a solution to include that data in the future. I sincerely apologize for the inconvenience this causes!
Please let me know if you have any other questions.
Chiaryn
Hi Walter,
Thanks for writing in and sorry for any confusion.
It looks like you actually have the PRO level API access and there is no Low Volume subscription on your account. The PRO level API is rate limited at 1 request every 5 seconds, and it does look like you are getting the correct access for that plan level on your account.
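If you want to stay under the 1-request-per-5-seconds limit in your own scripts, a simple client-side throttle like this sketch works (the short interval in the demo is just so the example runs quickly; for the PRO API you would use 5.0):

```python
import time

class RateLimiter:
    """Client-side throttle: wait() blocks until the minimum interval has passed."""

    def __init__(self, min_interval):
        self.min_interval = min_interval
        self._last = None

    def wait(self):
        if self._last is not None:
            remaining = self.min_interval - (time.monotonic() - self._last)
            if remaining > 0:
                time.sleep(remaining)
        self._last = time.monotonic()

limiter = RateLimiter(min_interval=0.1)  # use 5.0 for the PRO API limit
start = time.monotonic()
for _ in range(3):
    limiter.wait()  # an API request would go here
elapsed = time.monotonic() - start
```

The first call goes through immediately; each later call sleeps just long enough to respect the interval.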
If you would like to sign up for the Low Volume API subscription, you can fill out our Order Form at http://goo.gl/BTbyy and we will get you set up right away.
Let me know if you have any other questions.
Chiaryn
Hi Holger,
Sorry for the confusion! I took a look at your campaign and this is not related to your Bing & Yahoo rankings for that keyword, because the site is not ranking in the top 50 for those two search engines. This is actually a bit of a bug.
What happened is that, for some reason, the rankings did not update for this keyword last week on 4/20 as they should have, so the "change" filter sees that as the site dropping out of the top 50 and then showing back up at 10 in the next update. If the ranking for that keyword had updated correctly, you would see a break in the line graph rather than the connected dots between the 4/13 and 4/27 updates. I believe this keyword is missing the 4/20 update because we will publish a rankings update as long as we have information for 95% of the keywords. Since this is the only keyword affected by the issue, it means we weren't able to pull data for that one keyword but did have data for all the other keywords, so the system updated the information.
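The 95% rule I described can be sketched like this (a hypothetical helper, not our actual code):

```python
def should_publish_update(total_keywords, keywords_with_data, threshold=0.95):
    """Publish a rankings update only when data exists for enough keywords."""
    return keywords_with_data / total_keywords >= threshold

# One missing keyword out of 100 still clears the 95% bar,
# which is why the update went out without that keyword's data.
publishes = should_publish_update(100, 99)
```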
I hope this helps. Please let me know if you have any other questions.
Chiaryn
Hey Steve,
Thanks for writing in, and sorry the data is looking off in OSE. Our index is currently showing data that was collected prior to March 19th, so it may be that dianomioffers.co.uk made changes to the links since we collected data for the index. Unfortunately, this issue is a bit difficult to troubleshoot because I can only see how the links are currently formatted on the site, and the only reference I have to how the links were formatted previously is what we are showing in the index. We did previously have an index out that was released on March 29th, but due to some data retention issues, we had to roll back to the older index. We are expecting to update back to the more current release by Thursday, so you may want to check again then to see if the information for these links has been updated in the more current version of the index.
I hope this helps. Please let me know if you have any other questions.
-Chiaryn
Hi there,
I'm really sorry that you haven't received all of the emails for your rankings updates. We recently switched our emails over to a new server, and it seems the issue you are experiencing is related to that. Our engineers should be pushing out a fix in the next day or so to correct the bug, and they will be monitoring our emailing system closely to make sure this doesn't happen again. I apologize for any inconvenience this caused.
In the meantime, you should be able to view and download your most recent rankings reports from the Rankings tab of your campaigns.
-Chiaryn
Hi Akram,
Rank Tracker was down between November and February while it was being rebuilt, but it has been fully up and running since February. If the system is still showing you the landing page, you may need to clear the cache and cookies in your browser, which should let you access the new version of the tool.
Chiaryn
Hey Denelm and Mash,
I'm sorry to hear that you are having some trouble with Rank Tracker.
We don't seem to be having any issues with Rank Tracker that I can see at this time. I was able to run a few keywords through it just now and another colleague was using it earlier today with no issues. Can you let me know if you are still having problems? If so, can you let me know if you are receiving an error message or if the system is just timing out?
I look forward to hearing back from you soon.
-Chiaryn
Hi There,
Great question! Our index currently only has the ability to show which links are from the same C-block as the target URL and which links are from different C-blocks. So, for example, you could see that all of the links to seomoz.org that come from rogermozbot.com, moz.com, mozcast.com, and a few others come from the same C-block as seomoz.org, and any other link would be from a different C-block, but I'm afraid we don't keep the data to break out the links from the external C-blocks by C-block or IP. I'm really sorry about that!
I would recommend submitting a feature request for us to keep this data in the future and make it sortable by C-block. Here's the feature request forum we use to collect ideas:
http://seomoz.zendesk.com/forums/293194-seomoz-pro-feature-requests
I'm not familiar with another tool that currently provides the data you're looking for, so it might be helpful to make this a discussion question, as you might get more responses from other users.
I hope this helps.
Chiaryn
Hi There,
Thanks for writing in.
I took a look at both of your campaigns, and I'm not seeing that either one shows the Overly Dynamic URL error for any of the URLs we are crawling. That said, we do still report the errors for pages with canonical tags; we just also note that those URLs have the canonical tag. We do this to give you a full snapshot of the site.
If a URL does have a canonical tag, the search engines won't index that page, so it wouldn't be as much of an issue for the search engines, but we do feel it is important to have a full snapshot of the issues that could affect your SEO.
I hope this helps. Please let me know if you have any other questions.
-Chiaryn
Hi Regina,
Since the information for the account was sent into the Help Desk and this does seem like a bug, I'm going to mark this question as answered and I will follow up via email once I have any information from the engineers.
Thanks,
Chiaryn
Hi Henrik,
I'm afraid that Google's crawlers are much more sophisticated than ours, since they have much greater resources than we do, so they may not consider those URLs to be an issue, but we are always working to make our crawler better. Unfortunately, at this time there is no way to exclude certain URLs from our crawl without using the robots.txt file.
I'm sorry for any inconvenience that may cause. Please let me know if you have any other questions.
-Chiaryn
Hi Regina,
I'm sorry that you are having some trouble adding your competitors' Twitter handles. I'm afraid I would need to know what Twitter accounts you are trying to connect to your campaign and which campaign you are trying to add them to in order to look into this for you.
If you would like to keep that information private, you can email it into help@seomoz.org with Attn: Chiaryn in the subject line.
I look forward to hearing from you soon.
-Chiaryn
So glad I could help, Gina! Google definitely marches to the beat of their own proprietary drum. :]
Hey Gina,
I looked into the keywords that you mentioned that you are tracking ( "web design Santa Barbara" & "Santa Barbara web design") and I am not seeing your site anywhere in the top 50 results when I manually check in an incognito window, except in the ppc section. In regard to the keyword containing "in" we only show results for exact matches for the keywords that you are tracking, so we wouldn't include the results for that keyword in the rankings we show. Here are screenshots of the top 50 results for the two keywords that you are tracking where I did a search on the page for your site and only found it in the ppc section:
web design Santa Barbara:
http://screencast.com/t/kQlxP0I5
Santa Barbara web design:
http://screencast.com/t/nt8RKJhroC
It does look like the manual searches I did match what our tool is showing, so I investigated a little deeper into how Google calculates the average position metric, and it looks like they include personalized results, which we do not. Our tool shows what most users would see when doing a search for the keyword and, since you are viewing the average position section while signed into your account, that could certainly add bias to the position GA is showing. Here is a forum post from stackexchange.com that talks more about that metric: http://webmasters.stackexchange.com/questions/826/what-does-avg-position-from-google-webmaster-really-mean
I hope this helps. Please let me know if you have any other questions.
-Chiaryn
Hi Gina,
Thanks for writing in. It is a bit difficult to answer this question without knowing what the keyword is and the site that you are comparing to the keyword, but there are a few reasons this may happen. First, it can depend on how the campaign is set up. For example, if you set up the campaign for www.example.com, but example.com (without the www in the URL) is actually what is ranking, we will show that the site is not in the top 50 because the campaign is only looking for URLs in the rankings containing www in front of your domain name.
Also, you mention local listings, which are not considered organic rankings, so even if you are the #1 places result for a keyword, we will not count that as within the top 50, but we will label it as a universal result.
If you can provide me with the specific keyword and website that you are having this issue with, I can look into it more directly. If you prefer to keep that information private, you can send it to help@seomoz.org with Attn: Chiaryn in the subject line.
Thanks,
Chiaryn
Hi Sanaa,
Thanks for writing in, and sorry that you're having some issues with your crawl. It is difficult for us to say what the problem may be without knowing what site you are trying to set up a campaign for and how you are setting up the campaign.
I looked into your account and found several campaigns that are only receiving 1-2 page crawls, but all three of the URLs that those campaigns are set up for either redirect to a different domain or have a meta-robots noindex directive on them. Our crawler will not follow a redirect to a separate domain because it only crawls pages on the domain that the campaign was set up for and it will respect the noindex directive, so it is correct that we wouldn't be able to crawl any of those sites with the way the campaigns are set up.
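The redirect behavior can be sketched like this (a hypothetical helper, not our crawler's actual code; note it naively treats www.example.com and example.com as different hosts):

```python
from urllib.parse import urlparse

def should_follow_redirect(campaign_domain, redirect_target):
    """The crawler stays on the campaign's domain; cross-domain redirects stop the crawl."""
    return urlparse(redirect_target).hostname == campaign_domain
```

So a campaign set up for example.com can follow a redirect within example.com, but a redirect to any other domain ends the crawl at 1-2 pages, as you are seeing.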
I can look into the specific issue you are writing in about more directly if you let me know the website you are trying to have crawled and the way you are setting up the campaign for that site.
If you prefer to keep that information private, you can email us at help@seomoz.org and we can investigate it that way.
I hope this helps.
-Chiaryn
Hi Dana,
Thanks for writing in. The robots.txt file would not cause a 403 error. That type of error is actually related to the way the server responds to our crawler. Basically, this means the server for the site is telling our crawler that we are not allowed to access the site. Here is a resource that explains the 403 http status code pretty thoroughly: http://pcsupport.about.com/od/findbyerrormessage/a/403error.htm
I looked at both of the campaigns on your account and I am not seeing a 403 error for either site, though I do see a couple of 404 page not found errors on one of the campaigns, which is a different issue.
If you are still seeing the 403 error message on one of your crawls, you would just need to have the webmaster update the server to allow rogerbot to access the site.
I hope this helps. Please let me know if you have any other questions.
-Chiaryn
Hi there,
I tried to look into this issue in your campaign, but I am not seeing that you have any On-Page reports being run for the same keyword with different capitalization. I was going to run test reports for the specific example you gave, but I also don't see the keyword "air freight" with different capitalization in your keyword list, so I wasn't able to test exactly what you mentioned within the campaign.
I did run the test for Air Freight and air freight against the /air-freight/main-carriers page of your site in the stand-alone On-Page tool (http://pro.seomoz.org/tools/on-page-keyword-optimization) and I am seeing both versions of the keyword receiving A grades with exactly the same metrics counted for both, so they are being treated the same regardless of capitalization.
I was able to test within the campaign for the keywords Freight Consultants and freight consultants against the URL /air-freight/air-compliance/air-freight-consultants and those keywords also both return exactly the same reports with an A grade, so we are considering both versions of the keyword.
I'm not seeing a bug for capitalization of keywords in our system, so it would be really helpful if you could provide me with a screenshot of the pages where you are seeing this issue so that I can investigate further.
Thanks,
Chiaryn
Hey Kevin,
This is actually a great question.
Google isn't very open about how they select sites for the rankings, so we have to base this type of report on best practices and past experience, which means some keywords can have funky results.
For the location-specific keyword, you can almost use the Private Dining report as a mirror for the Private Dining Sacramento report. While Google knows your business is in Sacramento, our tool is only looking for specific words in the code and content of the page and does not take location into account. To the On-Page tool, Sacramento is just a collection of letters, but Google would infer that you are targeting users in Sacramento when looking at how well the page is optimized for Private Dining.
As for the reason why the site would not be ranking for the more broad keyword but would be for the location specific keyword, that is most likely because you are optimized for the term Private Dining and the location specific version of the keyword is not as competitive as the more nationally applicable keyword would be.
I hope that makes sense! Let me know if you need any clarification.
-Chiaryn
Hey Lawrence,
Campaigns have a 95% tolerance for duplicate content. This includes all the source code on the page and not just the viewable text. So if a URL is at least 95% similar in code and content to another URL, this warning will appear.
You can run your own tests using this tool: http://www.webconfs.com/similar-page-checker.php
We don't know what standard Google uses, but it's safe to say they are a bit more sophisticated than us - so you might be okay in this regard as long as you have a couple hundred words of unique text and some unique coding per page. Google won't say how much duplicate content is too much, so we like to be better safe than sorry.
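If you'd like to script your own checks, here is a rough Python equivalent of that kind of similarity comparison (a sketch using difflib over the full source, not the webconfs tool's actual algorithm):

```python
import difflib

def similarity_percent(source_a, source_b):
    """Character-level similarity over the full source (code + text), 0-100."""
    return difflib.SequenceMatcher(None, source_a, source_b).ratio() * 100

page_a = "<html><head><title>Red widget</title></head><body><p>A great red widget.</p></body></html>"
page_b = "<html><head><title>Blue widget</title></head><body><p>A great blue widget.</p></body></html>"

# Flag the pair if it crosses the 95% campaign threshold described above.
flagged = similarity_percent(page_a, page_b) >= 95
```

Because the comparison covers all of the source code and not just the visible text, boilerplate templates push pages toward the threshold even when the visible text differs.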
I hope this helps. Let me know if you need further assistance.
-Chiaryn
Hey Lawrence,
I checked a few of the pages that are listed as having a meta refresh on the Australia version of the site and it looks like the meta refreshes have been removed in the source code, but the last crawl was on the 14th for that site, so you would just need to wait for the next crawl to come through sometime today to see those changes reflected in the crawl. The same goes for the US and New Zealand version of the site. All three crawls will be updated within the next couple of days, so you should see those meta refresh errors drop off for those campaigns after the new crawls complete, as they did for your other three campaigns in their recent crawls.
Please let me know if I can help you with anything else.
-Chiaryn
Hi Michael,
Thanks for writing in. I already emailed you in response to the ticket you sent in to the Help Desk, but I will copy my answer here for your review.
--
I looked into your campaign and it seems that this is happening because of where your canonical tags are pointing. These pages are considered duplicates because their canonical tags point to different URLs. For example, http://www.woolovers.com/british-wool/mens/tweed-green/wool-countryman-suede-patch-sweater.aspx is considered a duplicate of http://www.woolovers.com/british-wool/mens/olive-green/wool-countryman-suede-patch-sweater.aspx?p=true&rspage=4 because the canonical tag for the first page is http://www.woolovers.com/british-wool/mens/tweed-green/wool-countryman-suede-patch-sweater.aspx while the canonical for the second URL is http://www.woolovers.com/british-wool/mens/olive-green/wool-countryman-suede-patch-sweater.aspx, with one URL showing tweed-green and the other showing olive-green.
Since the canonical tags point to different URLs it is assumed that http://www.woolovers.com/british-wool/mens/tweed-green/wool-countryman-suede-patch-sweater.aspx and http://www.woolovers.com/british-wool/mens/olive-green/wool-countryman-suede-patch-sweater.aspx are likely to be duplicates themselves.
Here is how our system interprets duplicate content vs. rel canonical:
Assuming A, B, C, and D are all duplicates,
If A references B as the canonical, then they are not considered duplicates
If A and B both reference C as canonical, A and B are not considered duplicates of each other
If A references C as canonical but B has no canonical tag, A and B are considered duplicates
If A references C as canonical, B references D, then A and B are considered duplicates
The examples you've provided actually fall into the fourth example I've listed above.
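The four rules above can be sketched as a small function (hypothetical code, not our actual system):

```python
def considered_duplicates(canonical_of, page_a, page_b):
    """canonical_of maps a page to its canonical target (or None if it has no tag).

    Near-identical pages are NOT flagged as duplicates only when one
    canonicals to the other, or both canonical to the same URL.
    """
    canon_a = canonical_of.get(page_a)
    canon_b = canonical_of.get(page_b)
    if canon_a == page_b or canon_b == page_a:      # rule 1: A -> B
        return False
    if canon_a is not None and canon_a == canon_b:  # rule 2: A -> C and B -> C
        return False
    return True                                     # rules 3 and 4
```

Your tweed-green and olive-green pages land in the last case: each has a canonical tag, but the tags point to different URLs.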
I hope this clears things up. Please let me know if you have any other questions.
--
-Chiaryn
Hi There,
I think we were able to resolve this issue through our email system, so I am going to mark this question as answered, but please let me know if you have any other questions.
-Chiaryn
Hi Bernardo,
You can certainly share your login credentials with the contractor. We don't put any restrictions on how many people can be logged into the account at one time and that is the only way to share access to the data, so you are more than welcome to share your credentials, but the contractor will have full access to the account.
As ColinSV mentions, you can also create a custom report that cc's the contractor directly when the report is generated, so they can review the campaign data without having full access to the account.
I hope this helps. Let me know if you have any other questions.
-Chiaryn
Hey Colum,
That forum post you are looking at is a bit old, and I believe we have resolved the authentication issue, so it isn't an overarching problem. I was also able to grab the sample code on your account and get data back when I ran the code in my browser, so I would recommend trying to run a call again. If you regenerated your credentials and still received the error, it may be that the new credentials hadn't updated yet in our database, since it can take up to 30 minutes for the database to recognize new credentials.
I would suggest trying both the sample code on your account and a regular call to see if either are still giving you the error message. If you are no longer getting the error message on the sample call, but you are still getting it on the other call, it may be a formatting issue. I'm definitely seeing data for the sample call now, so it shouldn't be any issue with the actual credentials.
I hope this helps. Please let me know if you are still seeing any issues.
Glad to hear everything's working for you again! Let us know if you run into any other issues.
Hi Meghan,
I'm sorry to hear you're having some trouble with the Mozbar in Firefox. I logged into your account and was able to access the PRO features of the Mozbar in Firefox with no issues. If the solutions that Mike and Oleg suggested don't work for you, I would recommend clearing your cache and cookies and giving it another try. That usually fixes the issue.
If you are still running into the issue after that, please let me know and I will look into it further.
Thanks for writing in, and sorry that your universal rankings campaign data is looking off. I checked into your account, and I'm seeing quite a few campaigns. Can you let us know which campaign you're seeing this issue in?
If you would prefer to keep this information private, feel free to email us at help@seomoz.org with the details.
I look forward to hearing back from you soon.
Hi Anuj,
Thanks for writing in and for using our Open Site Explorer! I'm so sorry that you still haven't been able to see your links in Mozscape. Most new sites and links will be indexed by our spiders and available in Mozscape and Open Site Explorer within 60 days, but some take even longer for many reasons, including the crawl-ability of sites, the amount of inbound links to them, and the depth of pages in subdirectories.
Just so you know, here's how we compile our index:
We update our Mozscape Index every 4 weeks. Crawling the entire Internet to look for links takes 2-3 weeks, but our crawlers are always in motion. When we need to start processing, we grab all the data they have collected and start processing, which can take up to 3 weeks to determine which of those links are the most important. You can see our most recently updated schedule here: http://seomoz.zendesk.com/entries/345964-linkscape-update-schedule
Mozscape focuses on a breadth-first approach. Therefore we almost always have content from the homepage of websites, externally linked-to pages, and pages higher up in a site's information hierarchy. However, deep pages that are buried beneath many layers of navigation are sometimes missed, and it may be several index updates before we catch all of these.
If our crawlers or data sources are blocked from reaching those URLs, they may not be included in our index (though links that point to those pages will still be available). Finally, the URLs seen by Mozscape must be linked-to by other documents on the web or our index will not include them. Therefore, if a site is not linked to by one of the URLs we crawl (or one of the URLs linked to by them in the next update), it won't show up in our index. Sorry!
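The breadth-first idea can be sketched like this (a toy link graph for illustration, not our actual crawl code):

```python
from collections import deque

def breadth_first_crawl(seed_urls, link_graph, max_pages):
    """Breadth-first crawl sketch: pages near the seeds (homepages,
    well-linked pages) are reached first; deep pages may not make the cut."""
    seen, order = set(seed_urls), []
    queue = deque(seed_urls)
    while queue and len(order) < max_pages:
        url = queue.popleft()
        order.append(url)
        for linked in link_graph.get(url, []):
            if linked not in seen:
                seen.add(linked)
                queue.append(linked)
    return order

graph = {  # hypothetical link graph
    "home": ["about", "blog"],
    "blog": ["post1"],
    "post1": ["deep-page"],
}
crawled = breadth_first_crawl(["home"], graph, max_pages=4)
```

With a page budget, the deepest page gets left out of this crawl cycle, which is exactly why deeply buried URLs can take several index updates to appear.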
I hope this information helps! While the site and links may not be indexed yet, give it some time - maybe we'll see it in OSE next month.
Best of luck,
Chiaryn
Hi Christopher,
Thanks for writing in with a great question. Unfortunately, there is no way to manually submit links to our index. All of the information we collect comes from an actual crawl of data on the internet, starting with the 10 billion URLs with the highest MozRank. We then crawl from the top down until we've crawled about 70 billion pages and then collate the data that was collected by following all of those links. The information is all automatically collected at this time.
You may consider adding this as a feature request so our product department might look into adding the ability to submit links in the future. Here's the feature request forum we use to collect ideas: http://seomoz.zendesk.com/forums/293194-seomoz-pro-feature-requests
You can add your request to submit links to the index and vote for other features you'd like to see, both of which help our product team decide what to build next.
Let me know if you have any other questions.
-Chiaryn
Hi Sevin,
Thanks for writing in with a great question. Since SEOmoz does not support geo-location, you would want to set up the campaign for http://www.relationshipcounselingcenter.org. That will give you the information about the main part of the site. If you do still want to track the pages for other locations, you would need to have links to those landing pages from somewhere on the site so we can access those URLs. I took a look at the site and I don't see any links to the landing pages for the other locations on the site so you may want to add links to the contacts and locations page if you want us to crawl all of the locations.
I hope this helps. Please let me know if you have any other questions.
-Chiaryn
Hi There,
I'm sorry that you're just getting a reply now, but this was just assigned to us over on the Help Team. I know that you've received a few ranking updates since you posted this question, but I wanted to follow up to see if you are still seeing the issue on any of your keywords. If you are, can you please let us know which campaign and which keywords you are having this issue with?
If you prefer to keep the information confidential, you can email us at help@seomoz.org. I also recommend that you email us there if you run into technical issues or are having problems with our tools in the future, because those emails come directly to the Help Team and the Q&A questions can take some time to be reviewed and assigned to us. I'm really sorry about that!
Please let me know if you are still having any issues.
-Chiaryn
Hi RAIN Group,
It sounds like you are referring to the data in the Keyword Difficulty tool. That tool is actually not connected to Google Analytics, but the search volume used to be provided through the Google AdWords API. Unfortunately, Google has removed our access to their AdWords API, so we are no longer able to provide that data. The good news is, we don't actually use the Google AdWords data in our calculation of keyword difficulty; it's just supplemental data, so those reports are still 100% accurate - they're just missing the information we used to pull from the API, which you can get at https://adwords.google.com/o/KeywordTool in the meantime.
We are working on finding another source to partner with to provide search volume in the future, but I'm afraid we don't have an ETA for when that data will be available again directly through the tool.
Thank you for your patience while we work on this!
-Chiaryn
Hi Yves,
Thanks for writing in and sorry for any confusion. The number of keywords sending organic search visits to your site may be much higher than the traffic from the keywords you are tracking in your campaign because we are showing the total traffic data from organic searches, even if you are not tracking the keywords that are sending search visits to your site. You can see the list of keywords that you are not tracking that are sending traffic to your site from the Find New Keywords tab of the Manage Keywords section of your campaign here: http://pro.seomoz.org/campaigns/227408/find_new_keywords
Please let me know if you have any other questions.
Chiaryn