I'm so sorry that you ran into this issue, Thomas! We were making a quick change to our campaign settings and it caused a short outage in the campaign set-up widget for the search engines. Everything should be up and running smoothly again, so please give your campaign another try and let me know if you need any further assistance with your account.
Best posts made by ChiarynMiranda
-
RE: Error Selecting Search Engine During Moz Campaign Set-Up
-
RE: The Old Moz Pro
Hi Pania,
Thanks for writing in. Unfortunately, the old PRO application was deprecated in October of 2014 and it is no longer accessible. We did send out several notifications to people that were using the application prior to the shutdown, so I apologize if you didn't see those. Here is a transitional guide that may be helpful to you: https://moz.com/pro-to-analytics
If you can let me know specifically what features were most helpful to you in the old application, I can pass that along to our product team, which takes this kind of feedback very seriously.
I apologize for the inconvenience this causes! Please let me know if I can help you with anything else.
-
RE: Keyword Difficulty Tool not working!
Hey Sean,
Thanks for writing in, and sorry that you are running into some issues with the Keyword Difficulty tool. Unfortunately, Google has changed the way they present rankings information, so we have had to make some changes to our collection in order to keep up. I'm afraid this has caused some issues with providing the real-time data for the Keyword Difficulty tool, and the tool is currently unresponsive. At this time we do not have an ETA for when the tool will be up and running again, but we are hopeful that the issue will be resolved by the end of the week. Thank you for your patience while we work to resolve this issue.
I apologize for any inconvenience this may cause.
-Chiaryn
-
RE: Need help fixing the duplicate content that keeps growing
Hey There! It looks like you are trying to get some assistance without specifically naming the site you are concerned about, and I definitely understand that, but it is really difficult to give advice on this issue without more detailed information. However, I took a look at your campaigns, and I am going to address the issue I am seeing with the site that had the largest increase in duplicate content over the last couple of crawls. I apologize if this isn't the site you are referring to.
The campaign I'm looking at is the 5 Star campaign. It looks like a large number of the pages with duplicate content are related to ?attachment parameters in the URL, such as www.site.com/?attachment_id=77899. There is very little content on these pages and it looks like they are added to the site pretty regularly, since all of the ones I looked at are dated closely together.
I'm not an SEO expert, so Bryan may have better advice for you, but I can give a few suggestions for how to resolve this issue. I don't entirely understand the purpose of these pages, and that would affect which of these options might be best for your strategy for the site.
You can add a canonical tag to these pages to point to one specific page as the most important page with this content. For this option, they would all have to point back to the same page; otherwise our crawler will still show them as duplicates, because we assume that pages with different canonical targets are likely to be duplicates as well. Google, however, will stop indexing these pages.
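For reference, a canonical tag is a single line in the page's head section; a minimal sketch (the href below is a placeholder, not a URL from your actual site) would look like this:

```html
<!-- Added to the <head> of each ?attachment_id= duplicate page.
     The href is a placeholder; point it at whichever page you want
     treated as the primary version of this content. -->
<link rel="canonical" href="http://www.site.com/primary-page/" />
```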
You can also block these pages from being accessed using the robots.txt file for this site. For example, it would look something like this:
User-agent: *
Disallow: /*?attachment_id=
...and so on, until you have covered all of the parameters you would like to block. The User-agent: * line blocks all crawlers from accessing those pages, but you can also use User-agent: rogerbot to specifically block only our crawler.
I hope this helps! Please let me know if I can help you with anything else.
-
RE: On page grader not working with Squarespace?
I'm so sorry! I didn't see your reply previously. I was just coming back to the thread to say that we have found a temporary workaround that should fix the issue while we work on a more permanent solution. I hope that is still helpful to you and, again, I apologize for not following up with you sooner.
-
RE: Rank Tracking results
Hi There,
Thanks for writing in.
You may be seeing this discrepancy because we only provide the option to check Google Belgium (which defaults to Dutch) in Rank Tracker, rather than breaking Google Belgium into language-specific engines as we do in the campaign. It seems that, in the campaign, you are tracking under Google Belgium-French, which may mean that the system is pulling rankings from different datacenters, and the results can differ between the two languages.
It's also important to note that Rank Tracker is an older tool that works off of a completely different system than the campaign rankings, but we are working to align it with our campaign rankings collection and include more of the search engine options that you have with the full campaign.
I hope this helps. Please let me know if you have any further questions.
-Chiaryn
-
RE: I have a client with a wordpress.com site.
Hey Cindy, you can definitely track Wordpress sites with our tools.
You don't need to add any type of tracking code for our tools, so you can just enter the homepage URL in the campaign creation and get started from there. All of our reports are generated by accessing the site directly and crawling through the links to pages we find there.
I'm afraid I don't have any advice on the Google Analytics issue, though, since I haven't managed a Wordpress site myself. So I will leave that open to our community members to answer.
Please let me know if I can help you with anything else.
-
RE: 4 days waiting for a Moz Crawl - How quick are yours?
I am so sorry that your crawl tests haven't completed yet! We are currently experiencing a major backlog with our crawl tests. Our engineers are aware of the issue and working to get these reports completed as quickly as possible, but I'm afraid we don't have a timeline for when we will get through the backlog.
The issue is now reported on health.moz.com if you would like to follow there for further updates.
I sincerely apologize for the inconvenience this causes and I thank you for bearing with us while we work to get this resolved!
-
RE: Open Site Explorer Social Media?
Hi there,
Thanks for writing in with a great question.
The social media information is based specifically on the URL you are inputting, so it would not include the likes on the Facebook page for the site unless you are specifically looking up facebook.com/examplebusiness in Open Site Explorer. For example, for www.seomoz.org OSE shows 155 Facebook likes (http://screencast.com/t/m0curvDuwsVU) while for www.facebook.com/SEOmoz it shows over 79,000 likes (http://screencast.com/t/34eHBJ4IEu). All of the metrics in the top section of OSE are specifically related to the exact URL you are researching and would not pull in information for other URLs.
I hope this clears things up. Please let me know if you have any other questions.
-Chiaryn
-
RE: I have a client with a wordpress.com site.
The traffic section is an integration that you can add to your campaign, but it is not necessary for the majority of the reports we run. Pretty much all of our reports are generated with proprietary data collected by our own crawlers and do not rely on the integration to be set up. Having a GA account connected to your campaign can certainly add color and insight to these other reports, but it is not required for our tools to run on the site.
-
RE: 4 days waiting for a Moz Crawl - How quick are yours?
Hey there!
The backlog of reports is clearing out now. It looks like all of your reports that were submitted prior to today have been completed, and the newest report should complete within the normal 24-hour time period.
Please do let me know if I can help you with anything else.
-
RE: Greek keyword
Hi Everyone,
We actually have a known issue with some non-Latin character sets that aren't displaying properly in On Page reports, CSV files, and the Web App UI. We have had complaints about Greek, Cyrillic, Hebrew, Arabic, and some other character sets. It only affects specific characters, so we still support those languages in our tools. As of right now, we aren't sure when this will be fixed, but our engineers are working on the issue.
For now, you can export the data to a CSV. You should be able to open UTF-8 CSV files in Excel using the Text Import Wizard, which allows you to specify the encoding of the file you're opening. (Note: this may only work in PC versions of Excel.)
a. To start the Text Import Wizard, on the Data tab, in the Get External Data group, click From Text. Then, in the Import Text File dialog box, double-click the text file that you want to import.
b. Under ‘Original data type’, select Delimited
c. In the ‘File origin’ drop-down, select UTF-8
d. Click [Next >]
e. Under Delimiters, select ‘Comma’
f. Click [Next >], then [Finish] and [OK] to save it under the existing worksheet
You can read more about the issue in our Known Issues forum here:
https://seomoz.zendesk.com/entries/21080226-encoding-issues-with-non-latin-characters
I hope this helps. Please let us know if you have any other questions.
Chiaryn
-
RE: Page Count per campaign - Crawl Usage 500,000 Pages
This is only true for the Large plan level and above. The Standard and Medium plan levels have a set limit of 50k pages per campaign, and the limits aren't adjustable.
-
RE: I'm getting an error when I try to preview my custom report
I know this was also reported in a message to our help team, so I just wanted to follow up here to note that this was a bug in our system that has been resolved by our engineers. We apologize for the inconvenience this caused!
-
RE: How accurate is SEOMoz's keyword analysis tool?
Hi Mathew,
Thanks for writing in with a great question.
We actually use different parameters than AdWords to rate keyword difficulty. We use MozRank, MozTrust, Domain Authority, and Page Authority to calculate the score. These metrics come from our own crawl and algorithm, and we calculate the Keyword Difficulty score based on the first 20 results, so the number we display can sometimes be different than what AdWords shows.
If you want to know more about the metrics we use, you can take a look at this resource: http://www.seomoz.org/learn-seo/domain-authority
I went ahead and marked this as a discussion question so that others can recommend the tools they use.
I hope this helps. Please let me know if you have any other questions.
-Chiaryn
-
RE: SEOMoz is finding jpegs on my site and reporting them as pages with missing meta titles
Hi Everyone,
We recently started using a new crawler that had a few shortcomings we've been working to improve. This issue may be related to changes in the new crawler. If you write into us at help@seomoz.org with your campaign information and the email address associated with your account, we can look into the issue for you.
We look forward to hearing from you soon.
Chiaryn
-
RE: Error Code 612: Error response for robots.txt
Hey David, thanks for your question.
I took a look at your campaign and it seems that this is a bit different than the case in the previous post that Thomas linked to in his reply.
It actually looks like you have a redirect loop in place which could be confusing our bot, Roger. The robots.txt page redirects to the www version of the homepage, which redirects to an /en/home subfolder, which redirects to /en/home?r=US. You can verify this using the third party tool https://httpstatus.io/ (http://www.screencast.com/t/pk4fvGXJ1).
I can't say with complete certainty that this is causing the error message you are seeing, as I have never seen a redirect loop on the robots.txt file for a site, but I do know that the crawler will only follow two redirects; any more than that will prevent us from accessing the page, which would likely be reported as an error with the robots.txt.
I would recommend fixing this so that either there is only one 301 in place pointing to a 200 page, or the robots.txt file for the site responds with a 200 status directly. This will need to be done by your site administrator or developer.
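If you have curl installed, you can also inspect the redirect chain yourself from the command line. This is a rough sketch, and the domain below is a placeholder, not the actual site from this thread:

```shell
#!/bin/sh
# Follow a redirect chain hop by hop, printing each status code and URL.
# "www.example.com" is a placeholder domain, not the site from this thread.
url="http://www.example.com/robots.txt"
hops=0
while [ "$hops" -lt 10 ]; do
  # -s: silent, -I: headers only, -o: discard body,
  # -w: print the status code and the redirect target (if any)
  out=$(curl -sI -o /dev/null -w '%{http_code} %{redirect_url}' "$url")
  code=${out%% *}          # text before the first space (status code)
  next=${out#* }           # text after the first space (Location target)
  echo "$code $url"
  [ -z "$next" ] && break  # no further redirect: the chain ends here
  url=$next
  hops=$((hops + 1))
done
```

A chain that never reaches a 200 within a couple of hops is the kind of loop described above.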
-
RE: How to set the crawler or reports to ignore
Hey Str8,
Donford is correct. You would need to specifically block our crawler from the mobile pages in your robots.txt file. Unfortunately, we don't currently have a way to disregard specific pages or errors in the web app, but we are looking to add that function sometime in the future.
I hope this helps. Please let me know if you have any other questions.
-
RE: Pdf page titles and descriptions errors
Hi Perri,
Thanks for writing in and sorry your crawl diagnostics are looking off.
This is actually a small bug with our crawler counting images and PDFs as actual pages. We are aware of the issue, and our engineers have figured out what caused it, but it may take them some time to make the necessary changes to the crawler so that images and PDFs no longer return errors in the crawl diagnostics report. Thank you for your patience while we look into this.
I'm sorry for any inconvenience.
-Chiaryn
-
RE: Site Not On Google, SEOmoz shows as 43
Hey Dave,
Thanks for writing in.
There are a few reasons why you may be seeing slightly different results than what we are showing in the web app. Since the search engines all maintain multiple indices that run across multiple datacenters, you can get somewhat different ranking results from different queries for the same keyword. Other elements, such as personalization, geography and search history (even if you're logged out of your Google/search engine account) can also influence ranking positions. There can definitely be a lot of variation in what different people, searching from different computers/locations might see in the rankings. We try to provide ranking information for what most users searching in your specified search engine would see.
I actually ran the keyword "plus size lingerie" through Google in an incognito window and I am seeing www.plussizeplum.com on the 5th page of search results in position 46 (http://screencast.com/t/vp6laUxQO).
It is also important to note that, when looking that far down, the results can be unstable and experience a lot of fluctuation.
I hope this helps. Please let me know if you have any other questions.
-Chiaryn
-
RE: SEO Campign Wont Synch up with facebook
Hey David,
It looks like our engineers found that Facebook has made some changes to their API that have disabled the type of authentication we use to connect the Facebook page to your campaign. We are looking for a solution to this issue right now, but I'm afraid it may take a while for us to find a way to work with the changes Facebook has made, so thank you for your patience in the meantime.
If you have any other questions about this issue, I would recommend that you write in to help@seomoz.org, where the Help Team will be able to answer you directly.
-Chiaryn
-
RE: On Page Grading Not Working
Hey Samuel,
Sorry for the confusion about this feature. The On Page Tool only automatically runs reports on pages that are already ranking in the top 50 of your default search engine for one or more of your keywords. Thus, your On Page Rankings Reports update after your rankings update each week. If you have no pages ranking in the top fifty, then you'll have no automatically generated On Page Reports. If you have 30 pages ranking in the top fifty, then you should have 30 On Page reports.
You can always manually generate reports for any URL and one of your keywords. Simply go to the Report Card tab in the On Page section, and cut and paste or type a new URL into the URL drop box. Then select your keyword from the drop down and select "Grade My Optimization." After that, you can tell the Web App to start tracking that combination in the future, and you should start seeing weekly reports.
I hope this helps! Please let me know if you have any other questions.
-Chiaryn
-
RE: Open site explorer varied results according to country
Hi Steve,
Thanks for writing in. You can't really specify localities in the Compare Link Metrics section of Open Site Explorer, but if you are checking your competitors using different TLDs, that would affect the metrics you are seeing for the competitors. If you can give me some exact examples of the sites you are comparing in that section, I can look directly into the issue you are seeing.
Our system isn't set to show different results based on different regions; it just shows different results based on the number of links to a domain, which would change between TLDs since example.com vs example.ca would be considered two different domains.
I look forward to hearing back soon.
-Chiaryn
-
RE: Is there a way you can determine the time SEOMoz crawls your website
Hi Simon,
Great question! Unfortunately, there isn't a way to determine when we will crawl your site. When a crawl finishes, the campaign is placed in a general queue for its next scheduled crawl date, and we crawl your site when it is the next campaign in the queue for that date. The time of day can depend on the number of other crawls scheduled for that date and how long those crawls take to complete. We don't have a way to schedule the crawl for a specific time.
I'm really sorry for any inconvenience this causes. Please let me know if you have any other questions.
-Chiaryn
-
RE: Crawl Diagnostics 403 on home page...
Hi Dana,
Thanks for writing in. The robots.txt file would not cause a 403 error. That type of error is actually related to the way the server responds to our crawler. Basically, this means the server for the site is telling our crawler that we are not allowed to access the site. Here is a resource that explains the 403 http status code pretty thoroughly: http://pcsupport.about.com/od/findbyerrormessage/a/403error.htm
I looked at both of the campaigns on your account and I am not seeing a 403 error for either site, though I do see a couple of 404 page not found errors on one of the campaigns, which is a different issue.
If you are still seeing the 403 error message on one of your crawls, you would just need to have the webmaster update the server to allow rogerbot to access the site.
I hope this helps. Please let me know if you have any other questions.
-Chiaryn
-
RE: I'm shocked! KEYWORD SERPs: GA avg. position vs. SEOmoz
Hi Gina,
Thanks for writing in. It is a bit difficult to answer this question without knowing what the keyword is and the site that you are comparing to the keyword, but there are a few reasons this may happen. First, it can depend on how the campaign is set up. For example, if you set up the campaign for www.example.com, but example.com (without the www in the URL) is actually what is ranking, we will show that the site is not in the top 50 because the campaign is only looking for URLs in the rankings containing www in front of your domain name.
Also, you mention local listings, which are not considered organic rankings, so even if you are the #1 places result for a keyword, we will not count that as within the top 50; instead, we will label it as a universal result.
If you can provide me with the specific keyword and website that you are having this issue with, I can look into it more directly. If you prefer to keep that information private, you can send it to help@seomoz.org with Attn: Chiaryn in the subject line.
Thanks,
Chiaryn
-
RE: I'm shocked! KEYWORD SERPs: GA avg. position vs. SEOmoz
Hey Gina,
I looked into the keywords that you mentioned that you are tracking ( "web design Santa Barbara" & "Santa Barbara web design") and I am not seeing your site anywhere in the top 50 results when I manually check in an incognito window, except in the ppc section. In regard to the keyword containing "in" we only show results for exact matches for the keywords that you are tracking, so we wouldn't include the results for that keyword in the rankings we show. Here are screenshots of the top 50 results for the two keywords that you are tracking where I did a search on the page for your site and only found it in the ppc section:
web design Santa Barbara:
http://screencast.com/t/kQlxP0I5
Santa Barbara web design:
http://screencast.com/t/nt8RKJhroC
It does look like the manual searches I did match what our tool is showing, so I investigated a little deeper into how Google calculates the average position metric, and it looks like they include personalized results, which we do not. Our tool shows what most users would see when doing a search for the keyword and, since you are viewing the average position section while signed into your account, that could certainly add bias to the position GA is showing. Here is a forum post from stackexchange.com that talks more about that metric: http://webmasters.stackexchange.com/questions/826/what-does-avg-position-from-google-webmaster-really-mean
I hope this helps. Please let me know if you have any other questions.
-Chiaryn
-
RE: I cannot add a specific twitter account
Hi Regina,
I'm sorry that you are having some trouble adding your competitors' Twitter handles. I'm afraid I would need to know what Twitter accounts you are trying to connect to your campaign and which campaign you are trying to add them to in order to look into this for you.
If you would like to keep that information private, you can email it into help@seomoz.org with Attn: Chiaryn in the subject line.
I look forward to hearing from you soon.
-Chiaryn
-
RE: Can't log into Firefox MozBar
Glad to hear everything's working for you again! Let us know if you run into any other issues.
-
RE: Competitive Link Finder
Hi There,
Thanks for writing in and sorry that Competitive Link Finder isn't working for you.
Unfortunately, Competitive Link Finder is an older Labs tool that doesn't always work, and we are no longer providing support for it. As it mentions on the tools page, with SEOmoz Labs tools, PRO users can test-drive tool prototypes and give feedback on the latest SEOmoz technology and product design, but we do not offer support or guarantee data accuracy for these prototype tools. If you'd like to leave feedback, please do so in the SEOmoz Feature Request Forum.
This tool will sometimes work perfectly fine, as it seems to be working for DesignerBoutiqueMenswear, but it also has a lot of issues and, again, we can't guarantee that it will be consistent at this time.
Thanks for your understanding. If you have any other questions about this or any other tools, we recommend that you send them to help@seomoz.org.
-Chiaryn
-
RE: OK Crawl Test Link Question Again!
So glad I could help clear things up! :]
Each domain is only counted once, so a domain could have 3, 10, or 100 links to your site, but it would still only count as one linking root domain.
-
RE: Crawl Errors and Notices drop to zero
Hey Vanessa,
Thanks for writing in.
I looked into your account and I think you are referring to the Sparky campaign. Unfortunately, I can only see the most recent crawl data, so I don't have a way to compare the crawls from prior to April 24th to see why the number of errors and warnings would have dropped off around that time.
I do see that we picked up a noindex, nofollow tag on the blog pages on April 16th, so it may be that we were crawling other pages on the blog that had errors and warnings before the tag was added. But once the noindex, nofollow tags were added, we weren't able to crawl those pages and report back on the errors.
If you can think of any other changes that may have taken place around April or if you have an old report that shows some of the URLs that were reported as having errors, I can look into this further for you. If you prefer not to include the error report on this public forum, you can always email it to help@seomoz.org and include my name in the subject line.
I hope this helps.
Chiaryn
-
RE: Crawl Errors and Notices drop to zero
Hey Vanessa,
Every URL that is in the report you forwarded is on the blog, so it definitely looks like the noindex tag on the blog is the reason for the drop in crawl errors and warnings. If you prefer that we begin crawling the blog again, you can have that tag removed, but the tag also means that the search engines aren't indexing those pages or finding those errors any longer either.
Let me know if you have any other questions.
Chiaryn
-
RE: What are the factors that have an impact on the google search results?
Hey Sauspiel,
Thanks for writing in and sorry that your rankings are looking off in the Web App. I'm not sure why you would be getting different results in the Web App rankings than you would be getting when you run the searches yourself; it could be a lot of things. The search engines - Google, Yahoo! & Bing - all maintain multiple indices that run across multiple datacenters. This means if you query different datacenters, you can get somewhat different ranking results. Other elements, such as personalization, geography and search history (even if you're logged out of your Google/search engine account) can also influence ranking positions. Hence, there can be a lot of variation in what different people, searching from different computers/locations might see in the rankings.
Our solution has been to run searches from a variety of IP addresses and IP blocks using non-personalized, search history/location-agnostic requests. In our experience, these have provided the most accurate results, showing what most users see, though we know there's still quite a bit of room for fluctuation.
In the meantime, I can try to see if it's a bug on our side if you can let me know...
1. The campaign that is having this issue
2. The keyword and URL combination that is having discrepancies
3. The ranking that you're getting in Google versus what the Web App tells you.
If you prefer to provide this information privately, you can email us at help@moz.com.
Chiaryn
Help Team Ninja
-
RE: Crawl Diagnostics - Historical Summary
Hi Dean,
Thanks for writing in with a great question. Unfortunately, we don't have a way to export the graph data for your historical crawls. When we complete a crawl, we add the data point to the graph, but when the next crawl completes, we replace the old crawl information. The data point remains on the graph, but we no longer have the exact numbers for the errors, warnings, or notices stored in our system. I really apologize for the inconvenience this causes.
It would definitely be helpful if we retained this information in more detail in the future, so I would recommend submitting a request to our Feature Request forum. Here's the forum we use to collect requests:
http://seomoz.zendesk.com/forums/293194-seomoz-pro-feature-requests
You can add your request for an exportable history of the crawl diagnostics and vote for other features you'd like to see, both of which help our product team in deciding what to build next. Add your request there and hopefully we'll see it come to fruition sometime down the line.
I'm sorry that we don't have a better solution for you at this time. Please let me know if I can help you with anything else.
Chiaryn
-
RE: FollowerWonk: How long have I been following someone?
Hey David,
Great question! Unfortunately, there isn't a way to pull this data through Followerwonk. I'm sorry about that!
I would recommend submitting a request for this data to our Feature Request forum, but we would only be able to track this information for accounts that are already being tracked in Followerwonk at the time one user starts following the other. So, if someone started following you before you were tracking your Twitter account in Followerwonk, we wouldn't be able to tell you when they started following you.
Here's the forum we use to collect feature ideas:
http://seomoz.zendesk.com/forums/293194-seomoz-pro-feature-requests
I hope this helps. Please let me know if you have any other questions.
Chiaryn
Help Team Ninja
-
RE: Link report that is broken down by C Block?
Hi There,
Great question! Our index currently only has the ability to show which links are from the same C-block as the target URL and which links are from different C-blocks. So, for example, you could see that the links to seomoz.org that come from rogermozbot.com, moz.com, mozcast.com, and a few others all come from the same C-block as seomoz.org, and any other link would be from a different C-block, but I'm afraid we don't keep the data to break out the links from external C-blocks by C-block or IP. I'm really sorry about that!
I would recommend submitting a feature request for us to keep this data in the future and make it sortable by C-block. Here's the feature request forum we use to collect ideas:
http://seomoz.zendesk.com/forums/293194-seomoz-pro-feature-requests
I'm not familiar with another tool that currently provides the data you're looking for, so it might be helpful to make this a discussion question, as you might get more responses from other users.
I hope this helps.
Chiaryn