1 page crawled ... and other errors
-
1. Why is only one (1) page crawled every second time you crawl my site?
2. Why does your bot not obey the rules specified in the robots.txt?
3. Why does your site constantly lose the connection to my Facebook account/page? This means that whenever I want to compare performance I need to re-authorize, and therefore cannot see any data until next time. Next time I also need to re-authorize...
4. Why can't I add a competitor's Twitter account? Whatever I type, I get an "uh oh, account cannot be tracked" error, and if I randomly succeed, the account added never shows up with any data.
It has been like this for ages. I have reported these issues over and over again.
We are part of a large Scandinavian company represented in Denmark, Sweden, Norway and Finland. The companies are also part of a larger worldwide company spanning England, Ireland, Continental Europe and Northern Europe. I count at least 10 accounts on Seomoz.org.
We, the Northern Europe group (4 accounts), are now reconsidering our membership at seomoz.org. We have recently expanded our efforts and established an SEO community in the larger-scale business spanning all our countries. In this community we are now discussing the quality of your services. We'll be meeting next on the 27th-28th of June in London.
I hope I can bring some answers that clarify the problems we have seen here on seomoz.org. As I have written before: I love your setup and your tools - when they work. Regrettably, that is only occasionally the case!
-
Hi there!
Thanks for your patience. If you need a list of all your keywords and their labels, there are a few ways to accomplish that:
1. From our rankings dashboard, look to the right of the screen and find the drop-down menu. From there, select "Full rankings report to CSV": http://screencast.com/t/iVMLE1UZvcTk . After that, simply hit Export and we will compile a CSV for you with your latest list of keywords plus all the data associated with them.
2. Using the same method as above, you can also export your entire keyword ranking history into a CSV. Simply choose the last option in the drop-down menu, labeled "Entire keyword ranking history to CSV," and hit Export. Our system will then take a few hours and produce every keyword you have ever tracked since the beginning of your campaign.
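Once the export finishes, the keyword/label pairs can be pulled out of the CSV with a few lines of standard-library Python. This is only a sketch: the column names "Keyword" and "Labels" are assumptions, so check the header row of your actual export file before running anything against it.

```python
# Sketch: extract keyword/label pairs from an exported rankings CSV.
# The column names "Keyword" and "Labels" are assumed, not confirmed:
# adjust them to match the header row of your real export.
import csv
import io

# Stand-in for open("rankings_export.csv"), using a tiny fabricated sample:
sample = io.StringIO(
    "Keyword,Labels,Rank\n"
    "blue widgets,brand,3\n"
    "widget repair,services,12\n"
)

# DictReader maps each row to the header names, so column order doesn't matter.
keywords = [(row["Keyword"], row["Labels"]) for row in csv.DictReader(sample)]
print(keywords)  # [('blue widgets', 'brand'), ('widget repair', 'services')]
```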
Hope that was helpful. Please let me know if you have any more questions about our tools.
~Peter
SEOmoz Help Team.
-
And Joel, one last thing: could you be so kind as to send me all my keywords and labels in a spreadsheet? Thomas was sure you would: http://www.seomoz.org/q/export-keywords-and-labels
That would be really very nice. Thanks!
-
Hi Joel
Sorry for the exaggerated timeframe. I must have been carried away. If it only happens a couple of times a year, I surely have no reason to complain... You seem to be a very experienced support agent. They must really appreciate you at Seomoz.org.
The issues we have discussed before once again turn out to be: It's not Seomoz, it's Facebook. It's not Seomoz, it's Twitter. It's not Rogerbot, it's Googlebot.
Strange that Googlebot obeys our rules. But Rogerbot is apparently more delicate than Googlebot. Is there a reason for this? Perhaps Google could use some help tweaking their bot?
I am reminded of a recent episode of Gordon Ramsay's Kitchen Nightmares. I now have to choose between watching that particular episode or reading your response out loud at our next SEO meeting. If you don't watch Gordon Ramsay's Kitchen Nightmares and have no idea which episode I'm referring to, here's a little help: http://eater.com/archives/2013/05/13/gordon-ramsay-kitchen-nightmares-amys-baking-company.php
Sorry for the lack of style and line breaks in my reply. It must be an error on my side, from writing on my iPad. Don't worry. It's not Seomoz. It's Apple.
-
Hey Ture,
I'll go ahead and address your questions one by one.
I took a look at your campaign, and it is by no means giving a one-page crawl every second time. You do, however, seem to have an issue with your server that causes it to return this response (you can find it in your CSV file) every 3-5 months:
Connection was refused by other side: 111: Connection refused.
Unfortunately, I can't tell you what causes it, as I don't do web or server support. You'll likely need to speak with your admin to see what can be done to avoid it.
As far as robots.txt goes, RogerBot definitely does obey properly formatted robots.txt directives. If you feel that he's doing otherwise, please email help [at] seomoz.org with specific details.
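For anyone who wants to sanity-check their own directives, the matching behavior can be exercised with Python's standard-library urllib.robotparser. The rules below are a made-up example, not anyone's actual robots.txt; the point is that a crawler matching a named User-agent group follows only that group's rules, not the `*` group's.

```python
# Sketch: check what a set of robots.txt directives permits for a given
# user agent, using Python's standard-library parser. The directives here
# are a hypothetical example, not a real site's robots.txt.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: rogerbot
Disallow: /private/

User-agent: *
Disallow: /tmp/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# rogerbot matches its own User-agent group, so only that group's rules apply:
print(parser.can_fetch("rogerbot", "http://example.com/private/page"))  # False
print(parser.can_fetch("rogerbot", "http://example.com/tmp/file"))      # True
```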
The re-auth issue with Facebook is Facebook expiring your auth token and requiring you to renew it. It is not us; we'd obviously much rather they not expire the token. You sometimes see the same thing with Google, especially if you have a large number of sites.
I remember we talked about this issue with your Twitter handles before. It was resolved at the time: Twitter was reporting the account as invalid in their API, so we reached out to Twitter and had them fix it for you. If you're seeing this again, email us the details so we can take a look.
Remember, Q&A is meant for community-based questions and is not really a forum for seeking technical support, as we can't discuss any details of your account here. In the future, please email technical support questions directly to help [at] seomoz.org.
Cheers,
Joel.
Related Questions
-
API for On Page tool
I'm looking for a tool similar to On Page Grader (Moz) or Focus Keyword (Yoast) with an API. We are building out our internal CRM system. Even though none of these tools can replace manual on-page analysis, it will be used as a metric and to catch human mistakes.
-
How to Avoid Duplicate Page Content errors when using Wordpress Categories & Tags?
I get a lot of duplicate page errors in my crawl diagnostics reports from 'categories' and 'tags' on my WordPress sites. The post itself is one link, and then its content is 'duplicated' on the 'category' or 'tag' page it is added to. Should I exclude the tags and categories from my sitemap, or are these issues not that important? Thanks for your help, Stacey
-
Crawl Diagnostics Summary Problem
We added a robots.txt file to our website, and there are pages blocked by it. Why does the Crawl Diagnostics Summary page show that no pages are blocked by robots.txt?
-
In my errors I have 2 different products on the same page?
Hello, I have 2039 duplicate page errors, and most of them are 2 different products on 1 page. I haven't set it up this way in the CMS, so how has this happened? Here are 2 examples: the 1st has GHDs on the back of a different brand, and the 2nd has gift packs on the back of the same brand, 'Rockaholic'. Also, what does 'norec' mean? http://www.thehairroom.co.uk/Tigi-Rockaholic-797658/ghd-straightening-irons/norec http://www.thehairroom.co.uk/Tigi-Rockaholic-797658/tigi-bed-head-gift-packs/norec Thanks, Mark
-
Dynamic URL pages in Crawl Diagnostics
The crawl diagnostics have found errors for pages that do not exist within the site. These pages do not appear in the SERPs and are seemingly dynamic URL pages. Most of the URLs that appear are formatted http://mysite.com/keyword,%20_keyword_,%20key_word_/ , which look like dynamic URLs for potential search phrases within the site. The other popular variety has a URL format of http://mysite.com/tag/keyword/filename.xml?sort=filter , which is only generated by a filter utility on the site. These pages comprise about 90% of the 401 errors, duplicate page content/titles, overly dynamic URLs, missing meta description tags, etc. Many of the same pages appear in multiple errors/warnings/notices categories. So, why are these pages being included in the crawl test? And how do I stop it, to get a better analysis of my site via SEOmoz?
-
Crawl Diagnostic Errors
Hi there, I'm seeing a large number of errors in the SEOmoz Pro crawl results. The 404 errors are for pages that look like this: http://www.example.com/2010/07/blogpost/http:%2F%2Fwww.example.com%2F2010%2F07%2Fblogpost%2F I know that %2F represents the slashes, but I'm not sure why these addresses are being crawled. The site is a WordPress site. Has anyone seen anything like this?
-
Getting an error most of the time
Hi, I have been getting this error most of the time in Linkscape since last month: "Sorry dude, no inlinks found matching this criteria." Please advise: is this a bug? The sites I am trying to use Linkscape for had a lot of pages crawled by SEOmoz earlier. Thanks, Preet
-
Crawl complete, but nothing changed?
Hi everyone, According to my account, the crawl diagnostics were completed yesterday. However, the duplicate page titles it mentions aren't correct: the changes I implemented several days ago are not reflected in the report. When I click the duplicate page title links, the latest date in the graph is 3/26, yet it says the crawl was completed on 3/30. Does it take a few days for the reports to match what the crawl actually discovered?
-