1 page crawled ... and other errors
-
1. Why is only one (1) page crawled every second time you crawl my site?
2. Why does your bot not obey the rules specified in robots.txt?
3. Why does your site constantly lose its connection to my Facebook account/page? Whenever I want to compare performance, I need to re-authorize, and therefore cannot see any data until the next time. And the next time, I need to re-authorize again...
4. Why can't I add a competitor's Twitter account? Whatever I type, I get an "uh oh, account cannot be tracked" message, and if I randomly succeed, the added account never shows any data.
It has been like this for ages. I have reported these issues over and over again.
We are part of a large Scandinavian company represented in Denmark, Sweden, Norway, and Finland. The companies are also part of a larger worldwide group spanning England, Ireland, Continental Europe, and Northern Europe. I count at least 10 accounts on SEOmoz.org.
We, the Northern European branch (4 accounts), are now reconsidering our membership at SEOmoz.org. We have recently expanded our efforts and established an SEO community in the larger-scale business spanning all our countries. In this community, too, we are now discussing the quality of your services. We'll be meeting next on the 27th-28th of June in London.
I hope I can bring back some answers that clarify the problems we have seen here on SEOmoz.org. As I have written before: I love your setup and your tools - when they work. Regrettably, that is only occasionally the case!
-
Hi there!
Thanks for your patience! If you need a list of all your keywords and their labels, there are a few ways to accomplish that:
1. From our rankings dashboard, look to the right of the screen and find the drop-down menu, then select "Full rankings report to CSV" (http://screencast.com/t/iVMLE1UZvcTk). Hit Export, and we will compile a CSV for you with your latest list of keywords plus all the data associated with them.
2. Using the same menu, you can also export your entire keyword history to a CSV. Choose the last option in the drop-down, "Entire keyword ranking history to CSV," and hit Export; our system will take a few hours and produce every keyword you have ever tracked since the beginning of your campaign.
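If you'd rather slice the exported file yourself, a few lines of Python can group keywords by their labels. This is only a sketch: the column names (`Keyword`, `Label`, `Rank`) and the sample rows are assumptions for illustration, since the headers in the real export may differ.

```python
import csv
from collections import defaultdict
from io import StringIO

# Hypothetical sample of an exported rankings CSV; the real export's
# column names may differ, so adjust the keys below to match your file.
SAMPLE = """\
Keyword,Label,Rank
blue widgets,products,4
buy widgets online,products,12
widget repair,services,7
"""

# Group keywords by label so each label's keyword list is easy to review.
by_label = defaultdict(list)
for row in csv.DictReader(StringIO(SAMPLE)):
    by_label[row["Label"]].append(row["Keyword"])

for label, keywords in sorted(by_label.items()):
    print(f"{label}: {', '.join(keywords)}")
```

For a real export, replace `StringIO(SAMPLE)` with `open("rankings.csv", newline="")`.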
Hope that was helpful! Please let me know if you have any more questions about our tool.
~Peter
SEOmoz Help Team
-
And Joel, one last thing: could you be so kind as to send me all my keywords and labels in a spreadsheet? Thomas was sure you would: http://www.seomoz.org/q/export-keywords-and-labels
That would be very nice. Thanks!
-
Hi Joel
Sorry for the exaggerated timeframe. I must have gotten carried away. If it only happens a couple of times a year, I surely have no reason to complain... You seem to be a very experienced support rep. They must really appreciate you at SEOmoz.org.
The issues we have discussed before once again turn out to be: It's not SEOmoz, it's Facebook. It's not SEOmoz, it's Twitter. It's not Rogerbot, it's Googlebot. Strange that Googlebot obeys our rules while Rogerbot is apparently more delicate than Googlebot. Is there a reason for this? Perhaps Google could use some help tweaking their bot?
I am reminded of a recent episode of Gordon Ramsay's Kitchen Nightmares. I now have to choose between watching that particular episode and reading your response out loud at our next SEO meeting. If you don't watch Gordon Ramsay's Kitchen Nightmares and have no idea which episode I'm referring to, here's a little help: http://eater.com/archives/2013/05/13/gordon-ramsay-kitchen-nightmares-amys-baking-company.php
Sorry for the lack of style and line breaks in my reply. It must be an error on my side, writing from my iPad. Don't worry. It's not SEOmoz. It's Apple.
-
Hey Ture,
I'll go ahead and address your questions one by one.
I took a look at your campaign, and it is by no means returning a one-page crawl every second time. You do, however, seem to have an issue with your server that causes it to give this response (you can find it in your CSV file) every 3-5 months:
Connection was refused by other side: 111: Connection refused.
Unfortunately, I can't tell you what causes it, as I don't do web or server support. You'll likely need to speak with your admin to see what can be done to avoid it.
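As a quick sanity check before involving your admin, a short script can reproduce the same refusal from the outside. This is a sketch, not part of the Moz toolset; `example.com` stands in for your own hostname, and the port and timeout values are illustrative defaults.

```python
import socket

def check_port(host: str, port: int = 443, timeout: float = 5.0) -> str:
    """Attempt a TCP connection and classify the server's response."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "ok"
    except ConnectionRefusedError:
        # This is the same "Connection refused" (errno 111 on Linux) the
        # crawler reports: the host answered, but nothing accepted the port.
        return "refused"
    except OSError:
        # DNS failure, timeout, or a firewall silently dropping packets.
        return "unreachable"

print(check_port("example.com", 443))
```

Running this from a machine outside your network around the times the crawl fails can show whether the refusal is visible to everyone or only to the crawler.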
As far as robots.txt goes, RogerBot definitely does obey properly formatted robots.txt directives. If you feel he's doing otherwise, please email help [at] seomoz.org with specific details.
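If you want to verify what a well-behaved crawler would conclude from your file, Python's standard `urllib.robotparser` applies the usual robots.txt matching rules. The file contents below are a made-up example; substitute your site's actual directives and user-agent names.

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt; replace with your site's real directives.
ROBOTS_TXT = """\
User-agent: rogerbot
Disallow: /private/
Crawl-delay: 10

User-agent: *
Disallow:
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# The "rogerbot" group applies to RogerBot, so /private/ is off-limits
# while everything else stays crawlable.
print(parser.can_fetch("rogerbot", "https://example.com/private/page"))
print(parser.can_fetch("rogerbot", "https://example.com/public/page"))
print(parser.crawl_delay("rogerbot"))
```

If the parser's answers disagree with what you see the bot doing, that mismatch is exactly the kind of specific detail worth including in an email to support.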
The Facebook re-auth issue is Facebook expiring your auth token and asking you to renew it. It is not us; we'd obviously much rather they not expire the token. You sometimes see the same thing with Google, especially if you have a large number of sites.
I remember we talked about the issue with your Twitter handles before. It was resolved at the time: Twitter was reporting the account as invalid in their API, so we reached out to Twitter and had them fix that for you. If you're seeing this again, email us the details so we can look into it.
Remember, Q&A is meant for community-based questions and is not really a forum for technical support, as we can't discuss details of your account here. In the future, please email technical support questions directly to help [at] seomoz.org.
Cheers,
Joel.