1 page crawled ... and other errors
-
1. Why is only one (1) page crawled every other time you crawl my site?
2. Why does your bot not obey the rules specified in our robots.txt?
3. Why does your site constantly lose the connection to my Facebook account/page? Whenever I want to compare performance I need to re-authorize, and therefore cannot see any data until next time. And next time I need to re-authorize again ...
4. Why can't I add a competitor's Twitter account? Whatever I type, I get an "Uh oh, account cannot be tracked" error, and if I randomly succeed, the added account never shows up with any data.
It has been like this for ages. I have reported these issues over and over again.
We are part of a large Scandinavian company represented in Denmark, Sweden, Norway and Finland. The companies are also part of a larger worldwide company spread across England, Ireland, Continental Europe and Northern Europe. I count at least 10 accounts on SEOmoz.org.
We in Northern Europe (4 accounts) are now reconsidering our membership at SEOmoz.org. We have recently expanded our efforts and established an SEO community in the larger-scale business spanning all our countries, and in this community we are now discussing the quality of your services. We'll be meeting next on the 27th-28th of June in London.
I hope I can bring some answers that clarify the problems we have seen here on SEOmoz.org. As I have written before: I love your setup and your tools, when they work. Regrettably, that is only occasionally the case!
-
Hi there!
Thanks for your patience. If you need a list of all your keywords and their labels, there are a few ways to accomplish that:
1. From our rankings dashboard, look to the right of the screen and find the drop-down menu. From there, select "Full rankings report to CSV": http://screencast.com/t/iVMLE1UZvcTk . Then hit Export, and we will compile a CSV for you with your latest list of keywords plus all the data associated with them.
2. Using the same menu, you can also export your entire keyword history to a CSV. Choose the last option in the drop-down, "Entire keyword ranking history to CSV," and hit Export. Our system will then take a few hours and produce every keyword you have ever tracked since the beginning of your campaign.
Hope that was helpful. Please let me know if you have any more questions about our tools.
~Peter
SEOmoz Help Team. -
And Joel, one last thing: could you be so kind as to send me all my keywords and labels in a spreadsheet? Thomas was sure you would: http://www.seomoz.org/q/export-keywords-and-labels
That would be really very nice. Thanks!
-
Hi Joel
Sorry for the exaggerated timeframe. I must have gotten carried away. If it only happens a couple of times a year, I surely have no reason to complain... You seem to be a very experienced support agent. They must really appreciate you at SEOmoz.org.
The issues we have discussed before once again turn out to be: it's not SEOmoz, it's Facebook. It's not SEOmoz, it's Twitter. It's not RogerBot, it's Googlebot. Strange that Googlebot obeys our rules, while RogerBot is apparently more delicate than Googlebot. Is there a reason for this? Perhaps Google could use some help tweaking their bot?
I am reminded of a recent episode of Gordon Ramsay's Kitchen Nightmares. I now have to choose between watching that particular episode or reading your response out loud at our next SEO meeting. If you don't watch Gordon Ramsay's Kitchen Nightmares and have no idea which episode I'm referring to, here's a little help: http://eater.com/archives/2013/05/13/gordon-ramsay-kitchen-nightmares-amys-baking-company.php
Sorry for the lack of style and line breaks in my reply. It must be an error on my side, caused by writing from my iPad. Don't worry. It's not SEOmoz. It's Apple.
-
Hey Ture,
I'll go ahead and address your questions one by one.
I took a look at your campaign, and it is by no means giving a 1-page crawl every second time. You do, however, seem to have an issue with your server that causes it to give this response (you can find it in your CSV file) every 3-5 months:
Connection was refused by other side: 111: Connection refused.
Unfortunately, I could not tell you what causes it, as I don't do web or server support. You'll likely need to speak with your admin to see what can be done to avoid it.
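For what it's worth, "111: Connection refused" is the standard Linux ECONNREFUSED error: the crawler reached the machine, but nothing accepted the connection on that port (a crashed or restarting web server, or a firewall actively rejecting the request). A minimal sketch that reproduces the same error locally, purely for illustration:

```python
import socket

# Grab a port that is currently free, then close the listener so a
# connect attempt to it is guaranteed to be refused.
probe = socket.socket()
probe.bind(("127.0.0.1", 0))
free_port = probe.getsockname()[1]
probe.close()

try:
    # This is what the crawler effectively does when it opens a TCP
    # connection to your web server.
    socket.create_connection(("127.0.0.1", free_port), timeout=2)
    print("connected (unexpected)")
except ConnectionRefusedError as exc:
    # On Linux exc.errno is 111 (ECONNREFUSED), matching the crawl log.
    print(f"Connection refused (errno {exc.errno})")
```

If your server log shows the web server process down or a firewall rule firing at the times the crawl fails, that is the place to look.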
As far as robots.txt goes, RogerBot definitely does obey properly formatted robots.txt directives. If you feel he's doing otherwise, you should email help [at] seomoz.org with specific details.
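For reference, a properly formatted robots.txt block targeting RogerBot by its user-agent name looks like the sketch below; the paths are only illustrative, not from the original thread:

```text
# Rules for SEOmoz's crawler only
User-agent: rogerbot
Disallow: /private/

# Rules for every other crawler
User-agent: *
Disallow: /tmp/
```

Note that a crawler uses the most specific `User-agent` group that matches it, so rules under `User-agent: *` are ignored by RogerBot once a `User-agent: rogerbot` group exists.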
The re-auth issue with Facebook is them expiring your auth token and requiring you to renew it. It is not us; we'd obviously much rather they not expire the token. You sometimes see the same thing with Google, especially if you have a large number of sites.
I remember we talked about the issue with your Twitter handles before. It was resolved at the time: Twitter was reporting that account as invalid in their API, so we reached out to Twitter and had them fix it for you. If you're seeing this again, email us the details so we can take a look.
Remember, Q&A is meant more for community-based questions and is not really a forum for seeking technical support, as we can't discuss any details of your account here. In the future, please email technical support questions directly to help [at] seomoz.org.
Cheers,
Joel.
Related Questions
-
Why did Moz crawl our development site?
In our Moz Pro account we have one campaign set up to track our main domain. This week Moz threw up around 400 new crawl errors, 99% of which were meta noindex issues. What happened is that Moz somehow found the development/staging site and decided to crawl that. I have no idea how it was able to do this: the robots.txt is set to disallow all, and the site is password-protected. It looks like Moz ignored the robots.txt, but I still have no idea how it was able to crawl at all; it should have received a 401 Unauthorized and not gone any further. How do I a) clean this up without going through and manually ignoring each issue, and b) stop this from happening again? Thanks!
Moz Pro | | MultiTimeMachine0 -
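Worth noting in general: robots.txt is advisory, and it can't stop a tool from at least requesting URLs it has already discovered; making the whole staging host answer 401 to every request is what actually ends a crawl. A hedged nginx sketch of that setup, where the server name and password file are placeholders, not details from this question:

```nginx
server {
    server_name staging.example.com;

    # Everything on staging requires credentials, so any crawler
    # (Moz, Google, or otherwise) receives 401 Unauthorized.
    auth_basic           "Staging";
    auth_basic_user_file /etc/nginx/.htpasswd;

    # Belt and braces: even if a page leaks out, ask engines not to index it.
    add_header X-Robots-Tag "noindex, nofollow" always;
}
```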
Moz Crawl Test error
The Moz crawl test shows a blank report for my website, guitarcontrol.com. Why? Please advise.
Moz Pro | | zoe.wilson170 -
Moz is treating my pages as duplicate content, but the pages actually have different content
Attached here is a screenshot of the links flagged as duplicate content. Here are some of the links from the screenshot: http://federalland.ph/construction_updates/paseo-de-roces-as-of-october-2015 http://federalland.ph/construction_updates/sixsenses-residences-tower-2-as-of-october-2015/ http://federalland.ph/construction_updates/sixsenses-residences-tower-3-as-of-october-2015 The links I have listed here have different content, so I don't know why they are treated as duplicates.
Moz Pro | | clestcruz0 -
SEOmoz crawl: 4XX (Client Error). How do I find where the errors are?
I got eight 404 errors from the SEOmoz crawl, but the report does not say where the 404 pages are linked from (like it does for duplicate content). Or am I missing something? Thanks
Moz Pro | | PaddyDisplays0 -
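If the crawl export doesn't list referrers, one workaround is to scan your own pages for links to the broken URLs. A small self-contained sketch using only the standard library; the page contents and URLs here are made up for demonstration, and in practice you would fetch each page of your site instead:

```python
from html.parser import HTMLParser


class LinkCollector(HTMLParser):
    """Collect the href value of every <a> tag in a page."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def find_referrers(pages, broken_urls):
    """Map each broken URL to the pages whose HTML links to it.

    `pages` is a dict of {page_url: html_source}.
    """
    referrers = {url: [] for url in broken_urls}
    for page_url, html in pages.items():
        parser = LinkCollector()
        parser.feed(html)
        for link in parser.links:
            if link in referrers:
                referrers[link].append(page_url)
    return referrers


# Tiny demonstration with hypothetical URLs:
pages = {
    "http://example.com/a": '<a href="/missing-page">dead link</a>',
    "http://example.com/b": '<a href="/contact">fine</a>',
}
print(find_referrers(pages, ["/missing-page"]))
# {'/missing-page': ['http://example.com/a']}
```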
No Crawl data in dashboard
For the second straight week, I have had no crawl data in my dashboard. It seems like the crawler erased all my results in the Pro dashboard. Is there a way to manually recrawl my site, since otherwise I will have to wait another week to see if it comes back to earth? Thanks
Moz Pro | | bedwards0 -
Duplicate content pages
Crawl Diagnostics Summary shows around 15,000 duplicate content errors for one of my projects. It lists the pages and how many duplicate pages there are for each, but I have no way of seeing the duplicate page URLs for a specific page without clicking on each page link and checking manually, which is going to take forever to sort. When I export the list as CSV, the duplicate_page_content column doesn't show any data. Can anyone please advise? Thanks
Moz Pro | | nam2
-
How long is a full crawl?
It has now been over 3 days that the dashboard for one of our campaigns has shown "Next Crawl in Progress!". I am not complaining about the length... but I have to admit that SEOmoz is quite addictive, and it's quite frustrating to see that every day 🙂 Thanks
Moz Pro | | jgenesto0 -
SEOmoz crawl error questions
I just got my first SEOmoz crawl report and was shocked at all the errors it generated. I looked into it and saw 7,200 crawl errors, most of them duplicate page titles and duplicate page content. I clicked into the report and found that 97% of the errors come from one page. It has http://legendzelda.net/forums/index.php/members/page__sort_key__joined__sort_order__asc__max_results__20 http://legendzelda.net/forums/index.php/members/page__sort_key__joined__sort_order__asc__max_results__20__quickjump__A__name_box__begins__name__A__quickjump__E etc. There are 20 pages of slight variations of this link. They are all my members list, or a search of my members list, so it is not really duplicate content. How can I get these errors to go away and make sure my site is not taking a hit? The forum software I use is IPB.
Moz Pro | | NoahGlaser780
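A common fix for parameterized listing URLs like these (sort keys, result limits, quick-jump filters) is to point every variation at a single canonical URL, which most forum skins, including IPB's templates, can be edited to emit in the page head. A sketch, using the members-list URL from the question as the canonical target:

```html
<!-- In the <head> of every members-list variation -->
<link rel="canonical" href="http://legendzelda.net/forums/index.php/members/" />
```

With this in place, crawlers that honor rel=canonical treat the sorted and filtered variations as one page rather than twenty duplicates.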