Download all GSC crawl errors: Possible today?
-
Hey guys:
I tried to download all the crawl data from Google Search Console using the API and solutions like this one: https://github.com/eyecatchup/php-webmaster-tools-downloads but it seems it's no longer working (or I did something wrong - I just get a blank page when running the PHP file after some load time)... The last time I needed to download more than 1,000 URLs was a long time ago, so I haven't tried this method since then.
Is there any other solution using the API to grab all the crawl errors, or is this no longer possible today?
Thanks!
-
Hi Antonio,
Not sure which language you prefer - but you can find some sample code here: https://developers.google.com/webmaster-tools/v3/samples - I tried the Python example, which was quite well documented inside the code; I guess it's the same for the other languages. If I have some time I could give it a try - but it won't be before the end of next week (and it would be based on Python).
Dirk
-
Thanks Dirk. At the moment I couldn't find any alternative, so maybe it would be a good idea to get my hands on this.
If anyone else has solved this, it would be great if they could share the solution with us.
-
The script worked for the previous version of the API - it won't work on the current version.
You could try searching to check if somebody else has created the same thing for the new API - or build something yourself - the API is quite well documented, so it shouldn't be too difficult to do. I built a Python script for the Search Analytics part in less than a day (without previous knowledge of Python), so it's certainly feasible.
rgds
Dirk
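-
For what it's worth, here is a minimal, untested sketch of what such a script might look like against the v3 Webmaster Tools API with google-api-python-client. The endpoint, category names, and field names are taken from Google's v3 API docs; it assumes you have already run the OAuth flow from Google's Python sample, which stores credentials in a local file (the 'webmasters.dat' filename here is just an example):

import httplib2
from googleapiclient.discovery import build
from oauth2client.file import Storage

SITE_URL = 'http://www.example.com/'  # hypothetical - use your verified site

# Load previously stored OAuth credentials and build an authorized service.
storage = Storage('webmasters.dat')
credentials = storage.get()
http = credentials.authorize(httplib2.Http())
service = build('webmasters', 'v3', http=http)

# Crawl-error categories as documented for the v3 API.
CATEGORIES = ['notFound', 'serverError', 'soft404', 'roboted', 'notFollowed',
              'authPermissions', 'manyToOneRedirect', 'flashContent', 'other']

for category in CATEGORIES:
    response = service.urlcrawlerrorssamples().list(
        siteUrl=SITE_URL, category=category, platform='web').execute()
    for sample in response.get('urlCrawlErrorSample', []):
        # Each sample carries the URL plus crawl metadata.
        print(category, sample['pageUrl'], sample.get('last_crawled'))

One caveat: as far as the docs state, the samples endpoint returns at most 1,000 sample URLs per category/platform combination, so this only works around the 1,000-URL cap in the sense that you get up to 1,000 per error category rather than 1,000 in total.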
Related Questions
-
Possible to expand organic reach in multiple countries/markets without localized content?
Hi everyone, I was recently hired as Content Lead for a SaaS company. We are based in Germany with plans to expand into the UK, Ireland, Spain, and the Netherlands. All of our website content is entirely in English and we don't have plans to localize content for any of the new markets - at least not yet.
One of my responsibilities will be to expand our organic reach, mostly through SEO content. Though I'm comfortable with the fundamentals of SEO, I'm no expert and I certainly don't have experience with international SEO. I consulted a couple of resources, like this guide to international SEO from Moz and this video from Semrush. In a nutshell, this is what I gather: if you want to expand organic reach in foreign countries/markets, you need to 1) decide what kind of domain you want to use and then implement the necessary technical configurations, and 2) create localized content in the target market's language. As I mentioned, we won't be localizing any content at first. My question, then, is: can we go about creating content in English and hope to gain any kind of meaningful organic exposure in non-English-speaking markets? If so, what's the best approach? I apologize in advance if any of this isn't clear or if the answer is super obvious. Happy to provide further details upon request. Thanks in advance for any help that can be offered!
Intermediate & Advanced SEO | localyze_mason
-
News Errors In Google Search Console
Years ago, a site I'm working on was publishing news as one form of content on the site. Since then, it has stopped publishing news, but it still has a Q&A forum, blogs, articles... all kinds of stuff. Now it triggers "News Errors" in GWT under crawl errors. These errors are "Article disproportionately short", "Article fragmented" on some Q&A forum pages, "Article too long" on some longer Q&A forum pages, and "No sentences found". Since there are thousands of these forum pages and the problem seems to be a news critique, I'm wondering what I should do about it. It seems to be holding these non-news pages to a news standard: https://support.google.com/news/publisher/answer/40787?hl=en For instance, is there a way, and would it be a good idea, to get the hell out of Google News, since we don't publish news anymore? Would there be possible negatives worth considering? What's baffling is that these are not designated news URLs. The ones we used to have were /news/title-of-the-story, per https://support.google.com/news/publisher/answer/2481373?hl=en&ref_topic=2481296 Or does this really not matter, and should I just blow it off as a problem? The weird thing is that we recently went from http to https, and the Google News interface still has us as http and gives the option to add https, which I am reluctant to do since we aren't really in the news business anymore. What do you think I should do? Thanks!
Intermediate & Advanced SEO | 94501
-
How to find all 404 dead links - Webmaster Tools only allows 1000 to be downloaded...
Hi Guys, I have a question... I am currently working on a website that was hit by a spam attack. The website was hacked and thousands of adult pages were created on the WordPress site. The hosting company cleared all of the dubious files, but this has left thousands of dead 404 pages. We want to fix the dead pages, but Google Webmaster Tools only shows, and allows you to download, 1000 of them. There are a lot more than 1000... does anyone know of any good tools that let you identify all 404 pages? Thanks, Duncan
Intermediate & Advanced SEO | CayenneRed89
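If the URLs can be recovered from somewhere else - server logs, an old XML sitemap, or a fresh crawl - one simple approach is to check each URL's status code yourself. A minimal sketch in Python, assuming a hypothetical urls.txt with one URL per line and the requests library installed (pip install requests):

import requests

# Read the candidate URLs, skipping blank lines.
with open('urls.txt') as f:
    urls = [line.strip() for line in f if line.strip()]

dead = []
for url in urls:
    try:
        # HEAD keeps the check lightweight; some servers mishandle HEAD,
        # so switch to requests.get if the results look wrong.
        response = requests.head(url, allow_redirects=True, timeout=10)
        if response.status_code == 404:
            dead.append(url)
    except requests.RequestException:
        dead.append(url)  # treat unreachable URLs as dead as well

for url in dead:
    print(url)

Dedicated crawlers like Screaming Frog will do the same job with less scripting, but a script like this has no URL cap.
-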
How does Tripadvisor ensure all their user reviews get crawled?
Tripadvisor has a LOT of user-generated content. Searching for a random hotel always seems to return a paginated list of 90+ pages. However, once the first page is clicked and "#REVIEWS" is appended to the URL, the URL never changes with any subsequent clicks of the paginated links. How do they ensure that all this review content gets crawled? Thanks, linklater
Intermediate & Advanced SEO | linklater
-
Crawl budget
I am a believer in this concept: showing Google fewer pages will increase their importance. Here is my question: I manage a website with millions of pages and high organic traffic (lower than before). I do believe that too many pages are being crawled. There are pages that I do not need Google to crawl and follow. Noindex,follow does not save on the mentioned crawl budget, and deleting those pages is not possible. Any advice will be appreciated. If I disallow those pages, I am missing out on pages that help my important pages.
Intermediate & Advanced SEO | ciznerguy
-
Can't crawl website with Screaming Frog... what is wrong?
Hello all - I've just been trying to crawl a site with Screaming Frog and can't get beyond the homepage - I have done the usual stuff (turned off JS and so on) and there are no problems there with nav and so on - the site's other pages have been indexed in Google, btw. Now I'm wondering whether there's a problem with this robots.txt file, which I think may be auto-generated by Joomla (I'm not familiar with Joomla...) - are there any issues here? [just checked... and there aren't!]
If the Joomla site is installed within a folder such as at e.g. www.example.com/joomla/ the robots.txt file MUST be moved to the site root at e.g. www.example.com/robots.txt AND the joomla folder name MUST be prefixed to the disallowed path, e.g. the Disallow rule for the /administrator/ folder MUST be changed to read Disallow: /joomla/administrator/
For more information about the robots.txt standard, see: http://www.robotstxt.org/orig.html
For syntax checking, see: http://tool.motoricerca.info/robots-checker.phtml
User-agent: *
Disallow: /administrator/
Disallow: /bin/
Disallow: /cache/
Disallow: /cli/
Disallow: /components/
Disallow: /includes/
Disallow: /installation/
Disallow: /language/
Disallow: /layouts/
Disallow: /libraries/
Disallow: /logs/
Disallow: /modules/
Disallow: /plugins/
Disallow: /tmp/
Intermediate & Advanced SEO | McTaggart
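One quick way to sanity-check whether a robots.txt like this actually blocks a given crawler is Python's standard-library robotparser. A small sketch - the user-agent token below is an assumption, so substitute whatever your crawler actually sends:

from urllib import robotparser

ROBOTS_URL = 'http://www.example.com/robots.txt'  # hypothetical site
TEST_URL = 'http://www.example.com/some-page/'    # a page that won't crawl
USER_AGENT = 'Screaming Frog SEO Spider'  # assumed token - verify in the tool

rp = robotparser.RobotFileParser()
rp.set_url(ROBOTS_URL)
rp.read()  # fetch and parse the live robots.txt
print(rp.can_fetch(USER_AGENT, TEST_URL))  # False means robots.txt blocks it

That said, the Disallow list above only covers Joomla system folders, so unless the content pages live under one of those paths it shouldn't stop a crawler at the homepage.
-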
Crawling issue
Hello, I am working on a three-week-old new Magento website. In GWT, under Index Status > Advanced, I can only see 1 crawl on the 4th day after launching, and I don't see any numbers for indexed or blocked status.
| Total indexed | Ever crawled | Blocked by robots | Removed |
| 0 | 1 | 0 | 0 |
I can see the traffic in Google Analytics, I can see the website in the SERPs when I search for some of the keywords, and I can see the links appear on Google, but I don't see any numbers in GWT. As far as I can check, there is no 'noindex' or robots block issue, but Google doesn't crawl the website for some reason. Any ideas why I cannot see any numbers for indexed or crawled status on GWT? Thanks, Seda
Intermediate & Advanced SEO | sedamiran
-
How to Set Custom Crawl Rate in Google Webmaster Tools?
This is a really silly question about setting a custom crawl rate in Google Webmaster Tools. Anyone can find that section under the Settings tab, but I'm having trouble deciding what numbers to put in the 'requests per second' and 'seconds between requests' fields. I want to set a custom crawl rate for my eCommerce website. I checked my Google Webmaster Tools and found it as shown in the attachment (6233755578_33ce83bb71_b.jpg). So, can I use this facility to improve my crawling?
Intermediate & Advanced SEO | CommercePundit