I can't crawl the archive of this website with Screaming Frog
-
Hi
I'm trying to crawl this website (http://zeri.info/) with Screaming Frog, but because of some technical issue with their site (I can't find what is causing it) I'm only able to crawl the first page of each category (e.g. http://zeri.info/sport/). It then goes on to crawl each page of their archive (hundreds of thousands of pages), but it won't crawl the links inside those pages.
Thanks a lot!
-
I think the issue comes from the way you handle the pagination and/or the way you render archived pages.
Example: the first archive page of Aktuale:
http://zeri.info/arkiva/?formkey=7301c1be1634ffedb1c3780e5063819b6ec19157&acid=aktuale
Clicking on page 2 adds date parameters to the URL:
http://zeri.info/arkiva/?from=2016-06-01&until=2016-06-16&acid=aktuale&formkey=cc0a40ca389eb511b1369a9aa9da915826d6ca44&faqe=2#archive-results
I assume that you're only listing the articles published from June 1st until today.
If I check all the different sections & the number of articles listed in each archive, I get approx. 1,200 pages. Add some additional pages linked on these pages and you get to the 2K pages you mentioned.
There seems to be no way to reach the previously published content without executing a search, which Screaming Frog can't do. It's quite possible that this is causing issues for Googlebot as well, so I would try to fix this.
If you really want to crawl the full site in the meantime, add another rule under URL Rewriting, this time selecting 'Regex Replace':
Regex: from=2016-06-01
Replace: from=2010-01-01 (replace with the site's earliest publishing date)
This way, Screaming Frog will request http://zeri.info/arkiva/?from=2010-01-01&until=2016-06-16&acid=kultura&formkey=5932742bd5dd77799524ba31b94928810908fc07&faqe=2 rather than the original URL, listing all the articles instead of only the June ones.
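To make the effect concrete, here's a minimal Python sketch of the substitution that rule performs (Screaming Frog applies the rewrite internally; this snippet is only an illustration):

```python
import re

# Original page-2 URL of the 'kultura' archive (only lists June articles).
url = ("http://zeri.info/arkiva/?from=2016-06-01&until=2016-06-16"
       "&acid=kultura&formkey=5932742bd5dd77799524ba31b94928810908fc07&faqe=2")

# The same find/replace the 'Regex Replace' rule performs:
# push the 'from' date back to the earliest publishing date.
rewritten = re.sub(r"from=2016-06-01", "from=2010-01-01", url)

print(rewritten)
# -> http://zeri.info/arkiva/?from=2010-01-01&until=2016-06-16&acid=kultura&formkey=...&faqe=2
```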
Hope this helps.
Dirk
-
I can't make it work. After removing the 'formkey' parameter I was able to crawl 1.7K pages, and then it stopped there. The site has more than 400K pages, so something must be wrong.
I want to crawl only the root domain, without subdomains, and all I can crawl is around 2K pages.
Do you have any idea what might be happening?
-
Great that it worked. Just a small note: if Screaming Frog is getting confused by all these parameters, it could well be that Googlebot (while more sophisticated) is having these issues too. You could consider excluding the formkey parameter in Search Console (Crawl > URL Parameters).
Dirk
-
Dirk, thanks a lot.
I just added "formkey" to be removed as a parameter and it seems to be working. I crawled 1k pages until now and i'm going to monitor how it goes.
The site has more than 400k pages so the process to crawl them all will take time (and i'm going to have to crawl each sector so i can create sitemaps for them).
Thanks again
Gjergji
-
In the 'URL Rewriting' menu you can simply enter the parameters the crawler should ignore (like the date, formkey, etc.). I removed the formkey parameter and checked the archive pages in Screaming Frog.
It is clearly able to detect all the internal links on the page, so it will crawl them.
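For illustration, stripping a parameter normalizes the archive URLs roughly like this (a Python sketch; the strip_params helper is mine, and Screaming Frog does the equivalent internally via its 'Remove Parameters' rewrite):

```python
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

def strip_params(url, params_to_remove):
    """Drop the given query parameters from a URL, keeping the rest intact."""
    parts = urlparse(url)
    query = [(k, v) for k, v in parse_qsl(parts.query)
             if k not in params_to_remove]
    return urlunparse(parts._replace(query=urlencode(query)))

url = ("http://zeri.info/arkiva/?from=2016-06-01&until=2016-06-16"
       "&acid=aktuale&formkey=cc0a40ca389eb511b1369a9aa9da915826d6ca44&faqe=2")

# Every archive page now maps to a stable URL, so the crawler no longer
# sees each changing formkey value as a brand-new page.
print(strip_params(url, {"formkey"}))
# -> http://zeri.info/arkiva/?from=2016-06-01&until=2016-06-16&acid=aktuale&faqe=2
```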
How are you certain that the pages below are not crawled? Could you give a specific example of a page that should be crawled but isn't?
Dirk
-
I've tried changing the settings to respect noindex & canonical. It stops crawling the archive pages, but it still won't crawl the links inside those pages. (I've added NOINDEX, FOLLOW to all archive pagination pages.)
What do you mean by rewriting the URL to ignore the formkey? How do you think it should be done?
Gjergji
-
I think Screaming Frog is choking on the formkey value in the URL, which changes constantly as you move between pages.
Could you modify the spider settings to respect noindex & respect canonical? It looks like this solves the issue.
Alternatively, you could rewrite the URL to ignore the formkey (remove the parameter).
Dirk
-
Hi Logan
I've tried going back to the default configuration but it didn't help. Still, I don't believe Screaming Frog is to blame; I think there is something wrong with the way the site has been developed (they are using a custom CMS), but I can't find the reason why this is happening. As soon as I find the cause, I can ask the guys who developed the site to make the necessary changes.
Thanks a lot.
-
Hi Dirk
Thanks a lot for replying. The issue is that Screaming Frog crawls the archive pages (like these examples) but won't crawl the articles listed inside those pages.
The hierarchy of the site goes like this:
Homepage
- Categories (with about 20 articles each)
- Archive of that category (with all the remaining articles, which in this case means thousands, since they are a news website)

Screaming Frog will crawl the homepage and the categories, but once it reaches the archive it won't crawl the articles inside it; instead it will only crawl the pagination pages of that archive.
Thanks again.
-
Try going to File > Default Config > Clear Default Configuration. This happens to me sometimes as well, as I've edited settings over time. Clearing it out and going back to the default settings is usually quicker than clicking through the settings to identify which one is causing the problem.
-
Did you put in some special filters? I just tried to crawl the site and it seems to work just fine.