I can't crawl the archive of this website with Screaming Frog
-
Hi
I'm trying to crawl this website (http://zeri.info/) with Screaming Frog, but because of some technical issue with their site (I can't find what's causing it) I'm only able to crawl the first page of each category (e.g. http://zeri.info/sport/). From there it goes on to crawl each page of their archive (hundreds of thousands of pages), but it won't crawl the links inside those pages.
Thanks a lot!
-
I think the issue comes from the way you handle the pagination and/or the way you render archived pages.
Example: the first archive page of Aktuale: http://zeri.info/arkiva/?formkey=7301c1be1634ffedb1c3780e5063819b6ec19157&acid=aktuale
Clicking on page 2 adds the date:
http://zeri.info/arkiva/?from=2016-06-01&until=2016-06-16&acid=aktuale&formkey=cc0a40ca389eb511b1369a9aa9da915826d6ca44&faqe=2#archive-results => I assume that you're only listing the articles published from June 1st until today.
If I check all the different sections and the number of articles listed in each archive, I get approximately 1,200 pages; add some additional pages linked from these pages and you get to the 2K pages you mentioned.
There seems to be no way to reach the previously published content without executing a search, which Screaming Frog can't do. It's quite possible that this is causing issues for Googlebot as well, so I would try to fix this.
If you really want to crawl the full site in the meantime, add another rule in URL Rewriting, this time selecting 'Regex Replace':
Regex: from=2016-06-01
Replace: from=2010-01-01 (replace with the site's earliest publishing date)
This way the system will call the URL http://zeri.info/arkiva/?from=2010-01-01&until=2016-06-16&acid=kultura&formkey=5932742bd5dd77799524ba31b94928810908fc07&faqe=2 rather than the original one, listing all the articles instead of only the June articles.
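As a rough illustration of what that Regex Replace rule does (the pattern, replacement, and example URL are taken from this thread; Screaming Frog applies the rule internally, this is just a sketch of the same substitution):

```python
import re

def rewrite_from_date(url: str) -> str:
    """Mimic the Regex Replace rule: widen the 'from' date so the
    archive lists all articles, not only the most recent ones."""
    return re.sub(r"from=2016-06-01", "from=2010-01-01", url)

url = ("http://zeri.info/arkiva/?from=2016-06-01&until=2016-06-16"
       "&acid=aktuale&formkey=cc0a40ca389eb511b1369a9aa9da915826d6ca44&faqe=2")
print(rewrite_from_date(url))
# The 'from' parameter now points at the earliest publishing date.
```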
Hope this helps.
Dirk
-
I can't make it work. After removing the 'formkey' parameter I was able to crawl 1.7K pages and it stopped there. The site has more than 400K pages, so something must be wrong.
I want to crawl only the root domain without subdomains, and all I can crawl is around 2K pages.
Do you have any idea what might be happening?
-
Great, it worked. Just a small note: if Screaming Frog is getting confused by all these parameters, it could well be that Googlebot (while more sophisticated) is also having these issues. You could consider excluding the formkey parameter in Search Console (Crawl > URL Parameters).
Dirk
-
Dirk, thanks a lot.
I just added "formkey" as a parameter to be removed and it seems to be working. I've crawled 1K pages so far and I'm going to monitor how it goes.
The site has more than 400K pages, so crawling them all will take time (and I'm going to have to crawl each section so I can create sitemaps for them).
Thanks again
Gjergj
-
In the 'URL Rewriting' menu you can simply add the parameters the site should ignore (like date, formkey, etc.). I removed the formkey parameter and checked the archive pages in Screaming Frog.
It is clearly able to detect all the internal links on the page, so it will crawl them.
How are you certain that the pages below are not crawled? Could you give a specific example of a page that should be crawled but isn't?
Dirk
-
I've tried changing the settings to respect noindex & canonical. It stops crawling the archive pages, but it still won't crawl the links inside those pages. (I've added NOINDEX, FOLLOW to all archive pagination pages.)
What do you mean by rewriting the URL to ignore the formkey? How do you think it should be done?
Gjergji
-
I think Screaming Frog is going nuts on the formkey value in the URL, which constantly changes as you move between pages.
Could you modify the spider settings to respect noindex & respect canonical? It looks like this solves the issue.
Alternatively, you could rewrite the URL to ignore the formkey (remove the parameter).
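To make the "remove parameter" idea concrete: the goal is that two URLs differing only in their formkey are treated as the same page. Screaming Frog does this internally via URL Rewriting > Remove Parameters; a minimal Python sketch of the same normalization (example URL taken from this thread):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def strip_param(url: str, param: str = "formkey") -> str:
    """Drop one query-string parameter, keeping all others in order."""
    parts = urlsplit(url)
    query = [(k, v) for k, v in parse_qsl(parts.query) if k != param]
    return urlunsplit(parts._replace(query=urlencode(query)))

url = ("http://zeri.info/arkiva/?from=2016-06-01&until=2016-06-16"
       "&acid=aktuale&formkey=cc0a40ca389eb511b1369a9aa9da915826d6ca44&faqe=2")
print(strip_param(url))
# Same archive page, but with a stable URL regardless of the session formkey.
```

Once the formkey is stripped, every visit to an archive page produces the same canonical URL, so the crawler stops seeing each pagination click as a brand-new page.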
Dirk
-
Hi Logan
I've tried going back to the default configuration but it didn't help. Still, I don't believe Screaming Frog is to blame; I think there is something wrong with the way the site has been developed (they are using a custom CMS), but I can't find the reason why this is happening. As soon as I find the cause, I can ask the developers of the site to make the necessary changes.
Thanks a lot.
-
Hi Dirk
Thanks a lot for replying. The issue is that Screaming Frog is crawling the archive pages (like these examples) but it won't crawl the articles listed inside those pages.
The hierarchy of the site goes like this:
Homepage
- Categories (with about 20 articles in them)
- Archive of that category (with all the remaining articles, which in this case means thousands, since they are a news website)
Screaming Frog will crawl the homepage and the categories, but once it reaches the archive it won't crawl the articles inside it; it will only crawl the pagination pages of that archive.
Thanks again.
-
Try going to File > Default Config > Clear Default Configuration. This happens to me sometimes as well, since I've edited settings over time. Clearing it out and going back to the default settings is usually quicker than clicking through the settings to identify which one is causing the problem.
-
Did you put in some special filters? I just tried to crawl the site and it seems to work just fine.