Exclude status codes in Screaming Frog
-
I have a very large ecommerce site I'm trying to spider using Screaming Frog. The problem is the crawl keeps hanging, even though I have turned off the high memory safeguard under Configuration.
The site has approximately 190,000 pages according to the results of a Google site: command.
- The site architecture is almost completely flat. Limiting the crawl by depth is a possibility, but it would take quite a bit of manual labor, as there are literally hundreds of directories one level below the root.
- There are many, many duplicate pages. I've been able to exclude some of them from being crawled using the exclude configuration parameters.
- There are thousands of redirects. I haven't been able to exclude those from the spider because they don't have a distinguishing character string in their URLs.
Does anyone know how to exclude files using status codes? I know that would help.
If it helps, the site is kodylighting.com.
Thanks in advance for any guidance you can provide.
-
Thanks for your help. It really was just the fact that the exclude rules have to be set before the crawl begins and can't be changed mid-crawl. Hopefully this changes, because sometimes during a crawl you find things you want to exclude that you didn't know existed beforehand.
-
Are you sure it's just on Mac? Have you tried on PC? Do you have any other rules in Include, or perhaps a conflicting rule in Exclude? Try running a single exclude rule, and also test it on another, smaller site.
Also, from support, if it's failing on all fronts:
- Mac version: please make sure you have the most up-to-date version of the OS, which will update Java.
- Please uninstall, then reinstall the spider ensuring you are using the latest version and try again.
To be sure - http://www.youtube.com/watch?v=eOQ1DC0CBNs
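Also, since the exclude rules are regular expressions matched against the full URL (per the user guide), a single rule can be sanity-checked locally before a crawl. A minimal sketch in Python, with made-up patterns and URLs:

import re

# Hypothetical exclude patterns - one regex per line in the Exclude configuration
patterns = [r".*/checkout/.*", r".*\?sessionid=.*"]

# Sample URLs to test the rules against before pasting them into the spider
test_urls = [
    "http://www.example.com/checkout/basket",
    "http://www.example.com/pendant-lights/",
    "http://www.example.com/pendant-lights/?sessionid=123",
]

for url in test_urls:
    excluded = any(re.fullmatch(p, url) for p in patterns)
    print("excluded" if excluded else "crawled ", url)

If a rule that matches here still has no effect in the spider, the install is the more likely culprit than the rule.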
-
Does the exclude function work on Mac? I have tried every possible way to exclude folders and have not been successful while running an analysis.
-
That's exactly the problem: the redirects are dispersed randomly throughout the site. Although the job's still running, it now appears as though there's almost a 1-to-1 correlation between pages and redirects on the site.
I also heard from Dan Sharp via Twitter. He said "You can't, as we'd have to crawl a URL to see the status code. You can right click and remove after though!"
Thanks again Michael. Your thoroughness and follow through is appreciated.
-
Took another look, and also checked the documentation and online, and I don't see any way to exclude URLs from the crawl based on response codes. As I see it, you would only want to exclude on name or directory, since response codes are likely to be scattered randomly throughout a site and excluding on them would impede a thorough crawl.
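For illustration, exclude-by-directory rules of that kind go one regular expression per line under Configuration > Exclude and are matched against the full URL (as I read the docs); the paths below are hypothetical, just to show the shape:

http://www.example.com/old-catalog/.*
.*/print/.*
.*\?page=.*

The first drops everything under one directory, the second drops any /print/ folder wherever it sits, and the third drops any URL carrying a page parameter; none of this helps with redirects, though, since the status code isn't known until the URL has already been crawled.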
-
Thank you Michael.
You're right. I was on a 64-bit machine running a 32-bit version of Java. I updated it and the scan has been running for more than 24 hours now without hanging. So thank you.
If anyone else knows of a way to exclude files by status code, I'd still like to learn about it. So far the scan is showing me 20,000 redirected files, which I'd just as soon not inventory.
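In the meantime, one workaround is to filter the redirects out of the exported crawl data after the fact rather than during the crawl. A rough Python sketch; the file name and the "Status Code" column header are assumptions about the export format, so adjust them to match the actual file:

import csv

# Read a Screaming Frog export and write a copy with the 3xx rows removed.
with open("internal_all.csv", newline="", encoding="utf-8") as src, \
     open("internal_no_redirects.csv", "w", newline="", encoding="utf-8") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        status = (row.get("Status Code") or "").strip()
        if not status.startswith("3"):  # keep 200s, 404s, etc.; drop 301/302/307
            writer.writerow(row)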
-
I don't think you can filter out on response codes.
However, first I would ensure you are running the right version of Java if you are on a 64-bit machine. The 32-bit version functions, but you cannot increase the memory allocation, which is why you could be running into problems. Take a look at http://www.screamingfrog.co.uk/seo-spider/user-guide/general/ under Memory.
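A quick way to confirm which Java is active is to run the version check from a terminal:

java -version

On a 64-bit install the VM line of the output mentions "64-Bit"; if it doesn't, the spider is stuck with the 32-bit limit no matter how much RAM the machine has. Once the right Java is in place, the Memory section of the guide above covers raising the allocation; on Windows builds that has typically meant editing ScreamingFrogSEOSpider.l4j.ini in the install folder and increasing the -Xmx value (e.g. -Xmx4g), but check the guide for the exact file on your version.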