Exclude status codes in Screaming Frog
-
I have a very large ecommerce site I'm trying to spider using Screaming Frog. The problem is the crawl keeps hanging, even though I have turned off the high memory safeguard under Configuration.
The site has approximately 190,000 pages according to the results of a Google site: command.
- The site architecture is almost completely flat. Limiting the crawl by depth is a possibility, but it would take quite a bit of manual labor, as there are literally hundreds of directories one level below the root.
- There are many, many duplicate pages. I've been able to exclude some of them from being crawled using the exclude configuration parameters (example patterns below).
- There are thousands of redirects. I haven't been able to exclude those from the spider because they don't have a distinguishing character string in their URLs.
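For reference, the exclude configuration takes one full regular expression per line, matched against the complete URL. The patterns I've been using look something like the following (the paths are made up for illustration, and the comment lines are just annotations, not part of the actual config):

```
# Skip an entire duplicate directory
http://www.kodylighting.com/some-duplicate-folder/.*
# Skip any URL carrying session or sort parameters
.*\?sessionid=.*
.*\?sort=.*
```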
Does anyone know how to exclude files using status codes? I know that would help.
If it helps, the site is kodylighting.com.
Thanks in advance for any guidance you can provide.
-
Thanks for your help. It was literally just the fact that exclude rules have to be set before the crawl begins and can't be changed mid-crawl. Hopefully this changes, because sometimes during a crawl you find things you want to exclude that you didn't know existed beforehand.
-
Are you sure it's just on Mac? Have you tried on PC? Do you have any other rules in the include configuration, or perhaps a conflicting rule in the exclude list? Try running a single exclude rule, and also test on another small site.
Also, from support, if it's failing on all fronts:
- On the Mac version, please make sure you have the most up-to-date version of the OS, which will update Java.
- Please uninstall, then reinstall the spider, ensuring you are using the latest version, and try again.
To be sure - http://www.youtube.com/watch?v=eOQ1DC0CBNs
-
Does the exclude function work on Mac? I have tried every possible way to exclude folders and have not been successful while running an analysis.
-
That's exactly the problem: the redirects are dispersed randomly throughout the site. Although the job's still running, it now appears as though there's almost a 1-to-1 correlation between pages and redirects on the site.
I also heard from Dan Sharp via Twitter. He said, "You can't, as we'd have to crawl a URL to see the status code. You can right click and remove after though!"
Thanks again, Michael. Your thoroughness and follow-through are appreciated.
-
Took another look, and also checked the documentation online; I don't see any way to exclude URLs from a crawl based on response codes. As I see it, you would only want to exclude by name or directory anyway, since response codes are likely to be scattered randomly throughout a site, and excluding on them would impede a thorough crawl.
-
Thank you, Michael.
You're right. I was on a 64-bit machine running a 32-bit version of Java. I updated it, and the scan has now been running for more than 24 hours without hanging. So thank you.
If anyone else knows of a way to exclude files using status codes, I'd still like to learn about it. So far the scan is showing me 20,000 redirected files, which I'd just as soon not inventory.
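In the meantime, my plan is to strip the redirects out of the export once the crawl completes rather than exclude them mid-crawl. A minimal sketch, assuming the crawl is exported as internal_all.csv with a "Status Code" column (the filename and column name are assumptions; check your own export):

```python
import pandas as pd

# Load the Screaming Frog internal export.
# Older versions put a report title on the first line; add skiprows=1 if so.
crawl = pd.read_csv("internal_all.csv")

# Drop every 3xx row so the inventory only contains resolving pages.
pages = crawl[~crawl["Status Code"].between(300, 399)]

pages.to_csv("internal_no_redirects.csv", index=False)
print(f"Kept {len(pages)} of {len(crawl)} rows")
```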
-
I don't think you can filter out on response codes.
However, first I would ensure you are running the right version of Java if you are on a 64-bit machine. The 32-bit version functions, but you cannot increase the memory allocation, which is why you could be running into problems. Take a look at http://www.screamingfrog.co.uk/seo-spider/user-guide/general/ under Memory.
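For what it's worth, increasing the allocation just means raising Java's -Xmx heap limit. On Windows that is done in the ScreamingFrogSEOSpider.l4j.ini file beside the executable (the Mac equivalent is covered on that same user guide page); a typical entry, assuming a 64-bit JVM and enough free RAM, would look like:

```
-Xmx4096M
```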