Exclude status codes in Screaming Frog
-
I have a very large ecommerce site I'm trying to spider using Screaming Frog. The problem is that the crawl keeps hanging, even though I have turned off the high memory safeguard under Configuration.
The site has approximately 190,000 pages according to the results of a Google site: command.
- The site architecture is almost completely flat. Limiting the crawl by depth is a possibility, but it would take quite a bit of manual labor, as there are literally hundreds of directories one level below the root.
- There are many, many duplicate pages. I've been able to exclude some of them from the crawl using the exclude configuration.
- There are thousands of redirects. I haven't been able to exclude those from the spider because they don't have a distinguishing character string in their URLs.
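For reference, the exclude configuration mentioned above takes regular expressions, one per line, matched against the full URL. The patterns below are hypothetical examples only — the actual parameter names and paths would have to come from the duplicate URLs on the site itself:

```
.*\?sort=.*
.*&sessionid=.*
http://www.example.com/print/.*
```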
Does anyone know how to exclude URLs by status code? I know that would help.
If it helps, the site is kodylighting.com.
Thanks in advance for any guidance you can provide.
-
Thanks for your help. It really was just the fact that it had to be done before the crawl began and could not be changed during the crawl. Hopefully this changes in a future version, because sometimes during a crawl you find things you want to exclude that you didn't know existed beforehand.
-
Are you sure it's just on Mac? Have you tried on a PC? Do you have any other rules in Include, or perhaps a conflicting rule in Exclude? Try running a single exclude rule, and test on another small site as well.
Also, from support, if it's failing on all fronts:
- On the Mac version, make sure you have the most up-to-date version of the OS, which will update Java.
- Uninstall, then reinstall the spider, making sure you are using the latest version, and try again.
To be sure - http://www.youtube.com/watch?v=eOQ1DC0CBNs
-
Does the exclude function work on Mac? I have tried every possible way to exclude folders and have not been successful while running an analysis.
-
That's exactly the problem: the redirects are dispersed randomly throughout the site. Although the job is still running, it now appears there's almost a 1-to-1 correlation between pages and redirects on the site.
I also heard from Dan Sharp via Twitter. He said: "You can't, as we'd have to crawl a URL to see the status code. You can right click and remove after though!"
Thanks again Michael. Your thoroughness and follow through is appreciated.
-
I took another look, and also checked the documentation online, and I don't see any way to exclude URLs from the crawl based on response codes. As I see it, you would only want to exclude by name or directory anyway, as response codes are likely to be scattered randomly throughout a site, and excluding on them would impede a thorough crawl.
-
Thank you Michael.
You're right: I was on a 64-bit machine running a 32-bit version of Java. I updated it, and the scan has now been running for more than 24 hours without hanging. So thank you.
If anyone else knows of a way to exclude URLs by status code, I'd still like to learn about it. So far the scan is showing me 20,000 redirected files which I'd just as soon not inventory.
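Since the status code is only known after a URL has been fetched, one workaround is to filter the exported crawl afterwards rather than during it. Here is a minimal Python sketch, assuming the export is a CSV with a "Status Code" column (as in Screaming Frog's Internal export; adjust the column name to match your version):

```python
import csv

def drop_redirects(in_path, out_path, status_col="Status Code"):
    """Copy a crawl-export CSV, keeping only rows whose status code is not 3xx.

    Returns the number of rows kept.
    """
    with open(in_path, newline="", encoding="utf-8") as src, \
         open(out_path, "w", newline="", encoding="utf-8") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
        writer.writeheader()
        kept = 0
        for row in reader:
            # Redirect status codes (301, 302, 307, ...) all start with "3".
            if not row.get(status_col, "").startswith("3"):
                writer.writerow(row)
                kept += 1
        return kept
```

This keeps the crawl itself complete while producing a redirect-free inventory file.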
-
I don't think you can filter out on response codes.
However, first I would ensure you are running the right version of Java if you are on a 64-bit machine. The 32-bit version functions, but you cannot increase the memory allocation, which is why you could be running into problems. Take a look at http://www.screamingfrog.co.uk/seo-spider/user-guide/general/ under Memory.
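For reference, the memory allocation that user guide describes is a Java heap setting in a launcher config file shipped alongside the spider (on Windows it has historically been ScreamingFrogSEOSpider.l4j.ini next to the executable; the filename and location vary by OS and version, so check the guide for yours). With 64-bit Java installed, raising the ceiling is a one-line change, for example:

```
-Xmx4g
```

The 32-bit JVM ignores higher values, which is why switching to 64-bit Java is the prerequisite.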