Moz Q&A is closed.
After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content (many posts will still be viewable), we have locked both new posts and new replies. More details here.
Can anyone recommend a tool that will identify unused and duplicate CSS across an entire site?
-
Hi all,
So far I have found this one: http://unused-css.com/ It looks like it identifies unused CSS, but perhaps not duplicates? It also has a 5,000-page limit, and our site is 8,000+ pages, so we really need something that can handle a site larger than that.
I do have Screaming Frog. Is there a way to use Screaming Frog to locate unused and duplicate CSS?
Any recommendations and/or tips would be great. I am also aware of the Firefox extensions, but to my knowledge they will only do one page at a time?
Thanks!
-
I read your post at Mstoic, Hemant, and noticed your comment about Firefox 10. Since I couldn't get Dust-Me Spider to work in my current version of Firefox, I tried downloading and installing the older version 10 as you suggested. When I did, I received a message that Dust-Me Spider was not compatible with that version of Firefox, and the extension was disabled.
We are considering purchasing the paid version of Unused CSS (http://unused-css.com/pricing). Do you have any experience with the upgraded version? Does it deliver what it promises?
Thanks!
-
Hi Hemant,
I tried using Dust-Me in Firefox, but for some reason it won't work on this sitemap: http://www.ccisolutions.com/rssfeeds/CCISolutions.xml
Could it be that this sitemap is too large? I even tried setting up a local folder to store the data, but every time I try the spider I get the message "The sitemap has no links."
I am using Firefox 27.0.1
-
Hi Dana, did either of these responses help? What did you end up settling on? We'd love an update! Thanks.
Christy
-
I have an article on that here. An extension for Firefox called Dust-Me Selectors can help you identify unused CSS across multiple pages. It tracks every page of a site you visit and records the classes and IDs that are never used. You can also give it a sitemap, and it will work out which CSS is never used across the whole site.
-
This sounds like it might just do the trick. You'll need Ruby installed for it to work. If you have a Mac, it's already on there. If you're on Windows, you'll need to install it separately; it's pretty easy, as I got Ruby running on my Windows gaming rig. If you're running a Linux flavor, install it through your package manager.
Just take your URLs from the site crawl and put them in a txt file. You can compare that list against your CSS file. I've never tried it on a large site, so let me know how it goes for you.
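If it helps, the same idea is simple enough to rough out yourself. Below is a minimal Ruby sketch (standard library only) that reads a urls.txt crawl export and a local site.css copy, flags class and ID selectors that never show up in any page's markup, and adds a rough duplicate count. The filenames are placeholders and the matching is deliberately crude, so treat the output as candidates for manual review rather than a verdict.

# unused_css_check.rb -- a minimal sketch, not a production tool.
# Assumptions: 'urls.txt' holds one URL per line (e.g. a Screaming Frog
# export) and 'site.css' is a local copy of the stylesheet. Both names
# are placeholders. The check only looks at class="..." and id="..."
# attributes in the raw HTML, so selectors added by JavaScript will show
# up as false "unused" hits, and hex colours like #fff will be picked up
# as false selectors.
require 'net/http'
require 'uri'
require 'set'

css  = File.read('site.css')
urls = File.readlines('urls.txt', chomp: true).reject(&:empty?)

# Pull simple .class and #id selectors out of the stylesheet.
selectors = css.scan(/[.#][A-Za-z_][\w-]*/).uniq

seen = Set.new
urls.each do |url|
  html = Net::HTTP.get(URI(url))
  selectors.each do |sel|
    next if seen.include?(sel)
    name = Regexp.escape(sel[1..-1])
    attr = sel.start_with?('.') ? 'class' : 'id'
    # Does the bare name appear inside a class/id attribute on this page?
    seen << sel if html =~ /#{attr}\s*=\s*["'][^"']*\b#{name}\b/i
  end
end

puts 'Possibly unused selectors:'
(selectors - seen.to_a).sort.each { |sel| puts sel }

# Rough duplicate check: selectors that appear more than once in the
# stylesheet. This over-counts (reusing a selector in different rules is
# often fine), so it's only a starting point for a manual review.
puts 'Selectors declared more than once:'
css.scan(/[.#][A-Za-z_][\w-]*/).tally
   .select { |_, n| n > 1 }
   .each { |sel, n| puts "#{sel} appears #{n} times" }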