Does anyone know what the correct loadCSS code is for this?
I'm trying to fix these three render-blocking CSS resources:
- https://fonts.googleapis.com/css?family=Audiowide
- https://fonts.googleapis.com/css?family=Raleway:300,400,700
- https://www.legendslimotn.com/code/css/style.css?ver=3.1
I tried this:
<link rel="preload" href="path/to/mystylesheet.css" as="style" onload="this.rel='stylesheet'">
<noscript><link rel="stylesheet" href="path/to/mystylesheet.css"></noscript>
<script>
/*! loadCSS. [c]2017 Filament Group, Inc. MIT License */
(function(){ ... }());
/*! loadCSS rel=preload polyfill. [c]2017 Filament Group, Inc. MIT License */
(function(){ ... }());
</script>
Adding this to the code improved the load-speed results, but it caused other page problems in browsers other than Chrome.
What's wrong with that line? Seems perfect to me
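For what it's worth, one likely culprit in the pasted markup is the unclosed noscript tag (`noscript>` where `</noscript>` belongs); another is that browsers without native `rel="preload"` support need the loadCSS polyfill to actually be present, which would explain problems outside Chrome. A corrected sketch using the three URLs from the question (the polyfill function bodies are elided here, exactly as in the original post):

```html
<!-- Async-load the three render-blocking stylesheets via rel=preload. -->
<link rel="preload" href="https://fonts.googleapis.com/css?family=Audiowide" as="style" onload="this.onload=null;this.rel='stylesheet'">
<link rel="preload" href="https://fonts.googleapis.com/css?family=Raleway:300,400,700" as="style" onload="this.onload=null;this.rel='stylesheet'">
<link rel="preload" href="https://www.legendslimotn.com/code/css/style.css?ver=3.1" as="style" onload="this.onload=null;this.rel='stylesheet'">
<!-- Fallback when JavaScript is disabled: load the styles normally. -->
<noscript>
  <link rel="stylesheet" href="https://fonts.googleapis.com/css?family=Audiowide">
  <link rel="stylesheet" href="https://fonts.googleapis.com/css?family=Raleway:300,400,700">
  <link rel="stylesheet" href="https://www.legendslimotn.com/code/css/style.css?ver=3.1">
</noscript>
<script>
/*! loadCSS. [c]2017 Filament Group, Inc. MIT License */
(function(){ ... }());
/*! loadCSS rel=preload polyfill. [c]2017 Filament Group, Inc. MIT License */
(function(){ ... }());
</script>
```

Note the `this.onload=null` guard, which the loadCSS docs recommend so some browsers don't re-run the handler when `rel` changes.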
Related Questions
Has anyone had issues with Bazaarvoice and schema.org?
About a year ago we started using Bazaarvoice to get reviews for our products, and it has been great as far as accumulating content, but Google is not taking the schema.org data and displaying it on the SERP. Someone has told me it is because we are offering multiple products, or that our schema.org tags are incorrect but when I compare our code to other travel sites it seems like everyone is doing something different. This is especially annoying since the Google schema markup check says everything is fine. Does anyone have any advice or similar experiences? Thanks.
Technical SEO | tripcentral
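In case it helps anyone landing here: review rich snippets generally require the rating markup to be nested inside a single product item on the page, and marking up multiple products on one page is a known reason snippets don't show even when the testing tool reports no errors. A minimal illustrative sketch of that nesting (all names and values below are hypothetical, not Bazaarvoice's actual output):

```json
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Tour Package",
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.3",
    "reviewCount": "117"
  }
}
```

The key point is one Product with its AggregateRating nested inside it per page, rather than rating markup floating free or repeated for several products.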
Does anyone have experience using Attracta with a Miva Merchant website?
I am looking for more information on Attracta, which is now a certified app for Plesk and Parallels Automation. You can apparently install Attracta for Parallels from the Apps Catalog.
Technical SEO | djlittman
Has anyone had their Google manual spam penalty lifted without notice?
In June of this year, our company submitted a reconsideration request, which Google rejected and confirmed that they had a manual spam penalty placed on us. After cleaning up our extensive link portfolio, we submitted our 2nd reconsideration at the end of this month (July) and received this response: Dear site owner or webmaster of domain, We received a request from a site owner to reconsider domain for compliance with Google's Webmaster Guidelines. We reviewed your site and found no manual actions by the webspam team that might affect your site's ranking in Google. There's no need to file a reconsideration request for your site, because any ranking issues you may be experiencing are not related to a manual action taken by the webspam team. Of course, there may be other issues with your site that affect your site's ranking. Google's computers determine the order of our search results using a series of formulas known as algorithms. We make hundreds of changes to our search algorithms each year, and we employ more than 200 different signals when ranking pages. As our algorithms change and as the web (including your site) changes, some fluctuation in ranking can happen as we make updates to present the best results to our users. If you've experienced a change in ranking which you suspect may be more than a simple algorithm change, there are other things you may want to investigate as possible causes, such as a major change to your site's content, content management system, or server architecture. For example, a site may not rank well if your server stops serving pages to Googlebot, or if you've changed the URLs for a large portion of your site's pages. This article has a list of other potential reasons your site may not be doing well in search. If you're still unable to resolve your issue, please see our Webmaster Help Forum for support. Sincerely, Google Search Quality Team Has anyone else or their clients experienced this recently? 
Can this be attributed to the "softer" Panda update? Any other additional information is greatly appreciated.
Technical SEO | eugeneku
Anyone See This Before? Google Following Links that are Not Hyperlinks
Today I was going through my Google Webmaster Tools URL Errors (404s) info. I came across two links in my URL Errors report that are NOT actually hyperlinks on the source page. Both of these links are from two different forum-type websites. In both cases, the post references a URL on my website (incorrectly, hence the 404 error) in the text of the post but did NOT actually link to my site. I looked at the source code... no href. Both forum posts simply had a formatting tag wrapped around the incorrect URL text referencing my site. I have never seen this before or heard that Google will follow a URL that is not actually a hyperlink. Anyone else?
Technical SEO | cajohnson
Anyone know how to fix duplicate content and titles with news section?
We use Django for our site and it's working really well, but we're having an issue with duplicate titles and content via the news section. The news is basically stories sourced from other sites, and we link to them via our news section. I'm not sure how to fix the duplicate title issue in this case. I noticed people recommend archiving or using a canonical, but because of the way the news section is set up I don't think that would work. Does anyone have a way around this?
Technical SEO | KateGMaker
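For anyone with a similar setup: since the stories originate elsewhere, a cross-domain rel=canonical pointing each syndicated story page at its original source is one commonly suggested option, with noindex as a fallback. A hypothetical Django template sketch, assuming each story record keeps the original URL in a field called `source_url` (the field name is made up for illustration):

```html
{# Hypothetical sketch: declare the original article as canonical for syndicated stories. #}
{% if story.source_url %}
  <link rel="canonical" href="{{ story.source_url }}">
{% else %}
  {# No known source: keep the page out of the index but let crawlers follow its links. #}
  <meta name="robots" content="noindex, follow">
{% endif %}
```

Either way the duplicate-title warnings should clear, because the duplicated pages are no longer competing for indexation.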
Omniture tracking code URLs creating duplicate content
My ecommerce company uses Omniture tracking codes for a variety of different tracking parameters, from promotional emails to third party comparison shopping engines. All of these tracking codes create URLs that look like www.domain.com/?s_cid=(tracking parameter), which are identical to the original page and these dynamic tracking pages are being indexed. The cached version is still the original page. For now, the duplicate versions do not appear to be affecting rankings, but as we ramp up with holiday sales, promotions, adding more CSEs, etc, there will be more and more tracking URLs that could potentially hurt us. What is the best solution for this problem? If we use robots.txt to block the ?s_cid versions, it may affect our listings on CSEs, as the bots will try to crawl the link to find product info/pricing but will be denied. Is this correct? Or, do CSEs generally use other methods for gathering and verifying product information? So far the most comprehensive solution I can think of would be to add a rel=canonical tag to every unique static URL on our site, which should solve the duplicate content issues, but we have thousands of pages and this would take an eternity (unless someone knows a good way to do this automagically, I’m not a programmer so maybe there’s a way that I don’t know). Any help/advice/suggestions will be appreciated. If you have any solutions, please explain why your solution would work to help me understand on a deeper level in case something like this comes up again in the future. Thanks!
Technical SEO | BrianCC
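One way to do the canonical part "automagically" is to render the tag from a shared template helper that strips known tracking parameters from the current URL, rather than editing thousands of pages by hand. A rough Python sketch of such a helper (assuming a Python-based templating layer; `s_cid` is the only parameter name taken from the question, the rest is illustrative):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Tracking parameters to strip; extend with other Omniture/CSE keys as needed.
TRACKING_PARAMS = {"s_cid"}

def canonical_url(url):
    """Return the URL with known tracking parameters removed,
    suitable for rendering into a rel=canonical tag on every page."""
    parts = urlsplit(url)
    query = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
             if k not in TRACKING_PARAMS]
    # Drop the fragment too; canonical URLs should not carry one.
    return urlunsplit((parts.scheme, parts.netloc, parts.path,
                       urlencode(query), ""))

print(canonical_url("https://www.example.com/product?s_cid=email123&color=red"))
# -> https://www.example.com/product?color=red
```

Because every page then declares its own clean URL as canonical, the `?s_cid=` variants consolidate automatically without touching robots.txt, so CSE bots can still crawl the tracking URLs to verify product data.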
Is a 302 redirect the correct redirect from a root URL to a detail page?
Hi guys

The widely followed SEO best practice is that 301 redirects should be used instead of 302 redirects when a permanent redirect is required. Matt Cutts said last year that 302 redirects should "only" be used for temporary redirects. http://www.seomoz.org/blog/whiteboard-interview-googles-matt-cutts-on-redirects-trust-more

For a site that I am looking at, the SEOmoz Crawl Diagnostics tool lists as an issue that the URL / redirects to www.abc.com/Pages/default.aspx with a 302 redirect. On further searching I found on a Google support forum (http://www.google.com/support/forum/p/Webmasters/thread?tid=276539078ba67f48&hl=en) that a Google employee had said: "For what it's worth, a 302 redirect is the correct redirect from a root URL to a detail page (such as from "/" to "/sites/bursa/"). This is one of the few situations where a 302 redirect is preferred over a 301 redirect."

Can anyone confirm that "a 302 redirect is the correct redirect from a root URL to a detail page"? And if so, why? I haven't found an explanation. If it is the correct best practice, should redirects of this nature be removed from displaying as issues in the SEOmoz Crawl Diagnostics tool?

Thanks for your help
Technical SEO | CPU
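To illustrate the distinction the Google employee is drawing: "/" has not permanently moved anywhere; it is an alias for whichever page currently serves as the landing page, so a temporary 302 is arguably the semantically correct answer. A toy WSGI sketch of that pattern (purely illustrative; the actual site in the question runs ASP.NET, and only the /Pages/default.aspx path is taken from the post):

```python
def app(environ, start_response):
    """Minimal WSGI app: the root URL answers with a 302 to the landing page."""
    if environ.get("PATH_INFO", "/") == "/":
        # 302, not 301: "/" is an alias for the current landing page,
        # not a resource that has permanently moved.
        start_response("302 Found", [("Location", "/Pages/default.aspx")])
        return [b""]
    start_response("200 OK", [("Content-Type", "text/html")])
    return [b"page content"]
```

If the landing page is later changed, only the 302 target changes; with a 301, crawlers and browsers would have cached the old destination as permanent.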
Up to my you-know-what in duplicate content
Working on a forum site that has multiple versions of the URL indexed. The WWW version is a top 3 and 5 contender in the Google results for the domain keyword. All versions of the forum have the same PR, but the non-WWW version has 3,400 pages indexed in Google, and the WWW version has 2,100. Even worse, there's a completely separate domain (PR4) that has the forum as a subdomain, with 2,700 pages indexed in Google. The dupe content gets completely overwhelming to think about when it comes to the PR4 domain, so I'll just ask what you think I should do with the forum. Get rid of the subdomain version and sometimes link between two obviously related sites, or get rid of the highly targeted keyword domain? Also, what's better: having the targeted keyword on the front of Google with only 2,100 indexed pages, or having lower rankings with 3,400 indexed pages? Thanks.
Technical SEO | Hondaspeder
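For the www/non-www half of this, the usual fix is a site-wide 301 to one canonical hostname, so the two sets of indexed pages consolidate instead of competing. A hypothetical .htaccess sketch, assuming the forum runs on Apache and using a made-up domain:

```apacheconf
# Hypothetical sketch: 301 every request on a non-www host to the www host,
# so only one version of each forum URL remains indexable.
RewriteEngine On
RewriteCond %{HTTP_HOST} !^www\.example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
```

The subdomain copy on the PR4 domain could be 301ed the same way, since its pages are duplicates of the forum rather than independent content.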