CSS Issue or not?
-
Hi Mozzers,
I am doing an audit for one of my clients and would like to know whether the website I am dealing with actually has any issues when CSS is disabled. So I installed the Web Developer Google Chrome extension, which is great for disabling cookies, CSS, and so on.
When executing "Disable CSS", I can see most of the content of the page but what is weird is that in certain sections images appear in the middle of a sentence. Another image appears to be in the background in one of the internal link section(attached pic)
Since I am not an expert in CSS, I am wondering whether this represents a CSS issue and therefore a potential SEO issue. If so, why would it be an SEO issue?
Can you tell me what sort of CSS issues I should expect when disabling it? What should I look at? Whether the content and nav bar are present, or something else?
Thank you
-
Thank you, John, for your help!
-
The point I'm trying to make is that a CSS problem likely won't result in any huge changes in your SEO. There's a CSS problem if you can visually see something positioned or sized incorrectly on your pages with CSS enabled, not disabled.
Search bots will do some CSS/JavaScript rendering, but it's geared more towards seeing how large things are on the page (and trying to find your headers), making sure you're not hiding text (setting text colors the same as background colors), and things of that nature.
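To make the hidden-text point concrete, here is a hypothetical snippet of the sort of thing bots check for (not implied to be on the site being audited):

<!-- Hypothetical example: white text on a white background is invisible to
     visitors but still sits in the HTML, which is the classic hidden-text
     pattern search engines look for. -->
<div style="background-color: #ffffff;">
  <p style="color: #ffffff;">keyword keyword keyword, visible only to bots</p>
</div>

If nothing renders like that with CSS enabled, a messy layout with CSS disabled isn't a red flag on its own.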
-
Hey John,
So how can I judge whether there is a CSS problem? Should I go through all the CSS code? Can you give me an example? A screenshot would help.
Thanks!
-
Not to identify anything in particular; it's just that that's what the bots are reading and indexing. The bots read the source of the page and will attempt to do some CSS and JavaScript rendering, but they're not reading the page the way it's seen in a browser.
-
Hey John,
I realized I never answered you, sorry about that! Thanks for the help!
One quick question though: "If you want to see how a page looks to search bots, view the source of the page, don't disable the CSS."
View the source of the page and identify what, exactly?
Thanks John!
-
CSS positions things on the page, so if you remove it, it's not surprising that lots of elements overlap. The page isn't going to look good. This is nothing to worry about.
If you want to see how a page looks to search bots, view the source of the page, don't disable the CSS.
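As a hypothetical illustration (made-up class and file names) of why an image can end up in the middle of a sentence, or a section can gain or lose its decorative image, once the stylesheet is gone:

<style>
  /* With CSS enabled, the inline image is floated out of the text flow
     and the link section gets its picture from a background rule. */
  .promo img      { float: right; width: 300px; }
  .internal-links { background-image: url("banner.jpg"); }
</style>

<div class="promo">
  <p>Some body copy <img src="photo.jpg" alt="Product photo"> continues here.</p>
</div>
<div class="internal-links">
  <a href="/category-a">Category A</a>
  <a href="/category-b">Category B</a>
</div>
<!-- Disable the CSS and both rules vanish: the <img> renders inline, right in
     the middle of the sentence, and the background image is no longer drawn. -->

Viewing the source instead shows roughly what the bots work from: the headings, the body text, the alt text, and the href targets, rather than where those elements happen to be painted on screen.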
Related Questions
-
Issue with GA tracking and Native AMP
Hi everyone, We recently pushed a new version of our site (winefolly.com), which is completely AMP native on WordPress (using the official AMP for WordPress plugin). As part of the update, we also switched over to https. In hindsight we probably should have pushed the AMP version and HTTPS changes in separate updates.
As a result of the update, the traffic in GA has dropped significantly despite the tracking code being added properly. I'm also having a hard time getting the previous views in GA working properly. The three views are: Sitewide (shop.winefolly.com and winefolly.com), Content only (winefolly.com), and Shop only (shop.winefolly.com).
The sitewide view seems to be working, though it's hard to know for sure, as the traffic seems pretty low (like 10 users at any given time) and I think it's more that it's just picking up the shop traffic. The content only view shows maybe one or two users and often none at all. I tried a bunch of different filters to track only the main site's content views, but in one instance the filter would work, then half an hour later it would revert to no traffic. The filter is set to custom > exclude > request uri with the following regex pattern: ^shop.winefolly.com$|^checkout.shopify.com$|/products/.|/account/.|/checkout/.|/collections/.|./orders/.|/cart|/account|/pages/.|/poll/.|/?mc_cid=.|/profile?.|/?u=.|/webstore/.
Testing the filter, it strips out anything not related to the main site's content, but when I save the filter and view the updated results, the changes aren't reflected. I did read that there is a delay in the filters being applied and that only a subset of the available data is used, but I just want to be sure I'm adding the filters correctly. I also tried setting the filter to predefined, exclude host equal to shop.winefolly.com, but that didn't work either.
The shop view seems to be working, but the tracking code is added via Shopify, so it makes sense that it would continue working as before. The first thing I noticed when I checked the views is that they were still set to http, so I updated the URLs to https. I then checked the GA tracking code (which is added as a JSON object in the Analytics setting in the WordPress plugin). Unfortunately, while GA seems to be recording traffic, none of the GA validators seem to pick up the AMP tracking code (added using the amp-analytics tag), despite the JSON being confirmed as valid by the plugin.
This morning I decided to try a different approach and add the tracking code via Google's Tag Manager, as well as adding the new https domain to Google Search Console, but alas no change. I spent the whole day yesterday reading every post I could on the topic but was not able to find a solution, so I'm really hoping someone on Moz will be able to shed some light on what I'm doing wrong. Any suggestions or input would be very much appreciated.
Cheers, Chris (on behalf of WineFolly.com)
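For anyone comparing notes, the commonly documented amp-analytics wiring for Google Analytics looks roughly like the sketch below; the UA-XXXXX-Y property ID is a placeholder, and this is not taken from winefolly.com's actual setup:

<!-- In the <head>: load the amp-analytics component -->
<script async custom-element="amp-analytics"
        src="https://cdn.ampproject.org/v0/amp-analytics-0.1.js"></script>

<!-- In the <body>: a minimal Google Analytics pageview config (UA-XXXXX-Y is a placeholder) -->
<amp-analytics type="googleanalytics">
  <script type="application/json">
  {
    "vars": { "account": "UA-XXXXX-Y" },
    "triggers": {
      "trackPageview": { "on": "visible", "request": "pageview" }
    }
  }
  </script>
</amp-analytics>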
Technical SEO | winefolly
-
Email and landing page duplicate content issue?
Hi Mozers, my question is: if there is a web-based email that goes to subscribers, and when they click on a link it lands on a WordPress page with very similar content, will Google penalize us for duplicate content? If so, is the best workaround to make the email noindex, nofollow? Thanks!
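For reference, if the goal is to keep the near-duplicate landing page itself out of the index, the usual mechanism is a robots meta tag on that page (a minimal sketch):

<!-- In the <head> of the email landing page: ask engines not to index it or follow its links -->
<meta name="robots" content="noindex, nofollow">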
Technical SEO | CalamityJane77
-
Canonical issues using Screaming Frog and other tools?
In the Directives tab within Screaming Frog, can anyone tell me what the difference between "canonicalised", "canonical", and "no canonical" means? They're found in the filter box. I see the data but am not sure how to interpret them. Which one of these would I check to find canonical issues within a website? Are there any other easy ways to identify canonical issues?
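For context, these filters generally come down to the rel=canonical element in each page's head; a sketch with hypothetical URLs:

<!-- Self-referencing canonical: the page names itself as the preferred URL -->
<link rel="canonical" href="https://www.example.com/widgets/">

<!-- The same tag served on https://www.example.com/widgets/?sort=price would make that
     parameterised URL "canonicalised" to the clean one, since it points elsewhere;
     a page with no such tag at all is what typically shows up as "no canonical". -->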
Technical SEO | Flock.Media
-
Duplicate content issue. Delete index.html and replace with www.?
I have a duplicate content issue. On my site the home button goes to index.html and not to the www homepage. If I change it to the www homepage, will it impact my SERPs? I don't think anyone links to index.html.
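Alongside a server-side 301 (if one is added), a common belt-and-braces step is a canonical hint in the head of the index.html version pointing at the root; example.com below is a placeholder domain:

<!-- In the <head> of /index.html: declare the root URL as the preferred version,
     so any stray links to /index.html consolidate onto the homepage -->
<link rel="canonical" href="http://www.example.com/">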
Technical SEO | bronxpad
-
Drupal 1.5 Issue: Taxonomy
Hi there, I have a domain which is built in Drupal 1.5. We managed to redirect all nodes to the actual SEF URL. The one issue we have now is redirecting the taxonomy URLs to the SEF URL. The obvious answer is to do a manual 301 redirect in the htaccess file, but this will be a long process as there are over 500 URLs affected. Is there a better way to do this automatically within Drupal? Your thoughts and ideas are welcome.
Technical SEO | stefanok
-
We have been hit with the "Doorway Page" penalty - fixed the issue - got a message that we still do not meet guidelines.
I have read the FAQs and checked for similar issues: YES / NO
My site's URL (web address) is: www.recoveryconnection.org
Description (including timeline of any changes made): We were hit with the Doorway Pages penalty on 5/26/11. We have a team of copywriters and a fast-working dev dept., so we were able to correct what we thought the problem was: "targeting one keyword per page" and thin content (according to Google). Plan of action: to consolidate "like" keywords/content onto the pages that were getting the most traffic, and 404'd the pages with the thin content that were targeting singular keywords per page. We submitted a board-approved reconsideration request on 6/8/11 and received the 2nd message (below) on 6/16/11.
***NOTE: The site was originally designed by the OLD marketing team, who were let go, and we are the NEW team trying to clean up their mess. We are now resorting to going through Google's general guidelines page. Help would be appreciated. Below is the message we received back.
Dear site owner or webmaster of http://www.recoveryconnection.org/,
We received a request from a site owner to reconsider http://www.recoveryconnection.org/ for compliance with Google's Webmaster Guidelines. We've reviewed your site and we believe that some or all of your pages still violate our quality guidelines. In order to preserve the quality of our search engine, pages from http://www.recoveryconnection.org/ may not appear or may not rank as highly in Google's search results, or may otherwise be considered to be less trustworthy than sites which follow the quality guidelines.
If you wish to be reconsidered again, please correct or remove all pages that are outside our quality guidelines. When such changes have been made, please visit https://www.google.com/webmasters/tools/reconsideration?hl=en and resubmit your site for reconsideration. If you have additional questions about how to resolve this issue, please see our Webmaster Help Forum for support.
Sincerely,
Google Search Quality Team
Any help is welcome. Thanks
Technical SEO | LVH
-
Issue with 'Crawl Errors' in Webmaster Tools
Have an issue with a large number of 'Not Found' webpages being listed in Webmaster Tools. In the 'Detected' column, the dates are recent (May 1st - 15th). However, clicking into the 'Linked From' column, all of the link sources are old, many from 2009-10. Furthermore, I have checked a large number of the source pages to double-check that the links don't still exist, and they don't, as I expected. Firstly, I am concerned that Google thinks there is a vast number of broken links on this site when in fact there is not. Secondly, if the errors do not actually exist (and never actually have), why do they remain listed in Webmaster Tools, which claims they were found again this month?! Thirdly, what's the best and quickest way of getting rid of these errors? Google advises that using the 'URL Removal Tool' will only remove the pages from the Google index, NOT from the crawl errors. The info is that if they keep getting 404 returns, they will automatically get removed. Well, I don't know how many times they need to get that 404 in order to get rid of a URL and link that haven't existed for 18-24 months?! Thanks.
Technical SEO | RiceMedia