CSS Issue or not?
-
Hi Mozzers,
I am doing an audit for one of my clients and would like to know whether the website I am dealing with actually has any issues when CSS is disabled. So I installed the Web Developer Google Chrome extension, which is great for disabling cookies, CSS, and so on.
When I run "Disable CSS", I can see most of the content of the page, but what is weird is that in certain sections images appear in the middle of a sentence. Another image appears as a background in one of the internal link sections (attached pic).
Since I am not an expert in CSS, I am wondering whether this represents a CSS issue and therefore a potential SEO issue. If so, why would it be an SEO issue?
Can you guys tell me what sort of CSS issues I should expect when disabling it? What should I look at? Whether the content and nav bar are present, or something else?
Thank you
-
Thank you, John, for your help!
-
The point I'm trying to make is that a CSS problem likely won't result in any big change to your SEO. There's a CSS problem if you can see something positioned or sized incorrectly on your pages with CSS enabled, not disabled.
Search bots will do some CSS/JavaScript rendering, but mostly to see how large elements are on the page (and to try to find your headers), to make sure you're not hiding text (e.g. setting text the same color as the background), and things of that nature.
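To illustrate the hidden-text point, here is a minimal sketch (the class name and copy are invented for illustration, not taken from any real site). The paragraph is invisible to visitors but fully present in the source, which is exactly the pattern bots check for:

```html
<style>
  body { background: #ffffff; }
  /* Hypothetical class: text colored the same as the page background
     is invisible to visitors but still readable by search bots. */
  .hidden-keywords { color: #ffffff; }
</style>
<p class="hidden-keywords">cheap widgets best widgets buy widgets now</p>
```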
-
Hey John,
So how can I judge whether there is a CSS problem? Should I go through all the CSS code? Can you give me an example? A screenshot would help.
Thanks!
-
Not to identify anything in particular; the point is that this is what the bots are reading and indexing. The bots read the source of the page and will attempt some CSS and JavaScript rendering, but they're not reading the page the way it's seen in a browser.
-
Hey John,
I realized I never answered you, sorry about that! Thanks for the help!
One quick question though: "If you want to see how a page looks to search bots, view the source of the page, don't disable the CSS."
View the source of the page and identify what, exactly?
Thanks, John!
-
CSS positions things on the page, so if you remove it, it's not surprising that lots of elements overlap. The page isn't going to look good. This is nothing to worry about.
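As a hypothetical illustration of why that happens (the markup below is invented, not taken from the site being audited): an image that CSS floats beside the text will, with CSS disabled, render inline at whatever point the <img> tag sits in the source, even mid-sentence. A background image, by contrast, exists only in the stylesheet, so it disappears entirely:

```html
<style>
  /* With CSS on, the image floats beside the text; with CSS off,
     it renders inline wherever the <img> tag appears in the source. */
  .promo { float: right; width: 200px; margin-left: 1em; }

  /* A background image lives only in the stylesheet, so it vanishes
     completely when CSS is disabled. */
  .links-box { background-image: url("texture.jpg"); }
</style>
<div class="links-box">
  <p>Our widgets are built to last
    <img class="promo" src="widget.jpg" alt="A widget on display">
    and ship worldwide.</p>
</div>
```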
If you want to see how a page looks to search bots, view the source of the page, don't disable the CSS.
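If it helps, here is a rough sketch of the kinds of elements worth checking when you view the source, since these are what the bots index regardless of styling (all content here is invented for illustration):

```html
<head>
  <title>Example Store | Widgets</title>
  <meta name="description" content="Hand-made widgets, shipped worldwide.">
</head>
<body>
  <h1>Hand-Made Widgets</h1>                       <!-- is the main heading a real <h1>? -->
  <p>Every widget is built to order.</p>           <!-- is the body copy in the HTML itself? -->
  <img src="widget.jpg" alt="A widget on display"> <!-- do images carry alt text? -->
  <a href="/catalog">Browse the catalog</a>        <!-- are nav links plain <a> tags? -->
</body>
```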