No Java, No Content..?
-
Hello Mozers!
I have a question for you: I am working on a site, and while doing an audit I disabled JavaScript via the Web Developer plugin for Chrome. The result is that instead of seeing the page content, I see the typical "loading circle" and nothing else. I imagine this is not a good thing, but what does it imply technically from a crawler's perspective?
Thanks
-
Hello Jimmy and Andy,
Thank you for your answers! Yes, Google's cached version displays correctly, and the text snippets are indexed, so everything looks fine and is working as it should.
Thanks
-
I would suspect that, unless something has been done differently, Google can read what is there, but as Jimmy said, check the page cache to see what Google can see.
You can also do a search for a snippet of text from the page and see if Google has it indexed.
-Andy
-
Hi,
The best way to see what the crawlers are seeing is to check their cache.
If you go to the cached page on Google, there is a text-only link; through that you will be able to see exactly what content Google has cached for the page. Usually text that appears in the source code of the page is visible to the search engines, but without knowing what the JavaScript does, it is hard to say whether it would affect this.
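A rough way to approximate what a crawler receives before any JavaScript runs is to fetch the raw HTML and search it for a snippet of the page's visible text. A minimal sketch, assuming Python with the requests library and placeholder URL/snippet values:

```python
import requests

url = "https://www.example.com/some-page"          # hypothetical URL
snippet = "a sentence that appears on the page"    # hypothetical visible text

response = requests.get(url, timeout=10)
response.raise_for_status()

# The raw response body is what a crawler gets before any JavaScript runs.
if snippet.lower() in response.text.lower():
    print("Snippet found in the raw HTML - should be visible without JavaScript.")
else:
    print("Snippet NOT in the raw HTML - it is probably injected by JavaScript.")
```

If the snippet only shows up after JavaScript runs, that would match the blank "loading circle" you are seeing with scripts disabled.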
Hope this helps.
Kind Regards
Jimmy
Related Questions
-
Page Indexing without content
Hello. I have a problem with a page indexing without content. I have a website in 3 different languages; 2 of the language versions are indexing just fine, but one (the most important one) is being indexed without content. When searching using the site: operator, the page comes up, but when searching unique keywords for which I should rank 100%, nothing comes up. This page was indexing just fine, and the problem arose a couple of days ago after a Google update finished. Looking further, the problem is language-related: every page in the given language that is newly indexed has this problem, while pages that were last crawled around one week ago are fine. Has anyone run into this type of problem?
Technical SEO | AtuliSulava
-
Does having no content on a mobile page have an effect on ranking?
We are about to go live with a mobile version of our webshop; mobile users will be shown an alternative version of the desktop page. At the moment we have little to no content on the mobile pages; how will this affect our ranking? (The desktop and mobile pages have the same URL and meta; only the "body" is different.)
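Since the desktop and mobile versions share a URL, this is the setup Google documents as dynamic serving, which should send a Vary: User-Agent response header. A rough diagnostic sketch, assuming Python with requests and a placeholder URL, that compares what the two user agents receive:

```python
import requests

url = "https://www.example.com/product-page"  # hypothetical URL
desktop_ua = "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"
mobile_ua = ("Mozilla/5.0 (Linux; Android 12; Pixel 6) "
             "AppleWebKit/537.36 (KHTML, like Gecko) Mobile Safari/537.36")

desktop = requests.get(url, headers={"User-Agent": desktop_ua}, timeout=10)
mobile = requests.get(url, headers={"User-Agent": mobile_ua}, timeout=10)

# With dynamic serving, Google recommends a "Vary: User-Agent" response header.
print("Vary header:", desktop.headers.get("Vary"))
print("Desktop HTML length:", len(desktop.text))
print("Mobile HTML length:", len(mobile.text))
```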
Technical SEO | G.School
-
Mobile website content optimisation
Hi there, someone I know is going to move their site to a mobile version on a mobile subdomain (m.). I have recommended responsive design, but for now this is their only way forward to cope with Google's 21st April update. My question is about best practice for the content: as it's a different URL, will there need to be a canonical tag to stop duplication and thus avoid being penalised by the Google Panda update? Any advice much appreciated.
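For separate mobile URLs, the pattern Google has documented is a rel="alternate" annotation on the desktop page pointing to the m. URL and a rel="canonical" on the m. page pointing back to the desktop URL, which also addresses the duplication concern. A rough verification sketch, assuming Python with requests and placeholder example.com URLs:

```python
import re
import requests

desktop_url = "https://www.example.com/page"  # hypothetical desktop URL
mobile_url = "https://m.example.com/page"     # hypothetical mobile URL

desktop_html = requests.get(desktop_url, timeout=10).text
mobile_html = requests.get(mobile_url, timeout=10).text

# Very rough checks -- a real audit would parse the <head> properly.
has_alternate = bool(re.search(r'rel=["\']alternate["\'][^>]*m\.example\.com', desktop_html))
has_canonical = bool(re.search(r'rel=["\']canonical["\'][^>]*www\.example\.com', mobile_html))

print("Desktop page declares the mobile alternate:", has_alternate)
print("Mobile page canonicalises to the desktop URL:", has_canonical)
```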
Technical SEO | tdigital
-
Duplicate Content Issue
My issue with duplicate content is this: there are two versions of my website showing up, http://www.example.com/ and http://example.com/. What are the best practices for fixing this? Thanks!
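The usual fix is to pick one preferred hostname, 301-redirect the other to it, and keep canonical tags pointing at the preferred version. A quick diagnostic sketch, assuming Python with requests and reusing the example.com placeholder from the question, to see how each hostname currently responds:

```python
import requests

# example.com is the placeholder domain from the question.
for url in ("http://www.example.com/", "http://example.com/"):
    response = requests.get(url, allow_redirects=False, timeout=10)
    # A 301 with a Location header on one of the two is what you want to see.
    print(url, "->", response.status_code, response.headers.get("Location"))
```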
Technical SEO | OOMDODigital
-
Syndicated content outranks my original article
I have a small site and write original blog content for my small audience. There is a much larger, highly relevant site that is willing to accept guest blogs, and they don't require original content. It is one of the largest sites within my niche, and many potential customers of mine are there. When I create a new article, I first post it to my blog and then share it with G+, Twitter, FB, and LinkedIn. I wait a day. By this time G has seen the links that point to my article and has indexed it. Then I post a copy of the article on the much larger site. I have a rel=author tag within the article, but the larger site adds "nofollow" to that tag. I have tried putting a link rel=canonical tag in the article, but the larger site strips that tag out. So G sees a copy of my content on this larger site. I'm hoping they realize it was posted a day later than the original version on my blog, but if not, will my blog get labeled as a scraper? Second: when I Google the exact blog title, the copy of the article on the larger site shows up as the #1 search result, but (1) there is no rich snippet with my author creds (maybe because the author tag was marked nofollow?), and (2) the original version of the article from my blog is not in the results (I'm guessing it was stripped out as a duplicate). There are benefits to my article being on the larger site, since many of my potential customers are there and the article does include a link back to my site (the link is nofollow). But I'm wondering if (1) I can fix things so my original article shows up in the search results, or (2) am I hurting myself with this strategy (having G possibly label me a scraper)? I do rank for other phrases in G, so I know my site hasn't had a wholesale penalty of some kind.
Technical SEO | scanlin
-
Remotely Loaded Content
Hi Folks, I have a two-part question. I'd like to add a feature to our website where people can click on an ingredient (we manufacture skin care products) and a tool-tip style box pops up and describes information about the ingredient. Because many products share some of the same ingredients, I'm going to load this data from a source file via AJAX. My questions are: Does this type of remotely fetched content have any effect on how a search engine views and indexes the page? Can it help contribute to the page's search engine ranking? If there are multiple pages fetching the same piece of remotely fetched content, will this be seen as duplicate content? Thanks! Hal
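As a point of reference for the setup being described, here is a minimal sketch of what the shared ingredient data source might look like, assuming a Flask JSON endpoint and made-up ingredient data (the actual stack and file format are not stated):

```python
from flask import Flask, abort, jsonify

app = Flask(__name__)

# Illustrative data only -- in practice this would come from the shared source file.
INGREDIENTS = {
    "aloe-vera": {
        "name": "Aloe Vera",
        "description": "A soothing plant extract commonly used in skin care.",
    },
}

@app.route("/api/ingredients/<slug>")
def ingredient(slug):
    """Return one ingredient's details for the tooltip to display."""
    data = INGREDIENTS.get(slug)
    if data is None:
        abort(404)
    return jsonify(data)

if __name__ == "__main__":
    app.run()
```

Centralising the descriptions this way means the ingredient text is not in each product page's initial HTML; it only arrives once the tooltip's AJAX call runs, which is why the indexing question matters.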
Technical SEO | AlabuSkinCare
-
If two websites pull the same content from the same source in a CMS, does it count as duplicate content?
I have a client who wants to publish the same information about a hotel (summary, bullet list of amenities, roughly 200 words + images) to two different websites that they own. One is their main company website, where the goal is booking; the other is a special program site where that hotel is featured as an option for booking under a special promotion. Both websites are pulling the same content file from a centralized CMS, but they are different domains. My question is twofold: • To a search engine, does this count as duplicate content? • If it does, is there a way to configure the publishing of this content to avoid SEO penalties (such as a feed of content to the microsite, etc.), or should the content be written uniquely for each site? Any help you can offer would be greatly appreciated.
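If the content stays identical on both domains, one commonly used configuration is a cross-domain rel="canonical" from the promotional microsite's hotel page to the corresponding page on the main booking site. A hypothetical sketch of a template helper that could emit it (the domain, path, and helper name are all illustrative):

```python
# All names below are illustrative, not the client's real domains or templates.
MAIN_DOMAIN = "www.mainhotelsite.com"

def canonical_tag(path: str) -> str:
    """Emit a canonical pointing at the main booking site, regardless of
    which domain is rendering the shared CMS content."""
    return f'<link rel="canonical" href="https://{MAIN_DOMAIN}{path}">'

# Both the main site and the promotional microsite would render the same
# tag in the <head> of the hotel page built from the shared content file:
print(canonical_tag("/hotels/seaside-resort"))
```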
Technical SEO | HeadwatersContent
-
Duplicate Content Caused By Blog Filters
We are getting some duplicate content warnings based on our blog. Canonical URLs can work for some of the pages, but most of the duplicate content is caused by blog posts appearing on more than one URL. What is the best way to fix this?
Technical SEO | Marketpath