Can Googlebot crawl the content on this page?
-
Hi all,
I've read Google's documentation on Ajax and JavaScript (https://support.google.com/webmasters/answer/174992?hl=en) and also this post: http://moz.com/ugc/can-google-really-access-content-in-javascript-really.
I am trying to evaluate whether the content on this page, http://www.vwarcher.com/CustomerReviews, is crawlable by Googlebot, and it appears not to be. I looked through the sitemap and don't see any of the "ugly" Ajax URLs included, as Google suggests doing. Also, the page itself is definitely indexed, but the review content appears to be indexed only at its original sources (Yahoo!, Citysearch, Google+, etc.).
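For reference, this is roughly the sitemap check I did, automated as a quick Python sketch. It's purely illustrative: it assumes the sitemap lives at /sitemap.xml (it may not), and it only flags URLs that use Google's old Ajax crawling scheme (_escaped_fragment_ / #!).

```python
# Illustrative sketch only: fetch a sitemap and flag any URLs that use
# Google's old AJAX crawling scheme. The sitemap location is an assumption.
import urllib.request
import xml.etree.ElementTree as ET

SITEMAP_URL = "http://www.vwarcher.com/sitemap.xml"  # assumed location

with urllib.request.urlopen(SITEMAP_URL) as resp:
    tree = ET.fromstring(resp.read())

ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
urls = [loc.text for loc in tree.findall(".//sm:loc", ns)]

ajax_style = [u for u in urls if "_escaped_fragment_" in u or "#!" in u]
print(f"{len(urls)} URLs in sitemap, {len(ajax_style)} use the AJAX crawling scheme")
```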
I understand why they are using this dynamic content: it looks nice to an end user and requires little to no maintenance. But is it providing them any SEO benefit? It seems to me it would be far better to take these reviews and simply build them into the page's HTML.
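To illustrate what I mean by building them into HTML: something along these lines generated server-side would put the review text straight into the source that Googlebot fetches. This is just a sketch; the review data below is made up.

```python
# Sketch: render review text directly into the served HTML instead of
# injecting it client-side with JavaScript. Review data is invented.
from html import escape

reviews = [
    {"author": "J. Smith", "source": "Citysearch", "text": "Great service on my Jetta."},
    {"author": "A. Lee", "source": "Google+", "text": "Fast, friendly, and fair pricing."},
]

def render_reviews(items):
    parts = ['<section id="customer-reviews">']
    for r in items:
        parts.append(
            f"  <blockquote><p>{escape(r['text'])}</p>"
            f"<footer>{escape(r['author'])} via {escape(r['source'])}</footer></blockquote>"
        )
    parts.append("</section>")
    return "\n".join(parts)

print(render_reviews(reviews))  # this markup is crawlable without running any script
```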
Thoughts?
-
Thanks so much, Bill and Brian. This is exactly what I was thinking. I had already done the same thing Bill suggested: I took a snippet from one of the reviews, ran a verbatim search, and got nothing. That told me the page was indexed but the review content was not. The fact that the cached version renders the content from the JavaScript only shows that the script was executed, not that any of the content it produces was actually indexed.
From an SEO standpoint, I think this is valuable content that the dealer would very much want indexed. The service providing the JavaScript might be very convenient, and most end users can consume the content just fine, but the fact that it isn't searchable means, to me, that it's a lost opportunity.
Thanks again everyone.
-
Bill is right. The page is indexed and cached; however, Googlebot cannot read the reviews. If you view the cache (cache:http://www.vwarcher.com/CustomerReviews) and then click "Text-only version" in the upper right, you'll see that those reviews are not there (Google isn't crawling them).
-
Dana,
Yes, Google has indexed the page. However, if you view the source code of the page, you won't see any of the customer review text. The cached version does show the reviews, but that only means the JavaScript rendered them when the page was cached; it isn't text that exists in the page's HTML. View the source code of the cached copy and, again, the reviews are not there.
Let's take this one step further. Search Google for one of the testimonials, or part of one, in quotes: "Marc Palermo is a great customer service". Nothing comes back, so that text is NOT indexed in Google.
Google sees the page's source code but doesn't appear to be indexing the review content.
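If you want to repeat that source check without a browser, a minimal sketch looks like this. It just downloads the raw HTML, the way a crawler that doesn't execute JavaScript would, and looks for the phrase quoted above.

```python
# Minimal sketch of the view-source check: is the review text in the raw HTML?
import urllib.request

PAGE = "http://www.vwarcher.com/CustomerReviews"
SNIPPET = "Marc Palermo is a great customer service"  # phrase quoted above

req = urllib.request.Request(PAGE, headers={"User-Agent": "Mozilla/5.0"})
html = urllib.request.urlopen(req).read().decode("utf-8", errors="replace")

if SNIPPET.lower() in html.lower():
    print("Review text is present in the raw HTML source.")
else:
    print("Review text is NOT in the raw source; it is injected by JavaScript.")
```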
-
Hehe, no problem at all, Dana. Glad to have helped stand in for your coffee.
-Andy
-
Hah! Thanks Andy. Must not have had enough coffee this morning. I didn't even think of looking at the cache...so obvious, lol! Thanks so much. You are spot on.
-
Google appears to have the page cached, so I would say there are no real issues.
Just do a cache:http://www.vwarcher.com/CustomerReviews and you can see what Google currently has.
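If it's easier, the cached copy can also be opened as a direct URL. This little sketch just builds that URL from the page address; the webcache.googleusercontent.com pattern is how Google's cache is addressed at the moment, so treat it as an assumption rather than gospel.

```python
# Sketch: build a direct link to Google's cached copy of a page.
from urllib.parse import quote

page = "http://www.vwarcher.com/CustomerReviews"
cache_url = "https://webcache.googleusercontent.com/search?q=cache:" + quote(page, safe="")
print(cache_url)
```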
-Andy