Technical SEO question re: java
-
Hi,
I have an SEO question that came my way, but it's a bit too technical for me to handle. Our entire ecom site is in java, which apparently writes to a page after it has loaded and is not SEO-friendly.
I was presented with a work-around that would basically consist of us pre redering an html page to search engines and leaving the java page for the customer. It sounds like G's definition of "cloaking" to me, but I wanted to know if anyone has any other ideas or work-arounds (if there are any) on how we can make the java based site more SEO-friendly.
Any thoughts/comments you have would be much appreciated. Thanks!!
-
Oooh no thank you - I'm not a big risk-taker when it comes to SEO. he-he. Thanks again for your help!
-
With the AJAX crawlability guide implementation, Google knows they're requesting a different page than the one being shown to users, so it's not quite the same as cloaking. That being said, you could go black hat and return a completely different page, but Google has their ways of finding these things out.
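For reference, the scheme in that guide works by mapping "pretty" URLs containing a `#!` fragment onto "ugly" URLs with an `_escaped_fragment_` query parameter, which is what the crawler actually requests. A minimal sketch of that mapping (the example URL is just an illustration):

```javascript
// Sketch of the URL mapping in Google's AJAX crawling scheme:
// a "pretty" URL with a #! fragment is requested by the crawler
// as an "ugly" URL with an _escaped_fragment_ query parameter.
function toEscapedFragmentUrl(prettyUrl) {
  const bangIndex = prettyUrl.indexOf('#!');
  if (bangIndex === -1) return prettyUrl; // no hash-bang: nothing to map
  const base = prettyUrl.slice(0, bangIndex);
  const fragment = prettyUrl.slice(bangIndex + 2);
  // Special characters in the fragment are percent-encoded.
  const separator = base.includes('?') ? '&' : '?';
  return base + separator + '_escaped_fragment_=' + encodeURIComponent(fragment);
}

console.log(toEscapedFragmentUrl('http://www.example.com/ajax.html#!key=value'));
// → http://www.example.com/ajax.html?_escaped_fragment_=key%3Dvalue
```

Because the parameter is part of the request, Google can tell it asked for the snapshot version, which is why this is treated differently from classic cloaking.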
-
Hi John, One more question for you if you don't mind... Is creating an HTML snapshot (as noted in the AJAX link) different from "serving up an HTML page" as described here: http://support.google.com/webmasters/bin/answer.py?hl=en&answer=66355 ? Is that true?
-
Thought you were talking about "Java". Needless to say JavaScript can cause all sorts of issues with SEO.
-
Hi John,
You're right, I meant JavaScript. Thank you so much for the response. This definitely helps!
-
Hi, Thanks for the reply. To begin, if you turn off JavaScript, you can't see any of the content on our pages - not the text, navigation, etc. I'm trying to figure out how to make the content displayable without having to re-do the entire system (which isn't feasible).
Does that make sense? The site is improvementscatalog.com if you want to see it. We're in the process of building content for it, but we just recently switched platforms and these new issues popped up.
-
I think you mean JavaScript and not Java. What you're suggesting is what Google recommends in their AJAX crawling guide here: http://code.google.com/web/ajaxcrawling/. They want you to create a static HTML page to serve to Googlebot instead of your regular page.
Google is getting better at crawling JavaScript content that's loaded asynchronously, so you might want to dedicate your resources elsewhere. On one of my sites, Google is indexing text that's loaded asynchronously (Bing isn't yet), and Matt Cutts has said that Google is crawling some comments that are loaded asynchronously, like Facebook comments (see http://www.searchenginejournal.com/google-indexing-facebook-comments/35594/).
-
Yeah, I agree - the work-around sounds like it may be interpreted as black-hat cloaking and get you in trouble.
Can you explain further how your application works and why it's not SEO-friendly?
Cheers