How can search engines crawl my JavaScript-generated web pages?
-
For example, when I click a link to this movie from the home page, it sends me to http://www.vudu.mx/movies/#!content/293191/Madagascar-3-Los-Fugitivos-Madagascar-3-Europes-Most-Wanted-Doblada but in the source code I can't see the meta title and description, and I think the search engines won't see them either. Am I right? I'm guessing that only the source code of that "master template" appears, and that isn't useful for me. So, my question is: how can I add this data dynamically to every movie page so that search engines can crawl all of them?
Thank you.
-
Hi Jose - I'd suggest reading http://www.seomoz.org/ugc/can-google-really-access-content-in-javascript-really which lays out what Google is picking up in JavaScript files. You might also want to try some of the tactics specifically designed to make JS content and hashbang URLs more accessible:
http://coding.smashingmagazine.com/2011/09/27/searchable-dynamic-content-with-ajax-crawling/ and http://googlewebmastercentral.blogspot.com/2009/10/proposal-for-making-ajax-crawlable.html (Google's original post on the subject). Folks like Twitter and Rapgenius have been making use of these for a while now, and they can help make that dynamic data directly indexable.
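To make the hashbang approach concrete, here's a minimal sketch of the server side of Google's AJAX crawling scheme, assuming a Python/Flask stack; the MOVIES data and the snapshot template are placeholders for whatever actually backs the site. When Googlebot encounters a #! URL, it re-requests the page with ?_escaped_fragment_= in place of the hashbang, and the server can answer that request with a static HTML snapshot carrying the real title and meta description:

from flask import Flask, request, render_template_string

app = Flask(__name__)

# Placeholder data; a real site would query its movie database instead.
MOVIES = {
    "content/293191/Madagascar-3-Los-Fugitivos-Madagascar-3-Europes-Most-Wanted-Doblada": {
        "title": "Madagascar 3: Los Fugitivos - Vudu",
        "description": "Illustrative meta description for this movie page.",
    },
}

SNAPSHOT = """<html><head>
<title>{{ title }}</title>
<meta name="description" content="{{ description }}">
</head><body><h1>{{ title }}</h1></body></html>"""

@app.route("/movies/")
def movies():
    # Googlebot maps /movies/#!content/293191/... to
    # /movies/?_escaped_fragment_=content/293191/...
    fragment = request.args.get("_escaped_fragment_")
    movie = MOVIES.get(fragment) if fragment else None
    if movie:
        # Crawler request: serve a static snapshot with the real meta data.
        return render_template_string(SNAPSHOT, **movie)
    # Human visitors get the JavaScript-driven master template as usual.
    return app.send_static_file("index.html")

Regular visitors never see the snapshot; only requests carrying the _escaped_fragment_ parameter do, which is exactly the contract Google's proposal describes.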
Related Questions
-
I can't crawl the archive of this website with Screaming Frog
Hi, I'm trying to crawl this website (http://zeri.info/) with Screaming Frog, but because of some technical issue with their site (I can't find what is causing it) I'm only able to crawl the first page of each category (e.g. http://zeri.info/sport/). The crawler then moves on to each page of their archive (hundreds of thousands of pages) but won't crawl the links inside those pages. Thanks a lot!
Technical SEO | gjergjshala
-
Is it easier to rank high with a front page than a landing page?
My product is laptops and, of course, I'd like to rank high for the keyword "laptop". Do any of you know whether search engines tend to rank a front page higher than a landing page? E.g. www.brand.com vs. www.brand.com/laptop
Technical SEO | Debitoor
-
Is a canonical tag the best solution for multiple search listing pages in a site?
I have a site where dozens of listing pages are showing in my report, with a parameter indicating the page number of the listings. Is the best solution to canonicalize these pages back to a core page (all-products)? Or do I change my site configuration in Webmaster Tools to ignore the "page" parameter? What's the solution? Example URLs:
http://mydomain.com/products/all-products?page=84
http://mydomain.com/products/all-products?page=85
http://mydomain.com/products/all-products?page=86
Thanks in advance for your direction.
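For illustration, if you go the canonical route, each paginated URL would carry a tag like this in its <head> (using the example URLs above):

<!-- in the <head> of http://mydomain.com/products/all-products?page=84 -->
<link rel="canonical" href="http://mydomain.com/products/all-products">

Bear in mind that canonicalizing every page back to the core page tells engines to ignore the deeper pages entirely; rel="prev"/"next" annotations are the usual alternative when the paginated pages themselves should stay indexable.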
Technical SEO | JoshKimber
-
Auto-generated pages
Hi, I have two sites whose crawl reports from SEOmoz.org show extremely high numbers of duplicate titles and descriptions (e.g., 33,000). Both sites sit on top of CMSs, so the duplicate titles and descriptions are the result of auto-generated pages. What is the best way to address these problems? Thanks! David
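As a rough sketch of the usual fix (the field names here are hypothetical): have the CMS template assemble each title and description from data the record already stores, so that no two auto-generated pages share them:

def build_meta(record):
    # Compose a unique title and description from fields the CMS already stores.
    title = "%s | %s | ExampleSite" % (record["name"], record["category"])
    description = ("%s in %s: %s" % (record["name"], record["category"],
                                     record["summary"]))[:155]
    return title, description

The [:155] truncation just keeps descriptions near the length search engines typically display.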
Technical SEO | DWill
-
X-cart page crawling question.
I have an X-Cart site and it is showing only one page being crawled. I'm a newbie; is this common? Can it be changed? If so, how? Thanks.
Technical SEO | SteveLMCG
-
Moz Crawl Reporting Duplicate content on "template" styled pages
We have a lot of detail pages on our site that reference specific scholarships. Each page has a different title and description, and each has unique information regarding the same data points. The pages are displayed to the user in a similar structure so the data is easy to read. My problem is that a lot of these pages are being reported as duplicate content when they certainly are not. Most of them are reported as duplicates when they have the same sponsor, and they may have the same contact information listed. These two are being reported as duplicates of each other:
http://www.collegexpress.com/scholarships/adelaide-mcclelland-garden-club-scholarship/9254/
http://www.collegexpress.com/scholarships/mary-wannamaker-witt-and-lee-hampton-witt-memorial-scholarship/10785/
They share some data, but they are definitely different scholarships. Would it help to add a canonical tag on each page pointing to itself? Any other suggestions would be great. Thanks
Technical SEO | GeorgeLaRochelle
-
Duplicate Page Content and Title for product pages. Is there a way to fix it?
We were doing pretty well with our SEO until we added product listing pages. The errors are mostly Duplicate Page Content/Title, e.g. the title "Masterpet | New Zealand Products" appears on both MasterPet product page 1 and MasterPet product page 2. Because the list of products is spread across several pages, the crawler detects that these URLs have the same title. We've gone from 0 errors two weeks ago to 14k+ errors. Is this something we could fix, or bother fixing? Will our SERP ranking suffer because of this? Hoping someone could shed some light on this issue. Thanks.
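For reference, the usual quick fix is to make each paginated title unique by appending the page number in the listing template, e.g. (page numbers illustrative):

<title>Masterpet | New Zealand Products - Page 2</title>
<meta name="description" content="MasterPet products in New Zealand, page 2.">

A rel="canonical" back to page 1, or rel="prev"/"next" annotations, are the alternatives if you'd rather consolidate the paginated pages than differentiate them.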
Technical SEO | Peter.Huxley59
-
Trying to reduce pages crawled to within 10K limit via robots.txt
Our site has far too many pages for our 10K-page PRO account, and most of them are not SEO-worthy. In fact, only about 2,000 pages qualify for SEO value. Limitations of the store software permit me to use only robots.txt to sculpt the rogerbot site crawl. However, I am having trouble getting this to work. Our biggest problem is the 35K individual product pages and the related shopping cart links (at least another 35K); these aren't needed, as they duplicate the SEO-worthy content in the product category pages. The signature of a product page is that it is contained within a folder ending in -p. So I made the following addition to robots.txt:

User-agent: rogerbot
Disallow: /-p/

However, the latest crawl results show the 10K limit is still being exceeded. I went to Crawl Diagnostics and clicked Export Latest Crawl to CSV. To my dismay, I saw the report was overflowing with product page links, e.g.:
www.aspenfasteners.com/3-Star-tm-Bulbing-Type-Blind-Rivets-Anodized-p/rv006-316x039354-coan.htm
The value in the column "Search Engine blocked by robots.txt" is FALSE; does this mean blocked for all search engines? Then it's correct. If it means blocked for rogerbot, then the page shouldn't even be in the report, as the report seems to contain only 10K pages. Any thoughts or hints on attaining my goal would REALLY be appreciated; I've been trying for weeks now. Honestly, virtual beers for everyone! Carlo
Technical SEO | AspenFasteners
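As an aside, one thing worth checking here: robots.txt Disallow rules are prefix matches, so Disallow: /-p/ only blocks URLs whose path literally starts with /-p/. To block folders that end in -p anywhere in the path, you need a wildcard, which rogerbot (like Googlebot) should honor:

User-agent: rogerbot
Disallow: /*-p/

# would match, e.g.:
# /3-Star-tm-Bulbing-Type-Blind-Rivets-Anodized-p/rv006-316x039354-coan.htm

That pattern matches any URL containing "-p/" after the leading slash, so it's worth testing it against a few category URLs first to make sure nothing SEO-worthy is caught.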