What is the effect on using jQuery sliders for content on SEO?
-
I know using CSS in subversive ways gets you dinged for points. I didn't know if JS counted the same, since you are essentially hiding parts of the content and showing it at intervals as slides.
The goal would be to put key items for a client in divs and rotate those divs via a slider plugin as slides. I was just curious if that affected things in any way.
Thanks!
~Paul
-
Thanks Ryan! Great answer.
-
There is no issue with using JS sliders in the manner you described.
Most questions asking whether it is OK to use a certain technique can be checked by answering two questions:
1. After making this change, can the content still be read in the HTML (View Page Source) of the page?
2. Is this change a positive user experience?
If both of the above questions can be answered with a "yes", then it is most likely going to be acceptable to search engines.
This answer is based on the assumption you are proceeding in good faith. If you did anything manipulative, such as presenting slide 1 for 10 seconds, slides 2, 3, and 4 for 10 seconds each, then throwing in a slide 5 for half a second stuffed with links or other hidden content, of course that could be an issue.
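To make the distinction concrete, here is a minimal sketch of the kind of slider being discussed: every slide is present in the HTML source (so it passes the View Page Source test), every slide gets equal display time, and jQuery only toggles visibility. The markup, class names, CDN URL, and timings are illustrative assumptions, not taken from any particular plugin.

```html
<!-- All three slides exist in the HTML source, so search engines
     (and "View Page Source") see the full content. -->
<div id="slider">
  <div class="slide">Key client item 1</div>
  <div class="slide">Key client item 2</div>
  <div class="slide">Key client item 3</div>
</div>

<script src="https://code.jquery.com/jquery-3.7.1.min.js"></script>
<script>
$(function () {
  var $slides = $('#slider .slide');
  var current = 0;

  // Start with only the first slide visible.
  $slides.hide().eq(0).show();

  // Rotate on an equal interval; no slide is flashed for a fraction
  // of a second the way a manipulative setup would do.
  setInterval(function () {
    $slides.eq(current).fadeOut(300, function () {
      current = (current + 1) % $slides.length;
      $slides.eq(current).fadeIn(300);
    });
  }, 5000);
});
</script>
```

Since all of the content is readable in the raw HTML and the rotation treats every slide equally, a setup like this satisfies both questions above.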
-
Related Questions
-
Can cross domain canonicals help with international SEO when using ccTLDs?
Hello. My question is: can cross-domain canonicals help with international SEO when using ccTLDs and a gTLD, where the gTLD is much more authoritative to begin with? I appreciate this is a very nuanced subject, so below is a detailed explanation of my current approach, my problem, and the proposed solutions I am considering testing. Thanks for taking the time to read this far!

The current setup

Multiple ccTLDs, such as mysite.com (US), mysite.fr (FR), mysite.de (DE). Each TLD can have multiple languages; indeed, each site has content in English as well as the native language, so mysite.fr (defaults to French) and mysite.fr/en-fr is the same page but in English. Mysite.com is an older and more established domain with existing organic traffic.

Each language variant of each domain has a sitemap that is individually submitted to Google Search Console and is linked from the <head> of each page. So: mysite.fr/a-propos (about us) links to mysite.com/sitemap.xml, which contains URL blocks for every page of the ccTLD that exists in French. Each of these URL blocks contains hreflang info for that content on every ccTLD in every language (en-us, en-fr, de-de, en-de, etc.). mysite.fr/en-fr/about-us links to mysite.com/en-fr/sitemap.xml, which contains URL blocks for every page of the ccTLD that exists in English, each with the same hreflang info. There is more English content on the site as a whole, so the English version of the sitemap is always bigger at the moment.

Every page on every site has two lists of links in the footer. The first is a list of links to every other ccTLD available, so a user can easily switch between the French site and the German site if they want to. Where possible this links directly to the corresponding piece of content on the alternative ccTLD; where it isn't possible, it just links to the homepage. The second list is essentially just links to the same piece of content in the other languages available on that domain. Mysite.com has its international targeting in Google Search Console set to the US.

The problems

The biggest problem is that we didn't properly consider how we would need to start from scratch with each new ccTLD, so although each domain has a reasonable amount of content, they only receive a tiny proportion of the traffic that mysite.com achieves. Presumably this is because of a standing start with regards to domain authority. The second problem is that, despite hreflang, mysite.com still outranks the other ccTLDs for brand-name keywords. I guess this is understandable given the mismatch in DA. This is based on looking at search results via the Google AdWords Ad Preview tool and changing language, location, and domain.

Solutions

The first solution is probably the most obvious: move all the ccTLDs into a subfolder structure on mysite.com and 301 all the old ccTLD links. This isn't really an ideal solution for a number of reasons, so I'm trying to explore some alternative routes that might help the situation.

The first thing that came to mind was to use cross-domain canonicals. Essentially this would mean creating locale-specific subfolders on mysite.com and duplicating the ccTLD sites in there, but using a cross-domain canonical to tell Google to index the ccTLD URL instead of the locale-subfolder URL. For example: mysite.com/fr-fr has a canonical of mysite.fr, and mysite.com/fr-fr/a-propos has a canonical of mysite.fr/a-propos. Then I would change the links in the mysite.com footer so that they wouldn't point at the ccTLD URL but at the subfolder URL, so that Google would crawl the content on the stronger domain before indexing the ccTLD version of the URL. Is this worth exploring with a test, or am I mad for even considering it? The alternative that came to mind was to do essentially the same thing but use a 301 to redirect from mysite.com/fr-fr to mysite.fr. My question is whether either of these suggestions might be worth testing, or am I completely barking up the wrong tree and liable to do more harm than good?

Intermediate & Advanced SEO | danatello
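For reference, a sketch of a single sitemap URL block carrying hreflang annotations of the kind described above, in Google's documented xhtml:link format. The mysite domains and the /a-propos and /en-fr/about-us paths come from the question; the other paths are invented for illustration:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:xhtml="http://www.w3.org/1999/xhtml">
  <url>
    <loc>http://mysite.fr/a-propos</loc>
    <!-- Each URL block lists every locale variant, including itself. -->
    <xhtml:link rel="alternate" hreflang="fr-fr" href="http://mysite.fr/a-propos"/>
    <xhtml:link rel="alternate" hreflang="en-fr" href="http://mysite.fr/en-fr/about-us"/>
    <xhtml:link rel="alternate" hreflang="en-us" href="http://mysite.com/about-us"/>
    <xhtml:link rel="alternate" hreflang="de-de" href="http://mysite.de/ueber-uns"/>
  </url>
</urlset>
```

-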
Having 2 brands with the same content - will this work from an SEO perspective?
Hi all, I would love it if someone could help and provide some insights on this. We're a financial institution and have a set of products that we offer. We have recently joined with another brand and will now be offering all our products to their customers. What we are looking to do is have one site that masks the content for both sites, so it appears as if there are two separate brands with different content; in fact we have a main site and then a sister brand that offers the same products. Is there any way to do this so that when someone searches for a credit card from Brand A it is indexed under Brand A, and when someone searches for a credit card from Brand B it is indexed under Brand B? The one thing is we would not want to rel=canonical the pages, nor be penalised by Google's latest PR algorithm. Hope someone can help! Thanks Dave
Intermediate & Advanced SEO | CFCU
-
Volusion SEO
I have an SEO setting enabled on our Volusion e-commerce store titled "Enable full URL for Home Page Canonical Link (include /default.asp)". I am questioning whether or not this should be enabled for optimal SEO performance. Can anyone provide any advice on this?
Intermediate & Advanced SEO | PartyStore
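For context, the setting's title suggests it toggles between these two forms of the home page canonical tag. This is an inference from the setting's name, not from Volusion documentation, and the store domain is hypothetical:

```html
<!-- Setting enabled: the canonical includes the full /default.asp URL -->
<link rel="canonical" href="http://www.examplestore.com/default.asp" />

<!-- Setting disabled: the canonical is the bare root URL -->
<link rel="canonical" href="http://www.examplestore.com/" />
```

-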
Avoiding Duplicate Content with Used Car Listings Database: Robots.txt vs Noindex vs Hash URLs (Help!)
Hi guys, we have developed a plugin that allows us to display used vehicle listings from a centralized, third-party database. The functionality works similarly to autotrader.com or cargurus.com, and there are two primary components:

1. Vehicle Listings pages: where the user can use various filters to narrow the vehicle listings and find the vehicle they want.
2. Vehicle Details pages: where the user actually views the details about said vehicle. These are served up via Ajax, in a dialog box on the Vehicle Listings pages. Example functionality: http://screencast.com/t/kArKm4tBo

The Vehicle Listings pages (#1) we do want indexed and ranking. These pages have additional content besides the vehicle listings themselves, those results are randomized or sliced/diced in different and unique ways, and they're updated twice per day.

We do not want to index #2, the Vehicle Details pages, as these pages appear and disappear all the time based on dealer inventory, and don't have much value in the SERPs. Additionally, other sites such as autotrader.com, Yahoo Autos, and others draw from this same database, so we're worried about duplicate content. For instance, entering a snippet of dealer-provided content for one specific listing that Google indexed yielded 8,200+ results: Example Google query.

We did not originally think that Google would even be able to index these pages, as they are served up via Ajax. However, it seems we were wrong, as Google has already begun indexing them. Not only is duplicate content an issue, but these pages are not meant for visitors to navigate to directly! If a user were to navigate to the URL directly from the SERPs, they would see a page that isn't styled right. Now we have to determine the right solution to keep these pages out of the index: robots.txt, noindex meta tags, or hash (#) internal links.

Robots.txt advantages:
• Super easy to implement.
• Conserves crawl budget for large sites.
• Ensures the crawler doesn't get stuck. After all, if our website only has 500 pages that we really want indexed and ranked, and vehicle details pages constitute another 1,000,000,000 pages, it doesn't seem to make sense to make Googlebot crawl all of those pages.

Robots.txt disadvantages:
• Doesn't prevent pages from being indexed, as we've seen, probably because there are internal links to these pages. We could nofollow these internal links, thereby minimizing indexation, but this would mean 10-25 nofollowed internal links on each Vehicle Listings page (will Google think we're PageRank sculpting?).

Noindex advantages:
• Does prevent Vehicle Details pages from being indexed.
• Allows ALL pages to be crawled (advantage?).

Noindex disadvantages:
• Difficult to implement: the Vehicle Details pages are served using Ajax, so they have no <head> tag. The solution would have to involve the X-Robots-Tag HTTP header and Apache, sending a noindex directive based on querystring variables, similar to this Stack Overflow solution. This means the plugin functionality is no longer self-contained, and some hosts may not allow these types of Apache rewrites (as I understand it).
• Forces (or rather allows) Googlebot to crawl hundreds of thousands of noindexed pages. I say "forces" because of the crawl budget required. The crawler could get stuck/lost in so many pages, and may not like crawling a site with 1,000,000,000 pages, 99.9% of which are noindexed.
• Cannot be used in conjunction with robots.txt. After all, the crawler never reads the noindex meta tag if it is blocked by robots.txt.

Hash (#) URL advantages:
• By using <a href="#"> for links from Vehicle Listings pages to Vehicle Details pages (such as "Contact Seller" buttons), coupled with JavaScript, the crawler won't be able to follow/crawl these links.
• Best of both worlds: crawl budget isn't overtaxed by thousands of noindexed pages, and the internal links that were getting robots.txt-disallowed pages indexed are gone.
• Accomplishes the same thing as nofollowing these links, but without looking like PageRank sculpting (?).
• Does not require complex Apache stuff.

Hash (#) URL disadvantages:
• Is Google suspicious of sites with (some) internal links structured like this, since it can't crawl/follow them?

Initially, we implemented robots.txt, the "sledgehammer solution." We figured that we'd have a happier crawler this way, as it wouldn't have to crawl zillions of partially duplicate Vehicle Details pages, and we wanted it to be as if these pages didn't even exist. However, Google seems to be indexing many of these pages anyway, probably based on internal links pointing to them. We could nofollow the links pointing to these pages, but we don't want it to look like we're PageRank sculpting or something like that.

If we implement noindex on these pages (and doing so is a difficult task in itself), then we will be certain these pages aren't indexed. However, to do so we will have to remove the robots.txt disallowal in order to let the crawler read the noindex tag on these pages. Intuitively, it doesn't make sense to me to make Googlebot crawl zillions of Vehicle Details pages, all of which are noindexed; it could easily get stuck/lost, it seems like a waste of resources, and it feels in some shadowy way bad for SEO.

My developers are pushing for the third solution: using the hash URLs. This works on all hosts and keeps all functionality in the plugin self-contained (unlike noindex), and it conserves crawl budget while keeping Vehicle Details pages out of the index (unlike robots.txt). But I don't want Google to slap us 6-12 months from now because it doesn't like links like these (<a href="#">).

Any thoughts or advice you guys have would be hugely appreciated, as I've been going in circles, circles, circles on this for a couple of days now. Also, I can provide a test site URL if you'd like to see the functionality in action.

Intermediate & Advanced SEO | browndoginteractive
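As a rough sketch of the X-Robots-Tag approach the question mentions, assuming the Vehicle Details requests can be identified by a querystring variable. The variable name and pattern here are hypothetical, and exact behavior can vary with host configuration:

```apache
# Flag requests for vehicle details pages, identified here by a
# hypothetical vehicle_id querystring variable.
RewriteEngine On
RewriteCond %{QUERY_STRING} (^|&)vehicle_id= [NC]
RewriteRule ^ - [E=IS_VEHICLE_DETAIL:1]

# Send noindex as an HTTP header, since these Ajax-rendered pages
# have no <head> in which to place a meta robots tag.
Header set X-Robots-Tag "noindex" env=IS_VEHICLE_DETAIL
```

-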
Is this ok for content on our site?
We run a printing company, and as an example the grey box (at the bottom of the page) is what we have on each page: http://www.discountbannerprinting.co.uk/banners/vinyl-pvc-banners.html. We used to use this, but tried to get most of the content onto the page itself; we now want to add a bit more in-depth information to each page. The question I have is: would a 1,200-word document be OK in there and not look bad to Google?
Intermediate & Advanced SEO | BobAnderson
-
SEO Tools for Content Audit
Hi, I'm looking for a tool which can do a full content audit for a site, for instance finding pages which:
• Lack text content
• Have lengthy meta descriptions
• Are missing H1 tags or have multiple H1 tags
• Have duplicate meta descriptions
• Contain images with no alt text

Are there any tools besides the ones on SEOmoz which can enable me to do a full content audit on factors like these, or any SEO audit tools out there which you can recommend? Cheers, Mark
Intermediate & Advanced SEO | monster99
-
Wordpress.com content feeding into site's subdomain, who gets SEO credit?
I have a client who created a Wordpress.com (not Wordpress.org) blog and feeds the blog posts into a subdomain, blog.client-site.com. My understanding was that in terms of SEO, Wordpress.com would still get the credit for these posts, not the client, but I'm seeing conflicting information. All of the posts are set with permalinks on the client's site, such as blog.client-site.com/name-of-post, and when I run a Google site: search query, all of those individual posts appear in the Google search listings for the client's domain. Also, I've run a marketing.grader.com report, and the same results are seen. Looking at the source code on the page, however, I see this markup, which leads me to believe the content is being credited to, and fed in from, Wordpress.com ('client-name' altered for privacy):

<a href="http://client-name.files.wordpress.com/2012/08/could_you_survive_a_computer_disaster.jpeg"><img class="alignleft size-thumbnail wp-image-2050" title="Could_you_survive_a_computer_disaster" src="http://client-name.files.wordpress.com/2012/08/could_you_survive_a_computer_disaster.jpeg?w=150&h=143" /></a>

I'm looking to provide a recommendation to the client on whether they are OK to continue with this current setup, or whether we should port the blog posts over to a subfolder on their primary domain, www.client-site.com/blog, and use Wordpress.org functionality, for proper SEO. Any advice? Thank you!
Intermediate & Advanced SEO | grapevinemktg
-
How to SEO: two domains having exactly the same content
The target website in question is ikt.co.id; it is hosted on our server located in the US. What I am going to do is create a new subdomain, id.ikt.co.id, that serves EXACTLY the same content but is hosted on our Indonesian server. Whenever people go to any page within ikt.co.id, it will detect their country; if they are from Indonesia, I will redirect them to our Indonesian server. OK, from an SEO point of view I know there are a couple of problems, such as content duplication, and perhaps there are more. I think to handle the content duplication I can canonical all URLs on id.ikt.co.id to the ikt.co.id version instead. The same goes for social sharing: all links shared will be the ones from ikt.co.id, so all of that link juice goes to ikt.co.id, and for good measure I can also set robots.txt to tell them not to index id.ikt.co.id. It all sounded good to me, until I became paranoid and started thinking, "Have I missed anything that might hurt my SERPs?" So here is the question: did I miss something important? If I did, could you please tell me what it is and, if possible, bring a solution you think might work into this discussion? Again, thanks a lot for your help 😃
Intermediate & Advanced SEO | IKT
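A minimal sketch of the cross-domain canonical proposed in the question above, as it would appear on the Indonesian mirror (the path is illustrative):

```html
<!-- In the <head> of http://id.ikt.co.id/some-page -->
<link rel="canonical" href="http://ikt.co.id/some-page" />
```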