Rotating Content Concern on Deep Pages
-
Hi there,
I apologize if I'm too vague, but this is a tough issue to describe without divulging too much of our project.
I'm working on a new project that will present information results in sets of 3. Let's say someone wants to find 3 titles that match their criteria, either through an organic search that leads them to us or through our site's internal search.
For instance, if they're looking for classic movies involving monsters, we might display Frankenstein, Dracula, and The Mummy. We'd list unique descriptions of the movies and include lots of other useful information.
However, there are obviously many more monster movies than those 3, so when a user refreshes the page or visits it again, a different set of results shows up. For this example, assume we have 5 results to choose from, so Google will likely index different results shuffled around.
I'm worried about this causing problems down the line with ranking. The meat and potatoes of the page content are the descriptions and information on the movies. If these are constantly changing, I'm afraid the page will look "unstable" to Google since we have no real static content beyond a header and title tag.
Can anyone offer any insight into this?
Thanks!
-
Thanks for the response. The issue of "hiding" the content with the randomization was a fear of mine. Believe me, I don't like the rotating content design, but it's where we're at right now.
Think specific businesses as the 3 search results; for user experience reasons, only 3 will be shown at once. That's not something we can change, unfortunately. If more than 3 exist in a specific business category, we'll be rotating them out (which I don't like) upon refresh.
The only solution I can think of is to have the top 3 remain static and allow the user to click a "Show more" button which loads them beneath (or replaces the original 3). Either way, Google shouldn't have an issue with that, correct?
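To make that "Show more" idea concrete, here's a minimal sketch (not from the thread; the `Match` type and all names are illustrative). Every match is rendered into the page source so crawlers can see the full set, and the button only changes which slice the user currently sees:

```typescript
// Illustrative type for a single result; not the site's actual data model.
interface Match {
  title: string;
  description: string;
}

// Returns the matches the user should currently see. The full list is
// always present in the page's HTML; this only controls visibility.
function visibleMatches(all: Match[], expanded: boolean): Match[] {
  return expanded ? all : all.slice(0, 3);
}

// The thread's 5-movie example pool.
const monsters: Match[] = [
  { title: "Frankenstein", description: "The doctor and his creation." },
  { title: "Dracula", description: "The count from Transylvania." },
  { title: "The Mummy", description: "An ancient curse awakens." },
  { title: "The Wolf Man", description: "A man cursed at the full moon." },
  { title: "The Invisible Man", description: "A scientist turns unseen." },
];

console.log(visibleMatches(monsters, false).length); // 3 shown initially
console.log(visibleMatches(monsters, true).length);  // 5 after "Show more"
```

Because the hidden items exist in the markup and the user can reveal them on command, this matches the pattern Google generally tolerates for click-to-expand content.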
I know there are "better" ways to accomplish what we're asking, but the site is custom built and nearly 95% complete. We are also taking a unique approach to the way we display results and serve them to our clients, so the truly optimal route isn't achievable at this point. It's basically about finding the best option among what we can actually do, if that makes sense. Thanks for understanding!
-
Sorry, I've been mega busy.
First of all, never hide content from Google if a user is unable to view that information. You will get slapped for it. Even if the algorithm doesn't pick it up, someone will report it at some point. That's a bad foundation to start from.
What you're trying to do is hard to picture fully, which I think is why others in this forum haven't responded.
You need to describe exactly what will be on this page and why, what will be on other pages, and why those pages need to be indexed. That way we can work out whether the strategy you're taking is even the right one. There is likely a better way to do it.
-
Hi Gary,
Were you notified of my follow-up posts? I'd love to hear additional information from you.
Thanks a lot!
-
Hi Gary,
You wrote: "One thing you could try is loading all the matches onto a page, showing only the top 3 with an option to reveal more, and marking all the code up with schema. This way the content will always be on the page and crawlable by Googlebot."
This is the idea I've been toying with. Do you know if we could preload all matches/results and still use the refresh? It'd technically (I think) be different, because the user can't load more on command, like with a button, but Google can see them.
I feel like it's a little iffy, since Google seems to only approve of hidden text when the user controls whether they see it. Any idea?
Thanks again!
-
Lesley,
Thanks for the response.
If we scripted the page so Google would ignore the content, I'm afraid we'd be in nearly the same boat we're in now: no content on the page and nothing to rank for.
While it would effectively "solve" the potential rotating-content issues and penalties, we wouldn't have anything to rank for.
Gary,
Thanks for the helpful response!
1. How would we run into internal duplicate content issues? These 3 results (in full) would only be found on this specific page; they'd just be rotating.
I will say that the way these results pages are structured includes snippets of content that can be found on each result's individual page, e.g., a snippet of Frankenstein's plot will show on the results page and, once clicked, will show the full entry. So there's going to be some duplicate content. That shouldn't be a huge deal, though?
2. That's exactly the reason I hate this. Even if Google didn't get pissed, we wouldn't have static content (keywords, longtails) to build authority and rank for.
Idea #1: I actually have this principle written down, but slightly different. If we had a JavaScript link at the bottom of the results to "shuffle" or "refresh" the content, the user would get the benefit, but since it's not a new page, Google couldn't crawl it. So the results would only randomize on command and stick with the initial 3 on pageload.
I was also toying with the idea of locking 2 of the results and only shuffling the 3rd, that way there's some semblance of continuity to the indexing and we'd always be working towards that content ranking. Thoughts?
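A hedged sketch of that "lock 2, rotate 1" idea (the function and variable names are illustrative, not from the actual site, and it assumes the pool holds more than 2 entries): the first two slots stay fixed on every pageload so the page keeps stable, indexable content, and only the third slot rotates through the remaining candidates.

```typescript
// Pick 3 results: slots 1 and 2 are locked, slot 3 rotates randomly.
function pickResults<T>(pool: T[]): T[] {
  const locked = pool.slice(0, 2); // always shown, stable for indexing
  const rotating = pool.slice(2);  // candidates for the third slot
  const third = rotating[Math.floor(Math.random() * rotating.length)];
  return [...locked, third];
}

const pool = [
  "Frankenstein", "Dracula", "The Mummy", "The Wolf Man", "The Invisible Man",
];
console.log(pickResults(pool)); // e.g. ["Frankenstein", "Dracula", "The Wolf Man"]
```

The trade-off is visible in the code: two-thirds of the page's "meat" content is now the same on every crawl, while the rotation the design requires survives in the final slot.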
Are you saying that with schema markup we can "hide" the additional/rotated results from the user initially while Google sees them immediately? If so, please elaborate or send me a link, because this is interesting!
Idea #2: The snippets actually lead/link to the items' static pages on their own URLs (this is the only duplicate content, I believe), so that's fine. But yes, we aren't concerned with the static pages ranking, only the grouped-together results.
-
You run into a number of issues by having these pages indexed.
1. Lots of internal duplicate content. Google has said this is not a problem and that they will serve up the best result, but it can trigger Panda issues.
2. The content always changes, so you will confuse Googlebot and struggle to rank for specific terms for any period of time (your SERPs would fluctuate like crazy or trigger a quality algorithm).
Some ideas:
One thing you could try is loading all the matches onto a page, showing only the top 3 with an option to reveal more, and marking all the code up with schema. This way the content will always be on the page and crawlable by Googlebot.
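As a rough illustration of the markup side (a sketch, not the poster's actual implementation), the full result set could be described with a schema.org ItemList emitted as JSON-LD, so Googlebot always sees the same five items even when the UI shows only three. The `@type`/`position`/`itemListElement` property names follow schema.org; the data is this thread's movie example:

```typescript
// Build a schema.org ItemList describing ALL matches, not just the
// three currently displayed. Embedded as JSON-LD, this keeps the full
// set crawlable regardless of what the UI hides.
function buildItemList(titles: string[]): { itemListElement: object[] } {
  return {
    "@context": "https://schema.org",
    "@type": "ItemList",
    itemListElement: titles.map((name, i) => ({
      "@type": "ListItem",
      position: i + 1, // schema.org positions are 1-based
      item: { "@type": "Movie", name },
    })),
  } as { itemListElement: object[] };
}

const jsonLd = buildItemList([
  "Frankenstein", "Dracula", "The Mummy", "The Wolf Man", "The Invisible Man",
]);
// This string would go inside a <script type="application/ld+json"> tag.
console.log(JSON.stringify(jsonLd, null, 2));
```

Structured data describes the content to Google; it does not by itself make hidden content rank, so this pairs with the reveal-more pattern rather than replacing it.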
Another option is not to index these pages at all and create static pages for each item instead. But this defeats the purpose of what you are trying to rank for.
Serving up random content is always going to be an issue for Googlebot, but more and more webmasters have responsive designs that hide and show content based on clickable actions on pages. Googlebot indexes all the content but is smart about working out what is also visible to the user and gives preference to it.
-
In my opinion, the safest way to do it would be to load the contents in a discrete iframe. The reason is that Google would ignore it, putting it on par with Twitter widgets and Facebook Like boxes.