Technical Automated Content - Indexing & Value
-
One of my clients provides financial analysis tools that generate automated content daily for a set of financial derivatives. Essentially, they use technical analysis to estimate whether a particular share price will go up or down during the day, along with its support and resistance levels.
These tools are fairly popular with visitors; however, I'm not sure about the 'quality' of the content from Google's perspective. They keep an archive of these pages, which tallies up to nearly 100 thousand, and what bothers me in particular is that the content varies only slightly between them.
Textually, there are perhaps 10-20 different phrases describing the day's movement; otherwise the page structure is identical, apart from the values expected to be reached each day. They believe users may find it useful to access back-dated information to see what happened in the past. The main issue is that there are currently no backlinks at all to any of these pages, and I assume Google could deem them 'shallow' pages that provide little content and become irrelevant as time passes. I'm also not sure whether this could cause a duplicate content issue, although they already add a date to the title tag and to the content to differentiate each page.
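As an aside, the date-in-title approach is usually paired with a self-referencing canonical tag so that each daily page claims its own URL as authoritative. A minimal sketch (the site name and URL structure here are hypothetical, not the client's actual setup):

```html
<!-- Hypothetical daily analysis page: /derivative-analysis/2013-05-21/ -->
<head>
  <!-- Date in the title helps distinguish otherwise similar pages -->
  <title>Derivative Analysis for 21 May 2013 - ExampleFinance</title>
  <!-- Self-referencing canonical signals this is the authoritative URL
       for this day's content -->
  <link rel="canonical" href="http://www.examplefinance.com/derivative-analysis/2013-05-21/" />
</head>
```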
I'm not sure how I should handle these pages. Is it possible to have Google prioritize the 'daily' published one? Say I published one today: if I searched for "Derivative Analysis", I would want to see the page dated today rather than the 'list view' or any older analysis.
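One lever for steering Google toward the current day's page is an XML sitemap that flags it with a fresh `<lastmod>` and a high priority, while back-dated entries keep their original dates and a low priority. A sketch with hypothetical URLs:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- Today's analysis: fresh lastmod encourages prompt recrawling -->
  <url>
    <loc>http://www.examplefinance.com/derivative-analysis/2013-05-21/</loc>
    <lastmod>2013-05-21</lastmod>
    <changefreq>daily</changefreq>
    <priority>1.0</priority>
  </url>
  <!-- Back-dated entry: static lastmod, low priority -->
  <url>
    <loc>http://www.examplefinance.com/derivative-analysis/2013-05-20/</loc>
    <lastmod>2013-05-20</lastmod>
    <priority>0.3</priority>
  </url>
</urlset>
```

Priority and changefreq are hints rather than directives, but a regenerated sitemap submitted daily at least tells Google which URL is the current one.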
-
I would appreciate some more feedback. I'm looking to group some of these pages; from the 100k, we're bringing it down to around 33k.
As regards comments, I'm not sure that's very feasible: from the research we did, not many people visit back-dated entries, so it's highly doubtful we'd receive many comments, if any.
-
Right, I guess that's true, as we still rank for other terms. However, there are concerns that this could affect the domain's overall authority (I don't think that's the case). We've decided to drop at least a third of these 'automated pages' by loading them via AJAX; this way there should be a bit less of this content in the Google index.
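Worth noting that Google can execute JavaScript and may still discover and index AJAX-loaded content, so hiding pages behind AJAX is not a reliable way to keep them out of the index. A more explicit approach, if the goal is simply de-indexing the archive, is a robots meta tag on each back-dated page (a sketch, not the client's actual markup):

```html
<!-- On each back-dated archive page: keep it out of the index,
     but still let crawlers follow its links -->
<meta name="robots" content="noindex, follow" />
```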
-
If certain areas of the website have duplicate content, Google will only ignore the specific pages that contain the duplication; the effect will never extend to the complete website!
-
I don't necessarily want all the content to be deemed unique; what I'm more interested in is making sure this content does not penalize the rest of the website. It's fine if Google ignores it once it's more than a week or two old. What we don't want is old results coming up when today's value is far more relevant.
I'd be happy if Google would prioritize the 'daily' posts more in terms of freshness.
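For the "fine if it's ignored after a week or two" requirement, Google also supports an `unavailable_after` robots directive, which asks it to stop showing a page in results after a stated date. Since the pages are generated automatically, each one could declare its own expiry. A sketch (the date shown is hypothetical; the directive expects an RFC 850 style timestamp):

```html
<!-- Ask Googlebot to stop showing this page in results one week
     after publication -->
<meta name="googlebot" content="unavailable_after: 28-May-2013 00:00:00 GMT" />
```

This keeps today's page eligible for freshness-sensitive queries while letting the archive age out of the results on a schedule you control.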
-
In my personal opinion, slightly varied content can still count as duplicate content, mainly because the major percentage of content on the different pages is the same.
Given how the content is generated, I don't think there is a way to change the pages so that each becomes unique, and manually adding unique content to each page is not a very good idea when there are around 100 thousand pages, as you said earlier!
If I were in your place, I would add a comment section below the content so that users who are interested can share their experience: how the data helped them and what actually happened in the market. This user-generated content would help make the upcoming pages unique.
This idea will help, to an extent, to give new life to old pages, but making all the pages unique is next to impossible in my view!
Obviously, these are just my suggestions, but I would love to hear what others would do in a similar situation!