What would be considered a bad ratio to determine Index Bloat?
-
I am using Annie Cushing's most excellent site audit checklist from Google Docs. My question concerns Index Bloat because it is mentioned in her "Index" tab.
We have 6,595 indexed pages, and only 4,226 of those pages have received one or more visits since January 1, 2013.
Is this an acceptable ratio? If not, why not, and what would be an acceptable ratio? I understand the basic concept that "dissipation of link juice and constrained crawl budget can have a significant impact on SEO traffic." [Thanks to Reid Bandremer: http://www.lunametrics.com/blog/2013/04/08/fifteen-minute-seo-health-check/]
If we make this an action item I'd like to have some idea how to prioritize it compared to other things that must be done. Thanks all!
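For context, the numbers in the question work out like this (a quick back-of-the-envelope calculation, nothing more):

```python
indexed_pages = 6595
pages_with_visits = 4226  # pages with at least one visit since January 1, 2013

no_visit_pages = indexed_pages - pages_with_visits
no_visit_share = no_visit_pages / indexed_pages
print(f"{no_visit_pages} pages ({no_visit_share:.1%}) received zero visits")
```

So a bit more than a third of the indexed pages earned no traffic at all over roughly eleven months, which is the figure the answers below are reacting to.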
-
Hi EGOL,
Wow, thank you so very much. This is one of the best answers I've ever received here in Q&A, probably the best. Your thoughtful comments and suggestions are so appreciated. Honestly, you gave me a checklist of things that have the potential to be pure gold for us if we act on them.
Yes, you are correct, this is the site that had many issues with content being under tabs. It also has a tremendous amount of duplicate and thin content, in addition to orphaned pages. Progress has been coming along, slowly but surely, but having your comments, and having them be so specific, pointed, and concise, is something I can take to my team and say, "Here's an awesome checklist of things that we can actually address right now, without re-platforming the site [you know, there are always people who think that the root of all a site's problems is the platform it's on... pure mythology]."
I hope many others find your checklist useful. Combined with Annie's audit spreadsheet in Google Docs, I feel like I have the tools I need to go to battle and help this site fulfill its potential. Nearly every point you mentioned struck a chord. Better yet, now that I know my way around the "guts" of this homegrown CMS, I feel like I can actually make the necessary changes.
EGOL, I really can't thank you enough.
-
I totally agree, Keri. Every word EGOL wrote is, to me, worth its weight in gold. I think this may be the best response I have ever received here in Q&A.
-
If only people realized how much good information members drop in Q&A...
Once again, thanks for this EGOL!
-
From my experience, that is a frightening number of pages that have not received a visit, and I would definitely be taking some type of action. This strikes me as a site in very bad health. I have lots of little pages on a weak little site, and even they have gotten more traffic than none since January. This would be high on my priority list of things to solve. Solving it could bring major income, so this is as much a potential opportunity as it is a problem.
To diagnose, I would check the following. I know you, and I suspect you have looked at all of these already, but I'm making a list just in case.
A) Duplicate content problem? Does this site have lots of pages that are very similar to other pages on the same site? Does the company have another site that is running the same product descriptions? Does the site run product descriptions from a datafeed that is also supplied to vendors? Are affiliates using the same content? Have other websites stolen the content?
B) Have you been scraped and republished by a strong website? Just one is all it would take. A strong site was once scraping and republishing some of my short content pages, and that killed the traffic into a section of my site. As soon as I asked them to stop, traffic was back within days. One site can hurt you like that, or numerous small sites; even minor sites in Asia can do this.
C) Lots of thin content? Do you have a lot of pages that might only have two or three unique sentences? Google could be disrespecting your entire site because of this.
D) Technical problem? I would be looking at robots.txt and .htaccess, noindex tags, badly coded links, and a content management system causing duplicated title tags or other problems. Also check for faulty analytics that make it look like these pages are not getting traffic when really they are.
E) Content cannibalization? Lots of separate pages for red widgets that are being filtered from the SERPs.
F) Inadequate linkjuice? This is not a huge site, but it's not a small one either. Does it have a nice amount of linkjuice coming in?
G) Does this site have pages that are really deep down in the link structure? Many clicks down? Fix that either with a new link structure or some kickass powerful links that hit nodes deep in the site to force spiders down. I would solve it with link structure.
H) This isn't the site that had all of the content behind tabs that I remember from a while ago, is it? (My memory is really bad, so it might not even be your site.) If you have pages like that, I would get rid of those tabs immediately. I have a personal opinion that Google does not treat content hidden behind tabs as well as content that is out in the open.
I) Are there a lot of other sites, strong ones, publishing very similar pages, like product description pages, competing for the same keywords? If that is the case, you could be crowded out of the SERPs and receiving no traffic on these pages.
J) Does this site have a bad history? Does it have something that might be causing a penalty or filtering?
After doing all of that, you might have something that is really worth fixing. If you can't identify the problem, I would be slashing and hatcheting those pages from the site right away.
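To work through item D at scale, it can help to script the check rather than open pages one by one. Here's a minimal sketch of my own (not from the thread) that flags HTML carrying a noindex meta robots tag, one of the most common accidental CMS misconfigurations; fetching the actual pages from your URL list is left to you:

```python
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collects the content of any <meta name="robots"> tags in a page."""
    def __init__(self):
        super().__init__()
        self.robots_directives = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and (attrs.get("name") or "").lower() == "robots":
            self.robots_directives.append((attrs.get("content") or "").lower())

def is_noindexed(html: str) -> bool:
    """True if the page HTML carries a noindex robots directive."""
    parser = RobotsMetaParser()
    parser.feed(html)
    return any("noindex" in d for d in parser.robots_directives)

# Example: a page accidentally noindexed by the CMS
page = '<html><head><meta name="robots" content="noindex,follow"></head></html>'
print(is_noindexed(page))  # True
```

Run something like this across the zero-visit URLs and you can quickly separate "blocked from indexing" from the content-quality problems in items A, C, and I. A fuller version would also check the `X-Robots-Tag` HTTP header and test each URL against robots.txt.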