Google Indexed My Site, Then De-indexed It a Week Later
-
Hi there,
I'm working on getting a large e-commerce website indexed and I am having a lot of trouble.
The site is www.consumerbase.com. We have about 130,000 pages, and only 25,000 are getting indexed.
I use multiple sitemaps so I can tell which product pages are indexed. The pages we need indexed most are our "Mailing List" pages, e.g. http://www.consumerbase.com/mailing-lists/cigar-smoking-enthusiasts-mailing-list.html
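As a side note for anyone replicating this setup: the per-sitemap submitted and indexed counts can also be pulled programmatically. Below is a rough sketch, assuming Python with the Google API client and the legacy Search Console (Webmasters v3) API, which is the version that exposes a per-sitemap indexed count; the property URL and credential file are placeholders.

```python
# Rough sketch: pull submitted vs. indexed counts for each sitemap of a
# verified Search Console property. Assumes the legacy Webmasters v3 API
# (the version that exposes per-sitemap "indexed" counts) and a service
# account with access to the property. SITE_URL and the credentials file
# are placeholders.
from google.oauth2 import service_account
from googleapiclient.discovery import build

SITE_URL = "http://www.consumerbase.com/"  # the verified property

creds = service_account.Credentials.from_service_account_file(
    "service-account.json",  # placeholder credential file
    scopes=["https://www.googleapis.com/auth/webmasters.readonly"],
)
service = build("webmasters", "v3", credentials=creds)

# One entry per submitted sitemap; segmenting sitemaps by page type is
# what makes these counts useful for spotting which sections drop out.
response = service.sitemaps().list(siteUrl=SITE_URL).execute()
for sitemap in response.get("sitemap", []):
    for contents in sitemap.get("contents", []):
        print(sitemap["path"],
              "submitted:", contents.get("submitted"),
              "indexed:", contents.get("indexed"))
```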
I submitted a sitemap for a particular type of product page a few weeks ago, and about 40k of the 43k pages were indexed - GREAT! Then, a week ago, Google de-indexed almost all of those new pages. Check out this image; it kind of boggles my mind and makes me sad: http://screencast.com/t/GivYGYRrOV
While these pages were indexed, we immediately received a ton of traffic to them - making me think Google liked them.
I think our breadcrumbs, site structure, and "customers who viewed this product also viewed" links would make the site extremely crawl-able.
What gives?
Does it come down to our site not having enough Domain Authority?
My client really needs an answer about how we are going to get these pages indexed.
-
Sorry, I only took a brief look at your site, but my first impression was that there is very little content and a lot of outgoing links.
When I try to rank keyword pages I use a minimum of 500 words within the content area, but mostly aim for 700-1,000. Of course this is not always possible, but you have a lot of duplicate pages, plus very little text on each of those pages. I have tried to rank local SEO keywords with very similar content like that, and I find it takes a lot longer to get indexed and the pages may even bounce in and out of the SERPs for a while.
Gaining more authority will help, as will adding more text, since you can work in more keyword variations without the keyword density appearing spammy. Ultimately, without unique content you have to be prepared to wait a while for them all to be indexed, and even then there is no guarantee (although better site authority should help).
It might also be worth checking your sitemap structure. Each sitemap file is limited to 50,000 URLs, so you may need to split yours over several files, although some plugins do this automatically.
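If you do need to split, the sketch below shows the idea, assuming plain Python; the file names, base URL, and input file are illustrative, and the sitemap index ties the pieces together so you only submit one URL.

```python
# Minimal sketch: split a flat list of URLs into sitemap files of at most
# 50,000 URLs each and tie them together with a sitemap index, so only one
# URL needs to be submitted. File names, the base URL, and the input file
# ("all-urls.txt", one URL per line) are illustrative.
from datetime import date
from xml.sax.saxutils import escape

BASE = "http://www.consumerbase.com"
CHUNK = 50_000  # Google's per-file URL limit

def write_sitemap(filename, urls):
    with open(filename, "w") as f:
        f.write('<?xml version="1.0" encoding="UTF-8"?>\n')
        f.write('<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n')
        for url in urls:
            f.write(f"  <url><loc>{escape(url)}</loc></url>\n")
        f.write("</urlset>\n")

def write_index(filename, sitemap_names):
    today = date.today().isoformat()
    with open(filename, "w") as f:
        f.write('<?xml version="1.0" encoding="UTF-8"?>\n')
        f.write('<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n')
        for name in sitemap_names:
            f.write(f"  <sitemap><loc>{BASE}/{name}</loc>"
                    f"<lastmod>{today}</lastmod></sitemap>\n")
        f.write("</sitemapindex>\n")

urls = [line.strip() for line in open("all-urls.txt") if line.strip()]
names = []
for i in range(0, len(urls), CHUNK):
    name = f"sitemap-{i // CHUNK + 1}.xml"
    write_sitemap(name, urls[i:i + CHUNK])
    names.append(name)
write_index("sitemap-index.xml", names)
```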
-
Thanks for the response. Is there a particular number of words per page or text-to-code ratio I should strive for?
-
Many of your pages are very similar, with just changes in keywords and headings. I appreciate that with that many pages it will be hard to make them more unique, but as a result it may take longer for them to be indexed, if they are indexed at all.
There is also not much text on each of the pages, so I wouldn't class the content as particularly good quality (in terms of what Google sees), which may also hinder the chances of getting indexed. And two weeks for a site like that is too soon to tell whether Google will keep them indexed; an initial burst of indexing across a lot of pages, followed by some of them dropping out, is quite usual.
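There is no official number, but if you want to measure it yourself, something like the following rough sketch would do it, assuming Python with requests and BeautifulSoup; the URL is just the mailing-list page mentioned above.

```python
# Quick sketch: count the visible words on a page and estimate its
# text-to-code ratio. Assumes requests and beautifulsoup4 are installed;
# there is no official target ratio, so use this to compare your own
# pages against each other rather than against a fixed number.
import requests
from bs4 import BeautifulSoup

url = ("http://www.consumerbase.com/mailing-lists/"
       "cigar-smoking-enthusiasts-mailing-list.html")
html = requests.get(url, timeout=30).text

soup = BeautifulSoup(html, "html.parser")
for tag in soup(["script", "style", "noscript"]):
    tag.decompose()  # strip non-visible code before counting text

text = " ".join(soup.get_text(separator=" ").split())
print("visible words:     ", len(text.split()))
print(f"text-to-code ratio: {len(text) / len(html):.1%}")
```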
Short of changing the pages, all I can suggest is build up your site's authority.
Related Questions
-
Best Practice Approaches to Canonicals vs. Indexing in Google Sitemap vs. No Follow Tags
Hi There, I am working on the following website: https://wave.com.au/ I have become aware that there are different pages competing for the same keywords. For example, I just started to update a core category page, Anaesthetics (https://wave.com.au/job-specialties/anaesthetics/), to focus mainly on the keywords ‘Anaesthetist Jobs’. But I have recognized that there are existing landing pages that contain pretty similar content: https://wave.com.au/anaesthetists/ https://wave.com.au/asa/ We want to direct organic traffic to our core pages, e.g. https://wave.com.au/job-specialties/anaesthetics/. This leads me to have to deal with the duplicate pages, either with a canonical link (content manageable) or alternatively by adding a no-follow tag or updating the robots.txt. Our resident developer also suggested that it might be good to use Google Index in the sitemap to tell Google that these pages are of less value. What is the best approach? Should I add a canonical link to the landing pages pointing to the category page? Or alternatively, should I use the Google Index? Or even another approach? Any advice would be greatly appreciated. Thanks!
Intermediate & Advanced SEO | Wavelength_International
-
Can Google Crawl & Index my Schema in CSR JavaScript
We currently have only one option for implementing our schema: it is populated in JSON that is rendered by JavaScript on the client side. I've heard tons of mixed reviews about whether this will work. Does anyone know for sure whether it will or will not? Also, how can I build a test to see if it does or does not work?
Intermediate & Advanced SEO | MJTrevens
-
Just moved to CDN and site dropped in Google
Hi there, I have been modifying a client's site for months now, trying to get higher up in Google for the term "wedding dresses essex" on the website https://www.preciousmomentsbridalwear.co.uk/ It has always ranked around 7th/8th place, and ideally we want to try to get it into 4th/5th position. I have optimised pages, and then, because the site speed was not great, we moved it to MaxCDN this week, which has made the site much faster. But now we have dropped to number 10 in Google and are in danger of dropping off the first page. I was hoping that making the site much faster for desktop and mobile would help, not hinder! Any help would be appreciated! Simon
Intermediate & Advanced SEO | Doublestruck
-
Any way to force a URL out of Google index?
As far as I know, there is no way to truly FORCE a URL to be removed from Google's index. We have a page that is being stubborn. Even after it was 301 redirected to an internal secure page months ago and a noindex tag was placed on it in the backend, it still remains in the Google index. I also submitted a request through the remove outdated content tool https://www.google.com/webmasters/tools/removals and it said the content has been removed. My understanding though is that this only updates the cache to be consistent with the current index. So if it's still in the index, this will not remove it. Just asking for confirmation - is there truly any way to force a URL out of the index? Or to even suggest more strongly that it be removed? It's the first listing in this search https://www.google.com/search?q=hcahranswers&rlz=1C1GGRV_enUS753US755&oq=hcahr&aqs=chrome.0.69i59j69i57j69i60j0l3.1700j0j8&sourceid=chrome&ie=UTF-8
Intermediate & Advanced SEO | MJTrevens
-
Google Panda question: Category Pages on an e-commerce site
Dear Mates, Could you check this category page of our e-commerce site, http://tinyurl.com/zqjalng, and give me your opinion: is this a Panda-safe page or not? At the moment I have it set to NOINDEX, preventing any Panda hit, but I'm in doubt. My question is, "Can I index this page again in peace?" Thank you Clay
Intermediate & Advanced SEO | ClayRey
-
Client has moved to secure https webpages, but non-secure http pages are still being indexed in Google. Is this an issue?
We are currently working with a client that relaunched their website two months ago to serve hypertext transfer protocol secure (https) pages across their entire site architecture. The problem is that their non-secure (http) pages are still accessible and being indexed in Google. Here are our concerns:
1. Are co-existing non-secure and secure webpages (http and https) considered duplicate content?
2. If these pages are duplicate content, should we use 301 redirects or rel canonicals?
3. If we go with rel canonicals, is it okay for a non-secure page to have a rel canonical to the secure version?
Thanks for the advice.
Intermediate & Advanced SEO | VanguardCommunications
-
How does Google index pagination variables in Ajax snapshots? We're seeing random huge variables.
We're using the Google snapshot method to index dynamic Ajax content. Some of this content comes from tables that use pagination. The pagination is tracked with a var in the hash, something like: #!home/?view_3_page=1 We're now seeing all sorts of calls from Google with huge numbers for these URL variables that we are not generating with our snapshots, like this: #!home/?view_3_page=10099089 These aren't trivial, since each snapshot represents a server load, so we'd like these vars to only represent what's returned by the snapshots. Is Google generating random numbers to go fishing for content? If so, is this something we can control or minimize?
Intermediate & Advanced SEO | sitestrux
-
Webpages look like they have been de-indexed
Hi there, My webpages seem to have been de-indexed: I no longer have any PageRank for my pages, my homepage, which was a PR4, now says N/A, and lots of my rankings have dropped. What checks should I be making to identify whether this is the case? Kind Regards
Intermediate & Advanced SEO | Paul78