How come Google indexes 2,500 pages one day, only 150 the next, then 2,000 again, etc.?
-
This is about a big affiliate website belonging to one of our customers, running on datafeeds...
Bad things about datafeeds:
- Duplicate content (product descriptions)
- Very many (thin) product pages (sometimes better to noindex, I know, but this customer doesn't want to do that)
-
Hi Dana,
Thanks for your detailed explanation. I appreciate it. Of course I understand that site speed is a factor for crawling (and ranking) and that the Google bots only want to spend a certain amount of time on a website. It's more that, when the servers perform almost identically every day, so page loads are equal too, what else could it be?
I agree with your two points to consider, but I'm the type of guy who always wants to know why something is happening.
@Nakul: Thanks for your response!
The pages that are going in and out of the index are mostly product pages, so the point about frequent updates could be part of it. The website is pretty young, so its authority is not yet built up as it should be for a big site. This can also be a factor, because the more authority a site has, the more time Google will spend indexing it, right? Anyway, many thanks to both of you for your answers!
Regards, Wesley
-
I agree with everything Nakul has said. Just to piggyback on that with additional information, try to think about it this way. Remember when someone gave you $1.00 when you were little and said "Don't spend it all in one place?" Well, someone at Google must have grown up with the same grandparents I did.
Okay, now for the analogy-free explanation:
Google has a "crawl budget" every day, and every day that budget is allocated across millions of different sites. Now, by "sites" I mean "pages." Some pages change really frequently (e.g., the Yahoo News homepage). Some pages hardly ever change (e.g., an archived blog post). Some pages have very high PageRank and others, not so much. And some pages load extremely fast, consuming less of Google's bandwidth when they are crawled, which leaves Google more resources to crawl more pages. Google likes that, and so should we all, because people with fast sites make it possible for everyone to get crawled more often (in essence, making them very considerate, well-behaved members of the Internet community).
So, based on all of this, Google apportions part of its crawl budget to your site on any given day. Some days it may have more room in its budget for you than others. Part of this might be affected by how fast your pages load on a given day. A ton of parameters can come into play here, including whether the pages crawled that day are heavier, or whether your servers are performing faster on one day than another.
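To make those mechanics concrete, here is a toy sketch of the idea, with every number invented for illustration and emphatically not Google's actual algorithm: pages are prioritized by how often they change and how much PageRank they have, and faster-loading pages consume less of a fixed daily budget.

```python
# A toy illustration (not Google's actual algorithm) of crawl-budget
# allocation: rank pages by change frequency and PageRank, then spend a
# fixed daily budget, where faster-loading pages cost less of it.
pages = [
    # (url, changes_per_week, pagerank, load_seconds) -- all made up
    ("news-homepage", 70, 0.9, 0.4),
    ("product-page", 1, 0.3, 1.2),
    ("archived-post", 0.1, 0.2, 0.8),
]

DAILY_BUDGET = 2.0  # abstract "seconds of crawler time" for this site

# Higher change rate and PageRank -> crawled first.
queue = sorted(pages, key=lambda p: p[1] * p[2], reverse=True)

spent, crawled = 0.0, []
for url, _, _, load_seconds in queue:
    if spent + load_seconds > DAILY_BUDGET:
        break  # budget exhausted; remaining pages wait for another day
    spent += load_seconds
    crawled.append(url)

print("crawled today:", crawled)  # the slowest page misses the cut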
I'd say the two things to be really concerned about, after considering all of this, are:
- Is Google indexing all of the pages you want indexed?
- Is Google's cache date for your important pages recent enough (e.g., three weeks or less)?
If the answer is "no" to either of those, then it's time to do some investigating to find out whether there are technical issues, or penalties that have been put in place, that are hurting Google's ability or desire (not the right word to use about a bot, but I'm using it anyway) to crawl your pages.
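One practical place to start that investigation is your own server logs. A minimal sketch in Python, assuming a standard combined-format Apache/Nginx access log (the log path and URL list are hypothetical; adjust the regex to match your server's actual format):

```python
import re
from datetime import datetime

LOG_PATH = "/var/log/nginx/access.log"  # hypothetical path
IMPORTANT_URLS = {"/", "/products/", "/category/widgets/"}  # hypothetical paths

# Matches the common "combined" log format; tweak for your own server.
line_re = re.compile(
    r'\S+ \S+ \S+ \[(?P<ts>[^\]]+)\] "(?:GET|HEAD) (?P<path>\S+)[^"]*" '
    r'\d+ \S+ "[^"]*" "(?P<ua>[^"]*)"'
)

last_crawled = {}
with open(LOG_PATH) as log:
    for line in log:
        m = line_re.match(line)
        if not m or "Googlebot" not in m.group("ua"):
            continue
        ts = datetime.strptime(m.group("ts"), "%d/%b/%Y:%H:%M:%S %z")
        path = m.group("path")
        if path in IMPORTANT_URLS:
            # Keep the most recent Googlebot visit per URL.
            last_crawled[path] = max(ts, last_crawled.get(path, ts))

for path in sorted(IMPORTANT_URLS):
    print(path, "->", last_crawled.get(path, "never crawled in this log"))
```

Note that the user-agent string alone can be spoofed; see the reverse-DNS verification sketch further down if you need to be sure a hit really came from Google.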
Does that help?
-
Domain Authority / PageRank is what Google looks at to decide how deep and how frequently it will crawl a particular website. Google also typically looks at how frequently the content is updated.
Think about it from Google's perspective: why should they index 2,500 pages of that website every day? What's changing? Does the site have enough domain authority to warrant that kind of crawling?
In my opinion, this is not a concern. Just submit XML sitemaps and see what percentage of your submitted pages gets indexed.
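For reference, a hedged sketch of how you might pull those percentages programmatically via the Search Console API (the successor to Webmaster Tools). It assumes you have a service-account JSON key with access to the property; the per-sitemap indexed count has been deprecated in newer versions of the API, so treat this as illustrative rather than guaranteed:

```python
from google.oauth2 import service_account
from googleapiclient.discovery import build

SITE_URL = "https://www.example.com/"  # hypothetical property

creds = service_account.Credentials.from_service_account_file(
    "service-account.json",  # hypothetical key file
    scopes=["https://www.googleapis.com/auth/webmasters.readonly"],
)
service = build("searchconsole", "v1", credentials=creds)

# List every submitted sitemap and compare submitted vs. indexed counts.
response = service.sitemaps().list(siteUrl=SITE_URL).execute()
for sitemap in response.get("sitemap", []):
    for contents in sitemap.get("contents", []):
        submitted = int(contents.get("submitted", 0))
        indexed = int(contents.get("indexed", 0))
        pct = 100.0 * indexed / submitted if submitted else 0.0
        print(f"{sitemap['path']}: {indexed}/{submitted} indexed ({pct:.1f}%)")
```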
Related Questions
-
Shopify Website Page Indexing issue
Hi, I am working on an eCommerce website on Shopify.
Intermediate & Advanced SEO | Bhisshaun
When I tried indexing my newly created service pages, they did not get indexed on Google. I also tried manually indexing each page and submitted a sitemap, but the issue still doesn't seem to be resolved. Thanks
Removing Parameterized URLs from Google Index
We have duplicate eCommerce websites, and we are in the process of implementing cross-domain canonicals. (We can't 301 - both sites are major brands). So far, this is working well - rankings are improving dramatically in most cases. However, what we are seeing in some cases is that Google has indexed a parameterized page for the site being canonicaled (this is the site that is getting the canonical tag - the "from" page). When this happens, both sites are being ranked, and the parameterized page appears to be blocking the canonical. The question is, how do I remove canonicaled pages from Google's index? If Google doesn't crawl the page in question, it never sees the canonical tag, and we still have duplicate content. Example: A. www.domain2.com/productname.cfm%3FclickSource%3DXSELL_PR is ranked at #35, and B. www.domain1.com/productname.cfm is ranked at #12. (yes, I know that upper case is bad. We fixed that too.) Page A has the canonical tag, but page B's rank didn't improve. I know that there are no guarantees that it will improve, but I am seeing a pattern. Page A appears to be preventing Google from passing link juice via canonical. If Google doesn't crawl Page A, it can't see the rel=canonical tag. We likely have thousands of pages like this. Any ideas? Does it make sense to block the "clicksource" parameter in GWT? That kind of scares me.
Intermediate & Advanced SEO | AMHC
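One way to audit a situation like this at scale is to fetch each "from" page and confirm the canonical tag is actually present in the served HTML (if it is injected by JavaScript, Google may not see it reliably). A minimal sketch using the question's example URLs (the %3F and %3D in the indexed URL decode to ? and =), assuming requests and beautifulsoup4 are installed:

```python
import requests
from bs4 import BeautifulSoup

# Hypothetical example URLs from the question, with the encoded
# characters decoded for fetching.
pages = [
    "http://www.domain2.com/productname.cfm?clickSource=XSELL_PR",
    "http://www.domain1.com/productname.cfm",
]

for url in pages:
    resp = requests.get(url, timeout=10)
    soup = BeautifulSoup(resp.text, "html.parser")
    tag = soup.find("link", rel="canonical")
    print(url)
    # A missing or wrong href here means Google has nothing to honor
    # even when it does crawl the page.
    print("  canonical:", tag["href"] if tag else "MISSING")
```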
Google Indexed my Site, then De-indexed it a Week Later
Hi there, I'm working on getting a large e-commerce website indexed and I am having a lot of trouble.
Intermediate & Advanced SEO | Travis-W
The site is www.consumerbase.com. We have about 130,000 pages and only 25,000 are getting indexed. I use multiple sitemaps so I can tell which product pages are indexed, and we need our "Mailing List" pages the most - http://www.consumerbase.com/mailing-lists/cigar-smoking-enthusiasts-mailing-list.html I submitted a sitemap a few weeks ago for a particular type of product page, and about 40k of the 43k pages were indexed - GREAT! A week ago, Google de-indexed almost all of those new pages. Check out this image; it kind of boggles my mind and makes me sad: http://screencast.com/t/GivYGYRrOV While these pages were indexed, we immediately received a ton of traffic to them, making me think Google liked them. I think our breadcrumbs, site structure, and "customers who viewed this product also viewed" links make the site extremely crawlable. What gives?
Does it come down to our site not having enough Domain Authority?
My client really needs an answer about how we are going to get these pages indexed.
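For anyone wanting to replicate the multiple-sitemaps tactic this question mentions, here is a minimal sketch that splits URLs by page type into separate sitemap files plus a sitemap index, so indexation can then be tracked per section in Webmaster Tools. The file names and the second URL are hypothetical:

```python
from xml.sax.saxutils import escape

# Hypothetical split of URLs by page type; in practice these lists
# would come from the site's database or crawl export.
sections = {
    "sitemap-mailing-lists.xml": [
        "http://www.consumerbase.com/mailing-lists/cigar-smoking-enthusiasts-mailing-list.html",
    ],
    "sitemap-other-products.xml": [
        "http://www.consumerbase.com/some-other-product.html",  # hypothetical
    ],
}

NS = 'xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"'

# Write one child sitemap per page type.
for filename, urls in sections.items():
    entries = "\n".join(f"  <url><loc>{escape(u)}</loc></url>" for u in urls)
    with open(filename, "w") as f:
        f.write(f'<?xml version="1.0" encoding="UTF-8"?>\n'
                f"<urlset {NS}>\n{entries}\n</urlset>\n")

# Parent sitemap index pointing at each per-section sitemap.
index_entries = "\n".join(
    f"  <sitemap><loc>http://www.consumerbase.com/{name}</loc></sitemap>"
    for name in sections
)
with open("sitemap-index.xml", "w") as f:
    f.write(f'<?xml version="1.0" encoding="UTF-8"?>\n'
            f"<sitemapindex {NS}>\n{index_entries}\n</sitemapindex>\n")
```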
To index or de-index internal search results pages?
Hi there. My client uses a CMS/e-commerce platform that is automatically set up to have every single internal search results page indexed by search engines. This was supposedly built as an "SEO-friendly" feature, in the sense that it creates hundreds of new indexed pages to send to search engines, reflecting the various terminology used by existing visitors of the site. In many cases, these pages have proven to outperform our optimized static pages, but there are multiple issues with them:
- The CMS does not allow us to add any static content to these pages, including titles, headers, metas, or copy on the page.
- The query typed in by the site visitor always becomes part of the title tag / meta description on Google. If the customer's internal search query contains any less-than-ideal terminology that we wouldn't want other users to see, their phrasing is out there for the whole world to see, causing lots and lots of ugly terminology floating around on Google that we can't affect.
I am scared to do a blanket de-indexation of all /search/ results pages, because we would lose the majority of our rankings and traffic in the short term while trying to improve the ranks of our optimized static pages. The ideal is to really move up our static pages in Google's index, and when their performance is strong enough, to de-index all of the internal search results pages - but for some reason Google keeps choosing the internal search results page as the "better" page to rank for our targeted keywords. Can anyone advise? Has anyone been in a similar situation? Thanks!
Intermediate & Advanced SEO | FPD_NYC
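One common pattern for the gradual de-indexation described above (a sketch of one possible approach, not a feature of the asker's CMS) is to serve internal search pages with a "noindex, follow" directive via the X-Robots-Tag header, so they drop out of the index over time while still passing link signals to the pages they link to. Flask here is just a stand-in for whatever platform actually serves the pages:

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/search/")
def search():
    query = request.args.get("q", "")
    return f"Results for {query}"  # placeholder for the real search results

@app.after_request
def noindex_internal_search(response):
    # Tag only internal search result URLs; all other pages stay indexable.
    if request.path.startswith("/search/"):
        response.headers["X-Robots-Tag"] = "noindex, follow"
    return response

if __name__ == "__main__":
    app.run()
```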
Getting Pages Requiring Login Indexed
Somehow, certain newspapers' web pages show up in the index but require a login. My client has a whole section of the site that requires a login (registration is free), and we'd love to get that content indexed. The developer offered to remove the login requirement for specific user agents (e.g., Googlebot et al.). I am afraid this might get us penalized. Any insight?
Intermediate & Advanced SEO | TheEspresseo
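If the developer does let certain user agents through the login wall, note that the user-agent string alone is trivially spoofed, so anyone could claim to be Googlebot and read the gated content. Google's documented way to verify a real Googlebot is a reverse-DNS plus forward-DNS check; a minimal sketch follows (the sample IP is from Google's published crawler range, and whether the bypass itself is safe is a separate policy question):

```python
import socket

def is_real_googlebot(ip: str) -> bool:
    try:
        # Reverse lookup: the PTR record should end in googlebot.com or google.com.
        host, _, _ = socket.gethostbyaddr(ip)
        if not host.endswith((".googlebot.com", ".google.com")):
            return False
        # Forward-confirm: the hostname must resolve back to the same IP.
        return ip in {info[4][0] for info in socket.getaddrinfo(host, None)}
    except (socket.herror, socket.gaierror):
        return False

print(is_real_googlebot("66.249.66.1"))  # a known Googlebot address -> True
```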
Why do my https pages get indexed while noindexed?
I have some tag pages on one of my sites that I meta-noindexed. This worked for the http version, which the pages are canonical'd to, but now the https version is being indexed. The https version is both noindexed and has a canonical to the http version, but these pages still show up! I even have WordPress set up to redirect all https to http! For some reason these pages are STILL showing in the SERPs. Any experience or advice would be greatly appreciated. Example page: https://www.michaelpadway.com/tag/insurance-coverage/ Thanks all!
Intermediate & Advanced SEO | MarloSchneider
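A quick diagnostic for this symptom: request the https URL without following redirects and see what actually comes back. If it returns 200 instead of a 301 to the http version, then the WordPress redirect isn't firing, and Google can keep indexing the https URL regardless of what it is supposed to do. A minimal sketch, assuming requests is installed:

```python
import requests

url = "https://www.michaelpadway.com/tag/insurance-coverage/"
resp = requests.get(url, allow_redirects=False, timeout=10)

print("status:", resp.status_code)                 # expect 301 if the redirect works
print("location:", resp.headers.get("Location"))   # expect the http:// version
print("x-robots-tag:", resp.headers.get("X-Robots-Tag"))

if resp.status_code == 200:
    # No redirect: the https URL is served directly, which would explain
    # why it can be indexed; inspect resp.text for the meta robots tag.
    print("https served directly; check the meta robots tag in the HTML")
```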
Merging your Google Places page with your Google+ page
I have a map listing showing for the keyword "junk cars for cash nj". I recently created a new Google+ page and requested a merge between the Places page and the + page. Now when you do a search, you see the following:
Junk Cars For Cash NJ LLC
junkcarforcashnj.com/
Google+ page - Google+ page
The first hyperlink takes me to the About page of the Google+ profile, and the second link takes me to the Posts section within Google+. Is this normal? Should I delete the Places account where the listing was originally created, or do I leave it as is? Thanks
Intermediate & Advanced SEO | junkcars
Is it possible to get a list of pages indexed in Google?
Is there a tool that will give me a list of pages on my site that are indexed in Google?
Intermediate & Advanced SEO | rise10
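There is no official "list every indexed page" export, but one practical proxy (an assumption, not a guarantee) is to pull every URL that earned impressions from the Search Console search-analytics endpoint: pages with impressions are necessarily indexed, while pages without impressions may or may not be, so treat the result as a lower bound. The dates, property URL, and key file are hypothetical:

```python
from google.oauth2 import service_account
from googleapiclient.discovery import build

SITE_URL = "https://www.example.com/"  # hypothetical property

creds = service_account.Credentials.from_service_account_file(
    "service-account.json",  # hypothetical key file
    scopes=["https://www.googleapis.com/auth/webmasters.readonly"],
)
service = build("searchconsole", "v1", credentials=creds)

body = {
    "startDate": "2024-01-01",   # hypothetical date range
    "endDate": "2024-01-31",
    "dimensions": ["page"],
    "rowLimit": 25000,           # the API's per-request maximum
}
rows = service.searchanalytics().query(siteUrl=SITE_URL, body=body).execute()

# Every URL listed here received impressions, so it must be indexed.
for row in rows.get("rows", []):
    print(row["keys"][0], row["impressions"])
```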