When does Google index a fetched page?
-
I have seen it index one of my pages within 5 minutes of fetching, but have also read that it can take a day. I'm on day #2 and it appears that it has still not re-indexed 15 pages that I fetched. I changed the meta description in all of them, and added content to nearly all of them, but none of those changes are showing when I do a site:www.site/page search.
I'm trying to test changes in this manner, so it is important for me to know WHEN a fetched page has been indexed, or at least IF it has. How can I tell what is going on?
-
For those following, see this link where Ryan has provided some interesting answers regarding the cache and the site:www command.
-
I'm going to post a question about the non-cached pages, as upon digging I'm not finding an answer.
Also, I'm reading that it seems to take a couple of days before indexing, but I'm seeing something strange that makes it confusing:
This page was cached a few days ago: http://webcache.googleusercontent.com/search?q=cache:http://www.qjamba.com/restaurants-coupons/wildwood/mo/all
The paragraph content that starts with 'The Wildwood coupons page' was added as a test just 3 days ago, and then I ran a fetch. When I do a Google search for phrases in it, it does show up in Google results (like qjamba wildwood buried by the large national chains). So, it looks like it indexed the new content.
But if you search for wildwood qjamba restaurants cafes, the result Google shows includes the word diners, which is gone from the cached content (it was previously in the meta description tag)! Yet if you then search wildwood qjamba restaurants diners, it doesn't come up! So, this seems to indicate that the algorithm was applied to the cached file, but that what Google DISPLAYS when the user does a search is still older content that isn't even in the new cached file. Very odd.
I was thinking I could put changes on pages and test the effect on search results 1 or 2 days after fetching, but maybe it isn't that simple. Or maybe it is, and it's just hard to tell because of the timing of what Google is displaying.
I appreciate your feedback. I have H2 first on some pages because H1 was pretty big. I thought I read once that the main thing isn't if you start with H1 or H2 but that you never want to put an H1 after an H2.
I'm blocking cut and paste just to make it harder for a copycat to pull the info. Maybe overkill, though.
Thanks again, Ted
-
That's interesting, because according to Google's own words:
Google takes a snapshot of each page examined as it crawls the web and caches these as a back-up in case the original page is unavailable. If you click on the "Cached" link, you will see the web page as it looked when we indexed it. The cached content is the content Google uses to judge whether this page is a relevant match for your query.
Source: http://www.google.com.au/help/features.html
If I look for that page using a fragment of the title (site:http://www.qjamba.com/ "Ferguson, MO Restaurant") I can find it, so it's in the index.
Or maybe not, because if you search for this query "Ferguson, MO Restaurant" 19 coupons (quotes included on the first part) you are not among the results. So it seems (I didn't know this) that site: is showing results which are not in the index... But I would ask in the Google Search product forum: https://productforums.google.com/forum/#!forum/websearch
As far as I know you can use a meta tag to prevent archiving in the Google cache, but your page doesn't have a googlebot meta tag, so I have no idea why it is not showing.
But if I were you I would dig further. By the way, the HTML of these pages is quite weird. I didn't spend much time looking at it, but there's no H1, and you are blocking cut and paste with JS... Accessibility is a factor in Google's algorithm.
-
Thanks. That does help.
<< If you have a 404 for the cache: command, that page is not indexed. If, searching for the content of that page using site:, you find a different page, it means that other page is indexed (and one possible explanation is a duplicate content issue). >>
THIS page gives a 404:
but site:http://www.qjamba.com/restaurants-coupons/ferguson/mo/all
gives ONLY that exact same page. How can that be?
-
I am not sure I understood your question, but I will try to answer.
site:foo.com
gives you a number of indexed pages; it is presumably the number of pages from that site in the index. It normally differs from the indexed-page count in GWT, so both are probably not all that accurate.
site:foo.com "The quick brown fox jumps over the lazy dog"
searches among the indexed pages for that site for the ones containing that exact sentence.
webcache.googleusercontent.com/search?q=cache:https://foo.com/bar
checks the last indexed version of a specific page.
If you have a 404 for the cache: command, that page is not indexed. If, searching for the content of that page using site:, you find a different page, it means that other page is indexed for that content (and one possible explanation for that is a duplicate content issue).
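These lookups can be scripted. Here is a minimal sketch, assuming the webcache.googleusercontent.com endpoint behaves as described above; the helper names are my own, and in practice Google may block or CAPTCHA automated requests, so the status-code interpretation is the reusable part:

```python
from urllib.parse import quote

def cache_lookup_url(page_url):
    """Build the webcache lookup URL for a page (hypothetical helper name)."""
    return ("http://webcache.googleusercontent.com/search?q=cache:"
            + quote(page_url, safe=":/"))

def interpret_cache_status(http_status):
    """Map the HTTP status of a cache: lookup to a rough diagnosis."""
    if http_status == 200:
        return "cached: the page has an indexed snapshot"
    if http_status == 404:
        return "not cached: likely not indexed, or the cache is suppressed"
    return "inconclusive: blocked, redirected, or a transient error"

# Example for one of the pages discussed in this thread:
print(cache_lookup_url("http://www.qjamba.com/restaurants-coupons/ferguson/mo/all"))
print(interpret_cache_status(404))
```

You would fetch the built URL with any HTTP client and feed the response status into the interpreter; a 404 there is the same signal described above.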
-
Thanks Massimiliano. I'll give you a 'good' answer here and cross my fingers that this next round will work. I still don't understand the timing on site:www, nor what the page+features lookup is all about. I thought site:www was supposed to be the method people use to see what is currently indexed.
-
"cache:" is the most update version in google index
if you fix the duplicate content next re-indexing will fix the duplicate content issue
-
I have a bigger problem than I realized:
I accidentally put duplicate content in my subcategory pages that was meant only for category pages. It's about 100-150 pages, and many of them have been crawled in the last few days. I have already changed the program so those pages don't have that content. Will I get penalized by Google, i.e. de-indexed? Or should I be OK going forward, because the next time they crawl, it will be gone?
I'm going to start over with the fetching since I made that mistake but can you address the following just so when I get back to this spot I maybe understand better?:
1. When I type into the Google search bar lemay mo restaurant coupons smoothies qjamba
the result it gives is www.qjamba.com/restaurants-coupons/lemay/mo/smoothies with the description: 'The Lemay coupons page features both national franchise printable restaurant coupons for companies such as KFC, Long John Silver's, and O'Charlies and ...'
BUT when I do a site:www.qjamba.com/restaurants-coupons/lemay/mo/smoothies it gives the description found in the meta description tag: 'Find Lemay all-free printable and mobile coupons for Smoothies, and more.'
It looks like site:www does NOT always give the most recent indexed content, since 'The Lemay coupons page...' is the content I added 2 days ago for testing! Maybe that's because Lemay was one of the URLs that I inadvertently created duplicate content for.
2. Are ANY of the cache command, page+features command, or site:www supposed to be the most recent indexed content?
-
I am assuming it's a duplicate; it could be de-indexed for other reasons, with the other page returned because it has the same paragraphs in it. But if you run a couple of crawling reports (Moz, SEMrush, etc.) and they flag these pages as duplicates, that may be the issue.
-
Thanks.
That's weird, because doing the site: command separately for that first page gives different content for /smoothies than for /all:
site:www.qjamba.com/restaurants-coupons/lemay/mo/smoothies
site:www.qjamba.com/restaurants-coupons/lemay/mo/all
But why would that 'page+features' lookup show the same description when the actual descriptions are different? This seems like a different issue than my original post, but maybe it is related somehow; even if not, I probably should still understand it.
-
Yes, one more idea: if you take the content of the page and you query your site for that content specifically like this:
You find a different page. It looks like those pages are duplicates.
Sorry for missing a w.
-
You are missing a w there: it should be site:www but you have site:ww.
That's why I'm so confused: it appears to be indexed from the past. The pages are in my database table with the date and time crawled (right after the fetch), and there is no manual penalty in Webmaster Tools.
Yet there is no sign it re-indexed after being crawled, 2 days ago now. I could resubmit (there are 15 pages I fetched), but I'm not expecting a different response, and I need to understand what is happening in order to use this approach to test SEO changes.
Thanks for sticking with this. Any more ideas on what is happening?
-
Well, that's an HTTP 404 status code, which means the page was not found; in other words, it's not in the Google index.
Please note that if you type site:ww.qjamba.com/restaurants-coupons/lemay/mo/all you find nothing (see the image below).
Again, I would doubt your logs. You can also check GWT for any manual penalty you may have there.
-
Hi, thanks again.
This gives an error:
but the page exists, AND site:www.qjamba.com/restaurants-coupons/lemay/mo/all
has a result, so I'm not sure what a missing cache means in this case.
The log shows that it was crawled right after it was fetched, but the result for site:... doesn't reflect the changes on the page. So it appears not to have been re-indexed yet, but then why isn't it in the cache?
-
You evidently mistyped the URL to check; this is a working example:
If your new content is not there, it has not been indexed yet. If your logs say it was crawled two days ago, I would start doubting the logs.
-
HI Massimiliano,
Thanks for your reply.
I'm getting an error in both FF and Chrome with this in the address bar. Have I misunderstood?
http://webcache.googleusercontent.com/search?q=cache:http://www.mysite.com/mypage
Is the command (assuming I can get it to work) supposed to show when the page was indexed, or when it was last crawled?
I am storing when it crawls, but am wondering about the 'couple of days' part, since it has been 2 days now, and when I first tried this a few days ago it was re-indexing within 5 minutes.
-
Open this URL in any browser:
You can reasonably take that as the date when the page was last indexed.
You could also programmatically store the last Googlebot visit per page, just by checking the user-agent of each page request. Or just analyze your web server logs to get that info out on a per-page basis. And add a couple of days as a buffer (even Google needs a little processing time to generate its index).
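The log-analysis idea above can be sketched roughly as follows. The log layout (Apache combined-style) and the sample lines are assumptions; adjust the regex to your server's actual format, and note that user-agents can be spoofed, so a reverse DNS check on the IP is the reliable way to confirm real Googlebot:

```python
import re

# Matches a combined-format access log line: timestamp in brackets, then the
# request line, status, size, referrer, and user-agent in quotes.
LINE_RE = re.compile(
    r'\[(?P<ts>[^\]]+)\] "(?:GET|HEAD) (?P<path>\S+)[^"]*" \d+ \d+ "[^"]*" "(?P<ua>[^"]*)"'
)

def last_googlebot_visits(log_lines):
    """Return {url path: timestamp of last Googlebot request} from log lines."""
    last_seen = {}
    for line in log_lines:
        m = LINE_RE.search(line)
        if m and "Googlebot" in m.group("ua"):
            last_seen[m.group("path")] = m.group("ts")  # later lines overwrite earlier ones
    return last_seen

# Hypothetical sample lines illustrating the format:
sample = [
    '1.2.3.4 - - [10/Feb/2015:08:01:00 +0000] "GET /restaurants-coupons/lemay/mo/all HTTP/1.1" 200 5120 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"',
    '5.6.7.8 - - [10/Feb/2015:09:30:00 +0000] "GET /restaurants-coupons/lemay/mo/all HTTP/1.1" 200 5120 "-" "Mozilla/5.0 (Windows NT 6.1)"',
]
print(last_googlebot_visits(sample))
```

Run against your real access log (one line per entry), this gives a per-page "last crawled" date to which you can add the couple-of-days buffer mentioned above.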