When does Google index a fetched page?
-
I have seen it index one of my pages within 5 minutes of fetching, but I have also read that it can take a day. I'm on day 2 and it appears that it still has not re-indexed the 15 pages I fetched. I changed the meta description in all of them, and added content to nearly all of them, but none of those changes show up when I do a site:www.site/page search.
I'm trying to test changes in this manner, so it is important for me to know WHEN a fetched page has been indexed, or at least IF it has. How can I tell what is going on?
-
For those following, see this link where Ryan has provided some interesting answers regarding the cache and the site: command.
-
I'm going to post a separate question about the non-cached page, since upon digging I'm not finding an answer.
Also, I'm reading that it seems to take a couple of days before indexing, but I'm seeing something strange that makes it confusing:
This page was cached a few days ago: http://webcache.googleusercontent.com/search?q=cache:http://www.qjamba.com/restaurants-coupons/wildwood/mo/all
The paragraph of content that starts with 'The Wildwood coupons page' was added as a test just 3 days ago, and then I ran a fetch. When I do a Google search for phrases in it, the page does show up in Google results (for example, qjamba wildwood buried by the large national chains). So it looks like the new content was indexed.
But if you search for wildwood qjamba restaurants cafes, the result Google shows includes the word 'diners', which is gone from the cached content (it was previously in the meta description tag)! And if you then search for wildwood qjamba restaurants diners, the page doesn't come up! So this seems to indicate that the algorithm was applied to the cached file, but that what Google DISPLAYS when the user does a search is still older content that isn't even in the new cached file. Very odd.
I was thinking I could put changes on pages and test the effect on search results 1 or 2 days after fetching, but maybe it isn't that simple. Or maybe it is but is just hard to tell because of the timing of what Google is displaying.
I appreciate your feedback. I have H2 first on some pages because H1 was pretty big. I thought I read once that the main thing isn't if you start with H1 or H2 but that you never want to put an H1 after an H2.
I'm blocking the cut and paste just to make it harder for a copycat to pull the info. Maybe overkill though.
Thanks again, Ted
-
That's interesting, because according to Google's own words:
Google takes a snapshot of each page examined as it crawls the web and caches these as a back-up in case the original page is unavailable. If you click on the "Cached" link, you will see the web page as it looked when we indexed it. The cached content is the content Google uses to judge whether this page is a relevant match for your query.
Source: http://www.google.com.au/help/features.html
If I look for that page using a fragment of the <title> (site:http://www.qjamba.com/ "Ferguson, MO Restaurant") I can find it, so it's in the index.
Or maybe not, because if you search for this query "Ferguson, MO Restaurant" 19 coupons (quotes included) you are not among the results. So it seems (I didn't know) that using site: shows results which are not in the index... But I would ask in the Google search product forum: https://productforums.google.com/forum/#!forum/websearch
As far as I know you can use a meta tag to prevent archiving in Google's cache, but your page doesn't have a googlebot meta tag. So I have no idea why it is not showing.
But if I were you I would dig further. By the way, the HTML of these pages is quite weird. I didn't spend much time looking at it, but there's no H1, and you are blocking cut & paste with JS... Accessibility is a factor in Google's algorithm.
-
Thanks, that does help.
<< if you have a 404 for the cache: command, that page is not indexed; if searching for the content of that page using site: you find a different page, it means that other page is indexed (and one possible explanation is a duplicate content issue) >>
THIS page gives a 404:
but site:http://www.qjamba.com/restaurants-coupons/ferguson/mo/all
gives ONLY that exact same page. How can that be?
-
I am not sure I understood your question, but I will try to answer.
site:foo.com
gives you a number of indexed pages; it is presumably the number of pages from that site in the index. It normally differs from the indexed-page count in GWT, so both are probably not all that accurate.
site:foo.com "The quick brown fox jumps over the lazy dog"
searches among the indexed pages for that site for the ones containing that precise sentence.
webcache.googleusercontent.com/search?q=cache:https://foo.com/bar
checks the last indexed version of a specific page.
If you get a 404 for the cache: command, that page is not indexed. If, when searching for the content of that page using site:, you find a different page, it means that other page is indexed for that content (and one possible explanation is a duplicate content issue).
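Those checks can be scripted for a batch of URLs; here is a minimal sketch (the helper names and the status-code interpretation are my own assumptions, and Google often throttles or blocks automated cache: lookups, so treat scripted results as hints rather than proof):

```python
import urllib.parse

CACHE_ENDPOINT = "http://webcache.googleusercontent.com/search"

def cache_lookup_url(page_url):
    """Build the webcache.googleusercontent.com URL that shows Google's
    last indexed copy of page_url (the same URL you would paste in a browser)."""
    return CACHE_ENDPOINT + "?" + urllib.parse.urlencode({"q": "cache:" + page_url})

def interpret_cache_status(http_status):
    """Rough meaning of the HTTP status returned by a cache: lookup."""
    if http_status == 404:
        return "no cached copy: the page is probably not indexed"
    if http_status == 200:
        return "cached copy exists: its date approximates the last index date"
    return "inconclusive: Google may be blocking the automated request"
```

Fetching cache_lookup_url(...) with any HTTP client and passing the response code to interpret_cache_status mirrors the manual checks above: a 404 is the "not indexed" signal, while finding the same content on a different page via site: points at duplicate content.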
-
Thanks Massimiliano. I'll give you a 'good' answer here, and cross my fingers that this next round will work. I still don't understand the timing on site:www, nor what the page+features query is all about. I thought site:www was supposed to be the method people use to see what is currently indexed.
-
cache: shows the most up-to-date version in Google's index.
If you fix the duplicate content, the next re-indexing will resolve the issue.
-
I have a bigger problem than I realized:
I accidentally put duplicate content in my subcategory pages that was meant only for category pages. It's about 100-150 pages, and many of them have been crawled in the last few days. I have already changed the program so those pages don't have that content. Will I get penalized by Google, i.e. de-indexed? Or should I be OK going forward, because the next time they crawl the duplicate content will be gone?
I'm going to start over with the fetching since I made that mistake but can you address the following just so when I get back to this spot I maybe understand better?:
1. When I type into the google searchbar lemay mo restaurant coupons smoothies qjamba
the description it gives is www.qjamba.com/restaurants-coupons/lemay/mo/smoothies with the snippet "The Lemay coupons page features both national franchise printable restaurant coupons for companies such as KFC, Long John Silver's, and O'Charlies and ..."
BUT when I do a site:www.qjamba.com/restaurants-coupons/lemay/mo/smoothies it gives the description found in the meta description tag: "Find Lemay all-free printable and mobile coupons for Smoothies, and more."
It looks like site:www does NOT always give the most recent indexed content, since 'The Lemay coupons page...' is the content I added 2 days ago for testing! Maybe that's because Lemay was one of the URLs that I inadvertently created duplicate content for.
2. Are ANY of the cache: command, the page+features query, or site:www supposed to show the most recent indexed content?
-
I am assuming it's a duplicate; it could be de-indexed for other reasons, and the other page is returned because it has the same paragraphs in it. If you run a couple of crawling reports (Moz, SEMrush, etc.) and they flag these pages as duplicates, that may be the issue.
-
Thanks.
That's weird, because doing the site: command separately for the /smoothies page gives different content than for /all:
site:www.qjamba.com/restaurants-coupons/lemay/mo/smoothies
site:www.qjamba.com/restaurants-coupons/lemay/mo/all
But why would that 'page+features' query show the same description when the descriptions are actually different? This seems like a different issue than my OP, but maybe it's related somehow; even if not, I probably should still understand it.
-
Yes, one more idea: if you take the content of the page and query your site for that content specifically like this:
you find a different page. It looks like those pages are duplicates.
Sorry for the missing 'w'.
-
You are missing a 'w' there: it should be site:www but you have site:ww.
That's why I'm so confused: it appears to be indexed from the past. The pages are in my database table with the date and time crawled (right after the fetch), and there is no manual penalty in Webmaster Tools.
Yet there is no sign it re-indexed after being crawled, 2 days ago now. I could resubmit (there are 15 pages I fetched), but I'm not expecting a different response, and I need to understand what is happening in order to use this approach to test SEO changes.
thanks for sticking with this. Any more ideas on what is happening?
-
Well, that's an HTTP 404 status code, which means the page was not found; in other words, it's not in Google's index.
Please note, if you type site:ww.qjamba.com/restaurants-coupons/lemay/mo/all you find nothing; see the image below.
Again, I would doubt your logs. You can also check GWT for any manual penalty you may have there.
-
Hi, thanks again.
This gives an error:
but the page exists, AND site:www.qjamba.com/restaurants-coupons/lemay/mo/all
has a result, so I'm not sure what a missing cache means in this case.
The log shows that the page was crawled right after it was fetched, but the result for site:... doesn't reflect the changes on the page. So it appears not to have been re-indexed yet; but then why isn't it in the cache?
-
You evidently mistyped the URL to check; this is a working example:
If your new content is not there, it has not been indexed yet. If your logs say it was crawled two days ago, I would start doubting the logs.
-
Hi Massimiliano,
Thanks for your reply.
I'm getting an error in both FF and Chrome with this in the address bar. Have I misunderstood?
http://webcache.googleusercontent.com/search?q=cache:http://www.mysite.com/mypage
Is the command (assuming I can get it to work) supposed to show when the page was indexed, or last crawled?
I am storing when it crawls, but am wondering about the 'couple of days' part, since it has been 2 days now, and when I first tried this a few days ago it was re-indexing within 5 minutes.
-
Open this URL in any browser:
You can reasonably take that as the date when the page was last indexed.
You could also programmatically store the last Googlebot visit per page by checking the user-agent of each page request, or just analyze your web server logs to get that info on a per-page basis. Add a couple of days as a buffer (even Google needs a little processing time to generate its index).
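That log analysis can be sketched in a few lines of Python; this assumes the common Apache/nginx "combined" log format and simply keeps the latest Googlebot hit per URL (a real setup should also verify the bot's IP with a reverse DNS lookup, since the user-agent string is easily faked):

```python
import re
from datetime import datetime

# Matches the Apache/nginx combined log format: timestamp, request path, user-agent.
LOG_LINE = re.compile(
    r'\[(?P<ts>[^\]]+)\] "(?:GET|HEAD) (?P<path>\S+) [^"]*" \d+ \S+ "[^"]*" "(?P<ua>[^"]*)"'
)

def last_googlebot_visits(lines):
    """Return {path: latest datetime Googlebot requested it} from combined-format log lines."""
    latest = {}
    for line in lines:
        m = LOG_LINE.search(line)
        if not m or "Googlebot" not in m.group("ua"):
            continue
        # Drop the timezone offset and parse the timestamp.
        ts = datetime.strptime(m.group("ts").split()[0], "%d/%b/%Y:%H:%M:%S")
        if m.group("path") not in latest or ts > latest[m.group("path")]:
            latest[m.group("path")] = ts
    return latest
```

Feeding this the access log, then adding the couple-of-days buffer mentioned above, gives a rough per-page estimate of when re-indexing should have completed.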