When does Google index a fetched page?
-
I have seen it index one of my pages within 5 minutes of fetching, but have also read that it can take a day. I'm on day #2 and it appears that it has still not re-indexed 15 pages that I fetched. I changed the meta description in all of them, and added content to nearly all of them, but none of those changes are showing when I do a site:www.site/page
I'm trying to test changes in this manner, so it is important for me to know WHEN a fetched page has been indexed, or at least IF it has. How can I tell what is going on?
-
For those following, see this link where Ryan has provided some interesting answers regarding the cache and the site:www command.
-
I'm going to post a question about the non-cached page, as upon digging I'm not finding an answer.
Also, I'm reading that it seems to take a couple of days before indexing, but I'm seeing something strange that makes it confusing:
This page was cached a few days ago: http://webcache.googleusercontent.com/search?q=cache:http://www.qjamba.com/restaurants-coupons/wildwood/mo/all
The paragraph of content that starts with 'The Wildwood coupons page' was added as a test just 3 days ago, and then I ran a fetch. When I do a Google search for phrases in it, it does show up in Google results (like qjamba wildwood buried by the large national chains). So, it looks like it indexed the new content.
But if you search for wildwood qjamba restaurants cafes, the result Google shows includes the word diners, which is gone from the cached content (it was previously in the meta description tag)! But if you then search wildwood qjamba restaurants diners it doesn't come up! So, this seems to indicate that the algorithm was applied to the cached file, but that what Google DISPLAYS when the user does a search is still older content that isn't even in the new cached file! Very odd.
I was thinking I could put changes on pages and test the effect on search results 1 or 2 days after fetching, but maybe it isn't that simple. Or maybe it is, but it's just hard to tell because of the timing of what Google is displaying.
I appreciate your feedback. I have H2 first on some pages because H1 was pretty big. I thought I read once that the main thing isn't if you start with H1 or H2 but that you never want to put an H1 after an H2.
I'm blocking the cut and paste just to make it harder for a copycat to pull the info. Maybe overkill though.
Thanks again, Ted
-
That's interesting, because according to Google's own words:
Google takes a snapshot of each page examined as it crawls the web and caches these as a back-up in case the original page is unavailable. If you click on the "Cached" link, you will see the web page as it looked when we indexed it. The cached content is the content Google uses to judge whether this page is a relevant match for your query.
Source: http://www.google.com.au/help/features.html
If I look for that page using a fragment of the <title> (site:http://www.qjamba.com/ "Ferguson, MO Restaurant") I can find it, so it's in the index.
Or maybe not, because if you search for the query "Ferguson, MO Restaurant" 19 coupons (with the quotes included, as shown) you are not among the results. So it seems (I didn't know) that using site: is showing results which are not in the index... But I would ask in the Google search product forum: https://productforums.google.com/forum/#!forum/websearch
As far as I know you can use a meta tag to prevent archiving in the Google cache, but your page doesn't have a googlebot meta tag, so I have no idea why it's not showing.
But if I were you I would dig further. By the way, the HTML of these pages is quite weird. I didn't spend much time looking at it, but there's no H1, and you are blocking cut & paste with JS... Accessibility is a factor in Google's algorithm.
-
Thanks. That does help.
<<If you have a 404 for the cache: command, that page is not indexed; if searching for the content of that page using site: you find a different page, it means that other page is indexed for that content (and one possible explanation is a duplicate content issue)>>
THIS page gives a 404:
but site:http://www.qjamba.com/restaurants-coupons/ferguson/mo/all
gives ONLY that exact same page. How can that be?
-
I am not sure I understood your question, but I will try to answer.
site:foo.com
gives you a number of indexed pages; it's presumably the number of pages from that site in the index. It normally differs from the indexed-page count in GWT, so both are probably not all that accurate.
site:foo.com "The quick brown fox jumps over the lazy dog"
searches among the indexed pages of that site for the ones containing that exact sentence.
webcache.googleusercontent.com/search?q=cache:https://foo.com/bar
checks the last indexed version of a specific page.
If you get a 404 for the cache: command, that page is not indexed. If, when searching for the content of that page using site:, you find a different page, it means that other page is indexed for that content (and one possible explanation for that is a duplicate content issue).
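If you want to script that cache: check instead of doing it by hand, here is a minimal sketch (Python with the requests library; the URLs in it are placeholders, not your real pages). Keep in mind that Google often throttles or blocks automated requests to webcache, so treat the status code as a rough hint rather than proof.

```python
# Minimal sketch: infer whether a page appears to be in Google's cache by
# requesting its webcache URL and looking at the HTTP status code.
# Assumption: Google serves the cache page to this client; in practice it
# may throttle or block automated requests, so use this only as a rough hint.
import requests

CACHE_ENDPOINT = "http://webcache.googleusercontent.com/search?q=cache:"

def appears_cached(page_url: str) -> bool:
    """Return True if the cache: lookup for page_url returns HTTP 200."""
    response = requests.get(
        CACHE_ENDPOINT + page_url,
        headers={"User-Agent": "Mozilla/5.0"},  # bare script user-agents are often rejected
        timeout=10,
    )
    # 200 -> a cached copy was served; 404 -> no cached copy, page likely not indexed
    return response.status_code == 200

if __name__ == "__main__":
    # Placeholder URL; substitute the page you actually want to check.
    print(appears_cached("http://www.example.com/some-page"))
```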
-
Thanks Massimiliano. I'll give you a 'good' answer here, and cross my fingers that this next round will work. I still don't understand the timing on site:www, nor what page+features is all about. I thought site:www was supposed to be the method people use to see what is currently indexed.
-
"cache:" shows the most up-to-date version in Google's index.
If you fix the duplicate content, the next re-indexing will resolve the duplicate content issue.
-
I have a bigger problem than I realized:
I accidentally put duplicate content in my subcategory pages that was just meant for category pages. It's about 100-150 pages, and many of them have been crawled in the last few days. I have already changed the program so those pages don't have that content. Will I get penalized by Google-- de-indexed? Or should I be ok going forward because the next time they crawl it will be gone?
I'm going to start over with the fetching since I made that mistake, but can you address the following just so that when I get back to this spot I understand it better?
1. When I type lemay mo restaurant coupons smoothies qjamba into the Google search bar, the description it gives is:
www.qjamba.com/restaurants-coupons/lemay/mo/smoothies The Lemay coupons page features both national franchise printable restaurant coupons for companies such as KFC, Long John Silver's, and O'Charlies and ...
BUT when I do site:www.qjamba.com/restaurants-coupons/lemay/mo/smoothies it gives the description found in the meta description tag:
www.qjamba.com/restaurants-coupons/.../smoothie... Find Lemay all-free printable and mobile coupons for Smoothies, and more.
It looks like site:www does NOT always give the most recent indexed content since 'The Lemay coupons page...' is the content I added 2 days ago for testing! Maybe that's because Lemay was one of the urls that I inadvertently created duplicate content for.
2. Are ANY of the cache: command, the page+features command, or site:www supposed to show the most recent indexed content?
-
I am assuming it's a duplicate; it can be de-indexed for other reasons, and the other page is returned because it has the same paragraphs in it. But if you run a couple of crawling reports (Moz, SEMrush, etc.) and they flag these pages as duplicates, that may be the issue.
-
Thanks.
That's weird, because doing the site: command separately for that first page gives different content for /smoothies than for /all:
site:www.qjamba.com/restaurants-coupons/lemay/mo/smoothies
site:www.qjamba.com/restaurants-coupons/lemay/mo/all
But why would that 'page+features' command show the same description when the description in reality is different? This seems like a different issue than my original post, but maybe it is related somehow; even if not, I probably should still understand it.
-
Yes, one more idea: if you take the content of the page and query your site for that content specifically, like this:
you find a different page. It looks like those pages are duplicates.
Sorry for missing a w.
-
You are missing a w there: it should be site:www but you have site:ww.
That's why I'm so confused: the pages appear to be indexed from the past, they are in my database table with the date and time crawled (right after the fetch), and there is no manual penalty in Webmaster Tools.
Yet there is no sign they were re-indexed after being crawled, 2 days ago now. I could resubmit (there are 15 pages I fetched), but I'm not expecting a different response, and I need to understand what is happening in order to use this approach to test SEO changes.
thanks for sticking with this. Any more ideas on what is happening?
-
Well, that's an HTTP 404 status code, which means the page was not found; in other words, it's not in Google's index.
Please note that if you type site:ww.qjamba.com/restaurants-coupons/lemay/mo/all you find nothing; see the image below.
Again I would doubt your logs. You can also check GWT for any manual penalty you may have there.
-
Hi, thanks again.
This gives an error:
but the page exists, AND site:www.qjamba.com/restaurants-coupons/lemay/mo/all
has a result, so I'm not sure what a missing cache means in this case.
The log shows that it was crawled right after it was fetched, but the result for site:... doesn't reflect the changes on the page. So it appears not to have been re-indexed yet, but then why is it not in the cache?
-
You evidently mistyped the URL to check; this is a working example:
If your new content is not there, it has not been indexed yet. If your logs say it was crawled two days ago, I would start doubting the logs.
-
Hi Massimiliano,
Thanks for your reply.
I'm getting an error in both FF and Chrome with this in the address bar. Have I misunderstood?
http://webcache.googleusercontent.com/search?q=cache:http://www.mysite.com/mypage
Is the command (assuming I can get it to work) supposed to show when the page was indexed, or last crawled?
I am storing when it crawls, but am wondering about the 'couple of days' part, since it has been 2 days now, and when I first did it a few days ago it was re-indexing within 5 minutes.
-
Open this URL in any browser:
You can reasonably take that as the date when the page was last indexed.
You could also programmatically store the last Googlebot visit per page, just by checking the user-agent of each page request. Or just analyze your web server logs to get that info on a per-page basis. And add a couple of days as a buffer (even Google needs a little processing time to generate its index).
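As a rough illustration of the log-analysis idea, here is a minimal Python sketch that scans a "combined"-format access log (the log path and format are assumptions, so adjust them for your server) and keeps the most recent visit per URL for requests whose user-agent mentions Googlebot. User-agents can be spoofed, so for anything important you would also confirm a real Googlebot hit with a reverse DNS check.

```python
# Rough sketch: find the last Googlebot visit per URL from a "combined" format
# access log. Assumptions: the log path and format are placeholders, and the
# user-agent string is trusted as-is (verify real Googlebot via reverse DNS
# if it matters).
import re

LOG_LINE = re.compile(
    r'\S+ \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+)[^"]*" '
    r'\d+ \S+ "[^"]*" "(?P<agent>[^"]*)"'
)

def last_googlebot_visits(log_path: str) -> dict:
    """Map each requested path to the timestamp of its most recent Googlebot hit."""
    last_seen = {}
    with open(log_path, encoding="utf-8", errors="replace") as log:
        for line in log:
            match = LOG_LINE.match(line)
            if match and "Googlebot" in match.group("agent"):
                # Later lines overwrite earlier ones, so the dict ends up
                # holding the most recent visit per path.
                last_seen[match.group("path")] = match.group("time")
    return last_seen

if __name__ == "__main__":
    # Placeholder log path; point this at your real access log.
    for path, when in last_googlebot_visits("/var/log/apache2/access.log").items():
        print(when, path)
```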