URL shows up with "inurl:" but not when using time parameters
-
Hey everybody,
I have been testing Google's inurl: operator to try to gauge how long ago Google indexed our page, which brings me to my question.
If we run inurl:https://mysite.com, all of our pages show up.
If we run inurl:https://mysite.com/specialpage, the page shows up as indexed.
If I append the "&as_qdr=y15" parameter to the search URL, https://mysite.com/specialpage does not show up.
Does anybody have any experience with this? Also, on the same note: when I look at how many pages Google has indexed, it is about half of the pages we see in our backend/sitemap. Any thoughts would be appreciated.
TY!
-
There are several ways to do this, some more accurate than others. If you have Google Analytics access to the site that contains the page, you could filter your view down to a single page / landing page and see when that page first got traffic (sessions / users). Note that if a page existed for a long time before it saw much usage, this wouldn't be very accurate.
If it's a WordPress site you have access to, edit the page and check the published date and / or revision history. If it's a post of some kind, it may display its publishing date on the front-end without you even having to log in. Note that if content has been migrated from a previous WordPress site and the publishing dates were not updated, this may not be wholly accurate either.
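If the site's WordPress REST API is public, you can often read a post's published date without logging in at all: the core `/wp-json/wp/v2/posts` endpoint returns each post as a JSON object with an ISO-8601 `date` field. A minimal sketch of parsing that response (the example URL is hypothetical; the payload shape follows the standard WordPress API format):

```python
import json
from datetime import datetime

def first_publish_date(posts_json: str) -> datetime:
    """Parse the `date` field from a WordPress REST API posts response.
    The API returns a JSON array of post objects, each with an ISO-8601
    `date` string in the site's local time."""
    posts = json.loads(posts_json)
    return datetime.fromisoformat(posts[0]["date"])

# Example payload, as returned by e.g.
#   https://mysite.com/wp-json/wp/v2/posts?slug=specialpage   (hypothetical URL)
sample = '[{"id": 1, "date": "2010-06-01T09:30:00", "slug": "specialpage"}]'
print(first_publish_date(sample))  # prints 2010-06-01 09:30:00
```

Keep the same caveat in mind: a migrated post carries whatever date the migration wrote, not the true age of the content.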
You can check when the Wayback Machine first archived the URL. The Wayback Machine's crawler is always discovering new pages, though, not necessarily on the date(s) they were created, so this method can't be fully trusted either.
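The Wayback Machine's first-capture date can also be looked up programmatically through its CDX API, which reports snapshot timestamps in `YYYYMMDDhhmmss` form. A rough sketch, assuming the API's standard JSON output (first row is the column header):

```python
from datetime import datetime
from urllib.parse import urlencode

def cdx_first_capture_url(url: str) -> str:
    """Build a Wayback CDX API query asking for the earliest snapshot only."""
    params = {"url": url, "output": "json", "limit": "1"}
    return "https://web.archive.org/cdx/search/cdx?" + urlencode(params)

def parse_first_capture(cdx_json_rows: list) -> datetime:
    """CDX JSON output: row 0 is the header, row 1 the earliest snapshot.
    The timestamp column is YYYYMMDDhhmmss."""
    header, first = cdx_json_rows[0], cdx_json_rows[1]
    ts = first[header.index("timestamp")]
    return datetime.strptime(ts, "%Y%m%d%H%M%S")

# Example rows in the shape the API returns them (values illustrative):
rows = [
    ["urlkey", "timestamp", "original", "mimetype", "statuscode", "digest", "length"],
    ["com,mysite)/specialpage", "20090415120000", "https://mysite.com/specialpage",
     "text/html", "200", "ABC123", "5120"],
]
print(parse_first_capture(rows))  # prints 2009-04-15 12:00:00
```

As noted above, this only tells you when the Wayback Machine first saw the page, not when it was created.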
In reality, even the "inurl:" and "&as_qdr=y15" operators will only tell you when Google first saw a web page, not how old the page is. Web pages do not record their age in their code, so if you want to be 100% accurate, your quest is impossible.
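For reference, the date-restricted query being discussed is just an extra query-string parameter on the ordinary Google search URL. A minimal sketch of assembling one (the function name is my own; only `q` and `as_qdr` are real Google parameters, with `y15` meaning "past 15 years"):

```python
from urllib.parse import urlencode

def build_dated_search_url(query: str, years: int) -> str:
    """Build a Google search URL restricted by the as_qdr time parameter."""
    params = {
        "q": query,             # e.g. "inurl:https://mysite.com/specialpage"
        "as_qdr": f"y{years}",  # y15 = results Google first saw in the past 15 years
    }
    return "https://www.google.com/search?" + urlencode(params)

print(build_dated_search_url("inurl:https://mysite.com/specialpage", 15))
```

If the page drops out of results only when `as_qdr` is added, that points at Google's recorded discovery date rather than at the page itself.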
-
So, then I will pose a different question to you. How would you determine the age of a page?
-
Oh ty! I'll try that out!
-
Not sure about the date / time querying aspect, but instead of using "inurl:https://mysite.com" you might have better luck checking indexation via "site:mysite.com" (don't include subdomains, www, or the protocol like http / https).
Then be sure to tell Google to 'include omitted results' (that notification sometimes shows up, sometimes it doesn't!)
You can also use Google Search Console to check indexed pages:
- https://d.pr/i/oKcHzS.png (screenshot)
- https://d.pr/i/qvKhPa.png (screenshot)
You can only see the top 1,000 URLs, but it does give you a count of all the indexed pages. I am pretty sure you could get more than 1k pages out of it if you used the filter function repeatedly (taking fewer than 1k URLs from each site area at a time).