Which URLs were indexed 2 years ago?
-
Hi,
I hope someone can help me with this issue. Our French domain experienced a huge drop in indexed URLs in 2012. More than 50k URLs were indexed; after the drop, fewer than 10k were counted.
I would like to find out what happened and which URLs were thrown out of the index, so I was thinking about a comparison between today's data and the data from 2012. Unfortunately, we don't have any data on the pages indexed in 2012 besides the raw count.
Is there any way to check which URLs were indexed 2 years ago?
-
Sandra,
There's no direct way that I know of to compare Google's index today with its index at some date in the past. If you had Google Analytics installed at the time, though, you can pull useful information from that source. Since the primary question is really which pages were bringing in search traffic then but are not bringing in search traffic now, you could compare landing-page stats from today vs. two years ago to see which pages are no longer bringing in traffic.
Any chance your website was redesigned in the meantime? A change of navigation, architecture, or URLs can sometimes be the culprit.
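If it helps, that comparison can be as simple as a set difference between two landing-page exports. Here's a minimal Python sketch, assuming you can export the organic landing-page lists for both date ranges from Google Analytics (the page paths below are made up):

```python
def lost_landing_pages(old_pages, new_pages):
    """Landing pages that drew organic traffic in the old period but not the new one."""
    return sorted(set(old_pages) - set(new_pages))

# Hypothetical exports from the two date ranges:
pages_2012 = ["/fr/produits", "/fr/contact", "/fr/blog/article-1"]
pages_2014 = ["/fr/produits", "/fr/contact"]
print(lost_landing_pages(pages_2012, pages_2014))  # ['/fr/blog/article-1']
```

Pages that show up in the output are good candidates for a manual check: do they still resolve, are they still linked internally, and are they still in the index?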
Related Questions
-
Rankings Shifting for the Past 2-3 Years
For the past 2-3 years, my site’s organic rankings have been dramatically shifting, ranking well for a few days and then dropping for a month or two, then ranking again for a few days, and then dropping again. During the time of higher rankings, form submissions increase significantly. The ranking increases or decreases are typically between 10-30 spots each time. I’ve done everything I can think of to address any issues: improved speed, limited 404s, changed the architecture of the site, updated link anchor text, etc. Nothing seems to work. The site has 88% of its traffic coming from desktop, with 9% from mobile and 3% from tablet. The disparity between desktop and mobile leads me to believe that the ranking issues are mobile-related, especially now that Google is using mobile-first indexing. I thought dwell time could be an issue, but session duration is 2:07 minutes and bounce rate is under 60%, with an average of 2.27 pages per session. That doesn’t indicate any quality of traffic issues to me. There are no warnings in Google Search Console, and speed is 58 for mobile and 86 for desktop on Page Speed Insights. I’ve been doing SEO for 12 years, but this has me stumped. My top suspicions are:
- Speed issues on mobile
- Penalties for redirects from the old website to the new website
- Penalties for anchor text for the old brand name instead of the new one
The site is https://dragonflydm.com Has anyone seen anything like this? Any ideas?
Intermediate & Advanced SEO | | dragonflydm0 -
How to get a large number of urls out of Google's Index when there are no pages to noindex tag?
Hi, I'm working with a site that has created a large group of URLs (150,000) that have crept into Google's index. If these URLs actually existed as pages, which they don't, I'd just noindex tag them and over time the number would drift down. The thing is, they created them through a complicated internal linking arrangement that adds affiliate code to the links and forwards them to the affiliate. GoogleBot would crawl a link that looks like it points to the client's own domain and wind up on Amazon or somewhere else with some affiliate code. GoogleBot would then grab the original link on the client's domain and index it... even though the page served is on Amazon or somewhere else. Ergo, I don't have a page to noindex tag. I have to get this 150K block of cruft out of Google's index, but without actual pages to noindex tag, it's a bit of a puzzler. Any ideas? Thanks! Best... Michael P.S., All 150K URLs seem to share the same URL pattern... exmpledomain.com/item/... so /item/ is common to all of them, if that helps.
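Since all 150K URLs share the /item/ prefix, a quick sanity check before acting on that pattern (blocking the path in robots.txt plus a removal-by-prefix request, for example) is to confirm the prefix really covers them all. A rough Python sketch, with placeholder URLs:

```python
from urllib.parse import urlparse

def matches_item_pattern(url, prefix="/item/"):
    """True if the URL's path falls under the shared /item/ prefix."""
    return urlparse(url).path.startswith(prefix)

urls = [
    "https://exmpledomain.com/item/abc?aff=123",
    "https://exmpledomain.com/blog/post",
]
print([u for u in urls if matches_item_pattern(u)])
# ['https://exmpledomain.com/item/abc?aff=123']
```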
Intermediate & Advanced SEO | | 945010 -
Any way to force a URL out of Google index?
As far as I know, there is no way to truly FORCE a URL to be removed from Google's index. We have a page that is being stubborn. Even after it was 301 redirected to an internal secure page months ago and a noindex tag was placed on it in the backend, it still remains in the Google index. I also submitted a request through the remove outdated content tool https://www.google.com/webmasters/tools/removals and it said the content has been removed. My understanding though is that this only updates the cache to be consistent with the current index. So if it's still in the index, this will not remove it. Just asking for confirmation - is there truly any way to force a URL out of the index? Or to even suggest more strongly that it be removed? It's the first listing in this search https://www.google.com/search?q=hcahranswers&rlz=1C1GGRV_enUS753US755&oq=hcahr&aqs=chrome.0.69i59j69i57j69i60j0l3.1700j0j8&sourceid=chrome&ie=UTF-8
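One thing worth checking is what the URL actually serves, because the signals can conflict: a URL that 301-redirects never serves its body, so an on-page meta noindex on it is invisible to crawlers. A hypothetical helper to reason about that (the status and header values are illustrative):

```python
def index_signals(status, headers, body=""):
    """List the de-indexing signals a crawler would actually see for one URL."""
    signals = []
    if 300 <= status < 400:
        signals.append("redirects to " + headers.get("Location", "?"))
        # The body is never served on a redirect, so any meta noindex
        # on this URL goes unseen.
    if "noindex" in headers.get("X-Robots-Tag", ""):
        signals.append("noindex via X-Robots-Tag header")
    elif "noindex" in body:
        signals.append("noindex in page body")
    return signals

print(index_signals(301, {"Location": "https://example.com/secure"}))
# ['redirects to https://example.com/secure']
```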
Intermediate & Advanced SEO | | MJTrevens0 -
If I block a URL via the robots.txt - how long will it take for Google to stop indexing that URL?
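One nuance worth keeping in mind: a robots.txt Disallow stops Google from crawling the URL, but a URL that is already indexed can linger in the index for some time, since a noindex signal on the page can no longer be crawled. You can at least verify what a rule blocks with Python's standard library (the rules below are hypothetical):

```python
import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
# Hypothetical robots.txt contents:
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])
print(rp.can_fetch("Googlebot", "https://example.com/private/page"))  # False
print(rp.can_fetch("Googlebot", "https://example.com/public/page"))   # True
```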
Intermediate & Advanced SEO | | Gabriele_Layoutweb0 -
Https & http urls in Google Index
Hi everyone, this question is a two-parter: I am now working for a large website - over 500k monthly organic traffic. The site currently has both http and https URLs in Google's index. The website has not formally converted to https. The https began with an error and has evolved unchecked over time. Both versions of the site (http & https) are registered in Webmaster Tools, so I can clearly track and see that, as time passes, http indexation is decreasing and https has been increasing. The ratio is at about 3:1 in favor of https at this time. Traffic over the last year has slowly dipped; however, over the last two months there has been a steady decline in overall visits registered through analytics. No single page appears to be the culprit; this decline is occurring across most pages of the website, pages which traditionally draw heavy traffic - including the home page. Considering that Google is giving priority to https pages, could it be possible that the split is having a negative impact on traffic as rankings sway? Additionally, mobile activity for the site has steadily increased both from a traffic and a conversion standpoint. However, that traffic has also dipped significantly over the last two months. Looking at Google's mobile usability errors page I see a significant number of errors (over 1k). I know Google has been testing and changing mobile ranking factors; is it safe to posit that this could be having an impact on mobile traffic? The traffic declines are 9-10% MOM. Thank you. ~Geo
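As a first diagnostic step, exporting the indexed URLs from both Webmaster Tools profiles and pairing up pages that appear under both schemes can show how big the http/https overlap really is. A minimal Python sketch (the URL list is illustrative):

```python
from urllib.parse import urlsplit, urlunsplit

def scheme_duplicates(urls):
    """Group indexed URLs that differ only by http vs. https."""
    schemes_by_page = {}
    for url in urls:
        parts = urlsplit(url)
        # Key on the URL with its scheme stripped:
        key = urlunsplit(("", parts.netloc, parts.path, parts.query, ""))
        schemes_by_page.setdefault(key, set()).add(parts.scheme)
    return {k: sorted(v) for k, v in schemes_by_page.items() if len(v) > 1}

print(scheme_duplicates([
    "http://example.com/a",
    "https://example.com/a",
    "https://example.com/b",
]))
# {'//example.com/a': ['http', 'https']}
```

Every page in the output competes with itself under two schemes, which is where a rel=canonical or a site-wide 301 to one scheme would normally come in.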
Intermediate & Advanced SEO | | Geosem0 -
Why are some pages indexed but not cached by Google?
The question is simple but I don't understand the answer. I found a webpage that was linking to my personal site. The page was indexed in Google. However, there was no cache option and I received a 404 from Google when I tried using cache:www.thewebpage.com/link/. What exactly does this mean? Also, does it have any negative implication on the SEO value of the link that points to my personal website?
Intermediate & Advanced SEO | | mRELEVANCE0 -
Google showing high volume of URLs blocked by robots.txt in index - should we be concerned?
If we search site:domain.com vs. www.domain.com, we see 130,000 vs. 15,000 results. When reviewing the site:domain.com results, we're finding that the majority of the URLs shown are blocked by robots.txt. They are subdomains that we use as production environments (and contain similar content to the rest of our site). And we also find the message "In order to show you the most relevant results, we have omitted some entries very similar to the 541 already displayed." SEER Interactive mentions that this is one way to gauge a Panda penalty: http://www.seerinteractive.com/blog/100-panda-recovery-what-we-learned-to-identify-issues-get-your-traffic-back We were hit by Panda some time back - is this an issue we should address? Should we unblock the subdomains and add noindex, follow?
Intermediate & Advanced SEO | | nicole.healthline0 -
Freshness Index?
Hi, I've been a member for a few months but this is my first entry. I typically build small portal websites to help attract more customers for small businesses, approx. 5-7 pages and very tightly optimized around one primary keyword and 2 secondary keywords. These are typically very low competition. I do no link building to speak of. I don't keyword stuff or use poorly written content. I know that may be subjective, but I believe the content I am using is genuinely useful to the reader. What I have noticed recently is the sites get ranked quite well to begin with, e.g. anywhere from the bottom half of the first page to page 2-3, and they stick for maybe 2-3 weeks, and the client is very happy. They then just vanish. It's not just the Google dance either; these sites typically don't come back at all, and when they do they are 100+. I was advised this was due to the freshness index, but honestly these sites are hardly newsworthy... just wondering if anyone had any ideas? Many thanks in advance.
Intermediate & Advanced SEO | | nichemarkettools0