Google isn't seeing the content but it is still indexing the webpage
-
When I fetch a page on my website using GWT, this is what I receive:
HTTP/1.1 301 Moved Permanently
X-Pantheon-Styx-Hostname: styx1560bba9.chios.panth.io
server: nginx
content-type: text/html
location: https://www.inscopix.com/
x-pantheon-endpoint: 4ac0249e-9a7a-4fd6-81fc-a7170812c4d6
Cache-Control: public, max-age=86400
Content-Length: 0
Accept-Ranges: bytes
Date: Fri, 14 Mar 2014 16:29:38 GMT
X-Varnish: 2640682369 2640432361
Age: 326
Via: 1.1 varnish
Connection: keep-alive
What I used to get is this:
HTTP/1.1 200 OK
Date: Thu, 11 Apr 2013 16:00:24 GMT
Server: Apache/2.2.23 (Amazon)
X-Powered-By: PHP/5.3.18
Expires: Sun, 19 Nov 1978 05:00:00 GMT
Last-Modified: Thu, 11 Apr 2013 16:00:24 +0000
Cache-Control: no-cache, must-revalidate, post-check=0, pre-check=0
ETag: "1365696024"
Content-Language: en
Link: ; rel="canonical",; rel="shortlink"
X-Generator: Drupal 7 (http://drupal.org)
Connection: close
Transfer-Encoding: chunked
Content-Type: text/html; charset=utf-8
xmlns:content="http://purl.org/rss/1.0/modules/content/"
xmlns:dc="http://purl.org/dc/terms/"
xmlns:foaf="http://xmlns.com/foaf/0.1/"
xmlns:og="http://ogp.me/ns#"
xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
xmlns:sioc="http://rdfs.org/sioc/ns#"
xmlns:sioct="http://rdfs.org/sioc/types#"
xmlns:skos="http://www.w3.org/2004/02/skos/core#"
xmlns:xsd="http://www.w3.org/2001/XMLSchema#"><title>Inscopix | In vivo rodent brain imaging</title>
-
Well, I didn't see all of that, but I did recognize the site-wide redirect. GWT wasn't updated to the new https website, so I was trying to pull data from the old one and obviously wasn't getting anything.
Thanks for looking into this and laying it out for me. I appreciate it.
-
I just looked. Your entire website is 301 redirecting from the http version to the https version; you have a site-wide 301 in place. If you are submitting the http URL to GWT's Fetch as Googlebot, then you will see the 301 response and that is it.
It looks like you also changed web servers from Apache to Nginx. Nginx, IMHO, is a better setup than Apache, so that is a good thing.
It all comes back to this: whoever develops/manages your website updated your web server, converted you over to https site-wide, and put 301s in place to move users from the old URLs to the new URLs. So the response from Fetch as Google is expected.
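If you want to reproduce what Fetch as Googlebot is showing you outside of GWT, you can request the http URL yourself and stop at the first response instead of following the redirect. A minimal sketch using Python's requests library (the URL is the one from your fetch output; this is just an illustration of the check, not anything GWT-specific):

```python
import requests

# Ask for the old http URL, but do not follow the redirect,
# which is effectively what Fetch as Googlebot reports.
resp = requests.get("http://www.inscopix.com/", allow_redirects=False)

print(resp.status_code)               # expected: 301
print(resp.headers.get("Location"))   # expected: https://www.inscopix.com/
print(len(resp.content))              # expected: 0 (no body, only the redirect)
```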
-
Doh! I just figured it out. But thanks for the help; it was just a stupid oversight on my part.
-
Just to verify, is that the URL you are submitting to GWT? Has that changed?
-
Just to clarify (because I'm a newbie): the location: https://www.inscopix.com/ in the first fetch example is the URL the 301 is redirecting to, correct?
-
The page is not invisible; it is responding with the 301 redirect you have in place.
Say this is Page/URL A, the one you used to get the response with the content from. Once you put a 301 in place, there is no "content" on Page/URL A; there is just the redirect. The response from GWT is good in that it can see the 301 redirect.
If you set up a 301 redirect from page A to page B, enter the URL for page B to see the content of the page. Googlebot, when crawling a website and indexing pages, will follow the redirect. I am not sure that Fetch as Googlebot does this.
#Update#
According to this page, the Fetch as Googlebot tool does not follow 301 redirects:
http://www.webnots.com/what-is-fetch-as-google.html
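If it helps to see the difference, here is a rough sketch (again using Python's requests library; page A and page B below are hypothetical placeholder URLs) that follows the redirect chain the way a normal crawl would, as opposed to stopping at the 301 like Fetch as Google:

```python
import requests

# Hypothetical setup: page A 301-redirects to page B.
url_a = "http://www.example.com/page-a"   # placeholder for the old http URL

# Follow redirects to the final destination, as Googlebot does while crawling.
resp = requests.get(url_a, allow_redirects=True)

# Each hop in the chain: page A shows up here with its 301.
for hop in resp.history:
    print(hop.status_code, hop.url, "->", hop.headers.get("Location"))

# The final response is page B, which is where the content actually lives.
print(resp.status_code, resp.url)
print(len(resp.content), "bytes of HTML in the final response")
```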
Cheers!
Related Questions
-
Previously blacklisted website still not appearing on Google searches.
We have a client who, before us, had a website that was blacklisted by Google. After we created their new website, we submitted an appeal through Google's Webmaster Tools, and it was approved. One year later, they are still unable to rank for anything on Google. The keyword we are attempting to rank for on their home page is "Day in the Life Legal Videos", which shouldn't be too difficult to rank for after a year. But their website cannot be found. What else can we do to repair this previously blacklisted website after we've already been approved by Google? Here is the website in question: https://www.verdictvideos.com/
Intermediate & Advanced SEO | rodneywarner0
-
How can I make a list of all URLs indexed by Google?
I started working for this eCommerce site 2 months ago, and my SEO site audit revealed a massive spider trap. The site should have been 3500-ish pages, but Google has over 30K pages in its index. I'm trying to find an effective way of making a list of all URLs indexed by Google. Anyone? (I basically want to build a sitemap with all the indexed spider trap URLs, then set up 301s on those, then ping Google with the "defective" sitemap so they can see what the site really looks like and remove those URLs, shrinking the site back to around 3500 pages.)
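A rough sketch of what the sitemap-and-ping part of that plan could look like in Python (the file names and URLs below are hypothetical placeholders; the final call uses Google's sitemap ping endpoint):

```python
import urllib.parse
import urllib.request
from xml.sax.saxutils import escape

# Hypothetical input file: one indexed spider-trap URL per line,
# e.g. collected from a site: scrape or from server logs.
with open("indexed_trap_urls.txt") as f:
    urls = [line.strip() for line in f if line.strip()]

# Build a minimal sitemap.xml containing only those URLs.
entries = "\n".join(f"  <url><loc>{escape(u)}</loc></url>" for u in urls)
sitemap = (
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
    f"{entries}\n</urlset>\n"
)
with open("defective-sitemap.xml", "w") as f:
    f.write(sitemap)

# After uploading the file to the site, ping Google with its location.
sitemap_url = "http://www.example.com/defective-sitemap.xml"  # placeholder
ping = "http://www.google.com/ping?sitemap=" + urllib.parse.quote(sitemap_url, safe="")
urllib.request.urlopen(ping)
```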
Intermediate & Advanced SEO | Bryggselv.no0
-
My landing pages don't show up in the SERPs, only my frontpage does.
I am having some trouble with getting the landing pages for a client's website to show up in the SERPs. As far as I can see, the pages are optimized well, and they also get indexed by Google. The website is a Danish webshop that sells wine, www.vindanmark.com. Take, for instance, this landing page: http://www.vindanmark.com/vinhandel/
It is optimized for the keywords "Vinhandel Århus". "Vinhandel" means "wine store" and "Århus" is a Danish city. As you can see, I manage to get them onto page 1 (#10), but it's the frontpage that ranks for the keyword, and this goes for all the other landing pages as well. I can't figure out why the frontpage keeps outranking the landing pages on every keyword. What am I doing wrong here?
Intermediate & Advanced SEO | InmediaDK
-
Weird rankings on my website, can't figure it out
Hey guys, one of my most popular pages for "Rust Hacks" used to be http://www.ilikecheats.com/01/rust-cheats-hacks-aimbot/ Now, when searching Google for site:ilikecheats.com rust hacks, this page shows as the highest ranking: http://forum.ilikecheats.com/forums/221-Rust-Hacks-Rust-Cheats-Public-Forum What's weird is that the entire front end (the WordPress site) isn't ranking well anymore on page #1 of Google, and the forums are currently ranking better. I did have a huge penalty from backlinks last year but cleared it. I got Yoast to do a site review and I'm cleaning up everything now. I also cleared most of the bad links via the disavow tool. Another example: when I search for "warz hacks", the forums show up in 4th place, but the main website isn't showing at all until back on page 10. If I search site:ilikecheats.com warz hacks, the links directly to the main site don't show until page #2. So is this still a penalty that has carried over, or is something else going on? Can't seem to figure it out, thanks in advance for looking. 😃 Any ideas what's going on and why the main pages no longer rank? http://www.ilikecheats.com
Intermediate & Advanced SEO | Draden670
-
Thousands of Web Pages Disappeared from Google Index
The site is http://shop.riversideexports.com. We checked Webmaster Tools, nothing strange. Then we manually resubmitted using Webmaster Tools about a month ago. Now we are only seeing about 15 pages indexed. The rest of the sites on our network are heavily indexed and ranking really well, BUT the sites that are using a subdomain are not. Could this be a subdomain issue? If so, how? If not, what is causing this? Please advise. UPDATE: What we can also share is that the site was cleared twice in its lifetime (all pages deleted and re-generated). The first two times we had full indexing; now this site hovers at 15 results in the index. We have many other sites in the network that have very similar attributes (such as redundant or empty meta) and none have behaved this way. The broader question is: how do we get the indexing back?
Intermediate & Advanced SEO | suredone0
-
Old pages still in index
Hi guys, I've been working on an e-commerce site for a while now. Let me sum it up: the new site was launched in February; due to a lack of resources we started 301s of the old URLs in March; we added rel=canonical at the end of May because of huge index numbers (the developers forgot!); and we added noindex and robots.txt on at least 1,000 URLs. Index numbers have gone down from 105,000 to 55,000 for now, see screenshot (the actual number in the sitemap is 13,000). Now when I do site:domain.com there are still old URLs in the index, while there has been a 301 on those URLs since March! I know this can take a while, but I wonder how I can speed this up, or whether I am doing something wrong. Hope anyone can help, because I simply don't know how the old URLs can still be in the index. 4cArHPH.png
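A rough sketch of how the old URLs could be spot-checked to confirm what they currently return, using Python's requests library (the URLs below are hypothetical placeholders for the old URLs still showing up in the index):

```python
import requests

# Hypothetical sample of old URLs that are still indexed.
old_urls = [
    "http://www.example.com/old-category/page-1",
    "http://www.example.com/old-category/page-2",
]

for url in old_urls:
    resp = requests.get(url, allow_redirects=False, timeout=10)
    print(url)
    print("  status:", resp.status_code)                        # 301 if the redirect is live
    print("  location:", resp.headers.get("Location"))          # where the redirect points
    print("  x-robots-tag:", resp.headers.get("X-Robots-Tag"))  # noindex served via header, if any
```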
Intermediate & Advanced SEO | ssiebn70
-
Why are the archive sub-pages still indexed by Google?
Why are the archive sub-pages still indexed by Google? I am using the WordPress SEO by Yoast plugin, and I selected the option needed to get these pages noindexed in order to avoid duplicate content.
Intermediate & Advanced SEO | MichaelNewman1
-
Can you see the 'indexing rules' that are in place for your own site?
By 'indexing rules' I mean the stipulations that determine whether or not a given page will be indexed. If you can see them, how?
Intermediate & Advanced SEO | Visually0