Google isn't seeing the content but it is still indexing the webpage
-
When I fetch a page of my website using GWT (Fetch as Google), this is the response I receive:
HTTP/1.1 301 Moved Permanently
X-Pantheon-Styx-Hostname: styx1560bba9.chios.panth.io
server: nginx
content-type: text/html
location: https://www.inscopix.com/
x-pantheon-endpoint: 4ac0249e-9a7a-4fd6-81fc-a7170812c4d6
Cache-Control: public, max-age=86400
Content-Length: 0
Accept-Ranges: bytes
Date: Fri, 14 Mar 2014 16:29:38 GMT
X-Varnish: 2640682369 2640432361
Age: 326
Via: 1.1 varnish
Connection: keep-alive
What I used to get is this:
HTTP/1.1 200 OK
Date: Thu, 11 Apr 2013 16:00:24 GMT
Server: Apache/2.2.23 (Amazon)
X-Powered-By: PHP/5.3.18
Expires: Sun, 19 Nov 1978 05:00:00 GMT
Last-Modified: Thu, 11 Apr 2013 16:00:24 +0000
Cache-Control: no-cache, must-revalidate, post-check=0, pre-check=0
ETag: "1365696024"
Content-Language: en
Link: ; rel="canonical",; rel="shortlink"
X-Generator: Drupal 7 (http://drupal.org)
Connection: close
Transfer-Encoding: chunked
Content-Type: text/html; charset=utf-8
The fetched page markup began with:
xmlns:content="http://purl.org/rss/1.0/modules/content/"
xmlns:dc="http://purl.org/dc/terms/"
xmlns:foaf="http://xmlns.com/foaf/0.1/"
xmlns:og="http://ogp.me/ns#"
xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
xmlns:sioc="http://rdfs.org/sioc/ns#"
xmlns:sioct="http://rdfs.org/sioc/types#"
xmlns:skos="http://www.w3.org/2004/02/skos/core#"
xmlns:xsd="http://www.w3.org/2001/XMLSchema#"><title>Inscopix | In vivo rodent brain imaging</title>
-
Well, I didn't see all of that, but I did recognize the site-wide redirect. GWT wasn't updated to the new https website, so I was trying to pull data from the old one and obviously wasn't getting anything.
Thanks for looking into this and laying it out for me. I appreciate it.
-
I just looked. Your entire website is 301 redirecting from the http version to the https version; you have a site-wide 301 in place. If you are submitting the http URL to GWT Fetch as Googlebot, then you will see the 301 response and that is it.
It looks like you also changed web servers from Apache to Nginx. Nginx, IMHO, is a better setup than Apache, so that is a good thing.
It all comes back to this: whoever develops/manages your website updated your web server, converted you to https site-wide, and put 301s in place to move users from the old URLs to the new ones. So the response from Fetch as Google is expected.
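If you want to reproduce what Fetch as Google sees outside of GWT, here is a rough sketch using Python's requests library (just an illustration, assuming requests is installed; swap in whatever http URL you are submitting):

```python
# Rough sketch: request the old http URL without following redirects,
# which is roughly what Fetch as Google reports back to you.
# Requires the third-party "requests" package (pip install requests).
import requests

# The http URL you were submitting to GWT (example; adjust as needed).
url = "http://www.inscopix.com/"

response = requests.get(url, allow_redirects=False)

print(response.status_code)              # expect 301
print(response.headers.get("Location"))  # expect https://www.inscopix.com/
```

A 301 status, a Location header, and an empty body (Content-Length: 0) is exactly what your first fetch example shows.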
-
Doh! I just figured it out. But thanks for the help, it was just a stupid oversight on my part.
-
Just to verify, is that the URL you are submitting to GWT? Has that changed?
-
Just to clarify (because I'm a newbie): the location: https://www.inscopix.com/ in the first fetch example is the URL the 301 is redirecting to, correct?
-
The page is not invisible; it is responding with the 301 redirect you have in place.
Say this is page/URL A, and you used to get the response with the content. Once you put a 301 in place, there is no "content" on page/URL A; there is just the redirect. The response from GWT is good in that it can see the 301 redirect.
If you set up a 301 redirect from page A to page B, enter the URL for page B to see the content of the page. Googlebot, when crawling a website and indexing pages, will follow the redirect. I am not sure that Fetch as Googlebot does this.
Update: According to this page, the Fetch as Googlebot tool does not follow 301 redirects:
http://www.webnots.com/what-is-fetch-as-google.html
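Fetch as Google stops at the redirect, but you can follow the chain yourself to confirm where the http URL ends up and that the destination returns a 200 with content. Here is a quick sketch, again using Python's requests library (the URL is just an example):

```python
# Rough sketch: follow the redirect chain the way a normal crawler or
# browser would, printing each hop and the final destination.
import requests

url = "http://www.inscopix.com/"  # example http URL; adjust as needed

response = requests.get(url, allow_redirects=True)

# Each intermediate hop (e.g. the 301 from http to https).
for hop in response.history:
    print(hop.status_code, hop.url, "->", hop.headers.get("Location"))

# The final URL, its status, and how much HTML came back.
print(response.status_code, response.url, len(response.content), "bytes")
```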
Cheers!