Google isn't seeing the content but it is still indexing the webpage
-
When I fetch my website page using GWT this is what I receive.
HTTP/1.1 301 Moved Permanently
X-Pantheon-Styx-Hostname: styx1560bba9.chios.panth.io
server: nginx
content-type: text/html
location: https://www.inscopix.com/
x-pantheon-endpoint: 4ac0249e-9a7a-4fd6-81fc-a7170812c4d6
Cache-Control: public, max-age=86400
Content-Length: 0
Accept-Ranges: bytes
Date: Fri, 14 Mar 2014 16:29:38 GMT
X-Varnish: 2640682369 2640432361
Age: 326
Via: 1.1 varnish
Connection: keep-alive
What I used to get is this:
HTTP/1.1 200 OK
Date: Thu, 11 Apr 2013 16:00:24 GMT
Server: Apache/2.2.23 (Amazon)
X-Powered-By: PHP/5.3.18
Expires: Sun, 19 Nov 1978 05:00:00 GMT
Last-Modified: Thu, 11 Apr 2013 16:00:24 +0000
Cache-Control: no-cache, must-revalidate, post-check=0, pre-check=0
ETag: "1365696024"
Content-Language: en
Link: ; rel="canonical",; rel="shortlink"
X-Generator: Drupal 7 (http://drupal.org)
Connection: close
Transfer-Encoding: chunked
Content-Type: text/html; charset=utf-8
Followed by the start of the page markup:
xmlns:content="http://purl.org/rss/1.0/modules/content/"
xmlns:dc="http://purl.org/dc/terms/"
xmlns:foaf="http://xmlns.com/foaf/0.1/"
xmlns:og="http://ogp.me/ns#"
xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
xmlns:sioc="http://rdfs.org/sioc/ns#"
xmlns:sioct="http://rdfs.org/sioc/types#"
xmlns:skos="http://www.w3.org/2004/02/skos/core#"
xmlns:xsd="http://www.w3.org/2001/XMLSchema#"><title>Inscopix | In vivo rodent brain imaging</title>
-
Well, I didn't see all of that, but I did recognize the site-wide redirect. GWT wasn't updated to the new https website, so I was trying to pull data from the old one and obviously wasn't getting anything.
Thanks for looking into this and laying it out for me. I appreciate it.
-
I just looked. Your entire website is 301 redirecting from the http version to the https version. You have a site-wide 301 in place. If you are submitting the http URL to GWT's Fetch as Googlebot, then you will see the 301 response and that is it.
It looks like you also changed web servers from Apache to Nginx. Nginx IMHO is a better setup than Apache so that is a good thing.
This all comes back to the fact that whoever develops/manages your website updated your web server, converted you over to https site-wide, and put 301s in place to move users from the old URLs to the new URLs. So the response from Fetch as Google is expected.
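You can see the same thing outside of GWT by looking at only the first response. Here's a minimal sketch that pulls the status code and `Location` header out of a raw response like the one pasted above (note that header names are case-insensitive, which is why the new server's lowercase `location:` is still valid):

```python
def parse_status_and_location(raw_response):
    """Extract the status code and Location header from a raw HTTP response."""
    lines = raw_response.strip().splitlines()
    status = int(lines[0].split()[1])           # e.g. "HTTP/1.1 301 Moved Permanently"
    location = None
    for line in lines[1:]:
        name, _, value = line.partition(":")
        if name.strip().lower() == "location":  # header names are case-insensitive
            location = value.strip()
    return status, location

# Abridged version of the 301 response shown in the question.
raw = """HTTP/1.1 301 Moved Permanently
server: nginx
location: https://www.inscopix.com/
Content-Length: 0"""

print(parse_status_and_location(raw))  # (301, 'https://www.inscopix.com/')
```

A 200 response with no `Location` header would come back as `(200, None)`, which is what the old Apache response above would have produced.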
-
Doh! I just figured it out. But thanks for the help, it was just a stupid oversight on my part.
-
Just to verify, is that the URL you are submitting to GWT? Has that changed?
-
Just to clarify (because I'm a newbie): the _location: https://www.inscopix.com/_ in the first fetch example is the URL the 301 is redirecting to, correct?
-
The page is not invisible; it is responding with the 301 redirect you have in place.
Say this is page/URL A, and you used to get the response with the content. Once you put a 301 in place, there is no "content" on page/URL A; there is just the redirect. The response from GWT is good in that it can see the 301 redirect.
If you set up a 301 redirect from page A to page B, enter the URL for page B to see the content of the page. Googlebot, when crawling a website and indexing pages, will follow the redirect. I am not sure that Fetch as Googlebot does this.
Update: according to this page, the Fetch as Googlebot tool does not follow 301 redirects:
http://www.webnots.com/what-is-fetch-as-google.html
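As a rough sketch of the difference: a crawler resolves the whole redirect chain to the final URL, while the fetch tool just reports the first 301 and stops. The chain-following behavior (with a loop guard) looks something like this, using hypothetical example.com URLs:

```python
def resolve_redirects(url, redirects, max_hops=10):
    """Follow a chain of 301 mappings to the final URL, as a crawler would.

    `redirects` maps a URL to its redirect target; a URL absent from the
    map serves content directly, so the chain ends there.
    """
    seen = set()
    while url in redirects:
        if url in seen:
            raise ValueError("redirect loop at " + url)
        if len(seen) >= max_hops:
            raise ValueError("too many redirect hops")
        seen.add(url)
        url = redirects[url]
    return url

# Hypothetical page A -> page B mapping like the one described above.
chain = {"http://example.com/page-a": "https://example.com/page-b"}
print(resolve_redirects("http://example.com/page-a", chain))  # https://example.com/page-b
```

Feeding it page B directly returns page B unchanged, which is why fetching the destination URL shows you the content.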
Cheers!
Related Questions
-
Google Is Indexing my 301 Redirects to Other sites
Long story, but now I have a few links from my site 301 redirecting to YouTube videos or eCommerce stores. They carry a considerable amount of traffic that I benefit from, so I can't take them down, and that traffic is people from other websites. So basically I have backlinks from places that I don't own pointing to my redirect URLs (e.g. http://example.com/redirect). My problem is that Google is indexing them and doesn't let them go. I have tried blocking that URL from robots.txt, but Google is still indexing it uncrawled. I have also tried allowing Google to crawl it and adding noindex from robots.txt. I have tried removing it from GWT, but it pops back again after a few days. Any ideas? Thanks!
Intermediate & Advanced SEO | | cuarto7150 -
How good is Google at reading geo-targeted dynamic content -- Javascript?
We are using a single page application for a section of our website where it generates content based on the user's geographical location. Because Google's Search Console is searching from Virginia (where we don't have any content), we are not able to see anything render in Google Search Console. How good is Google at reading geo-targeted dynamic content? Do we have anything to worry about in terms of indexing the content because it's being served through JS?
Intermediate & Advanced SEO | | imjonny1230 -
Does collapsing content impact Google SEO signals?
Recently I have been promoting custom long form content development for major brand clients. For UX reasons we collapse the content so only 2-3 sentences of the first paragraph are visible. However there is a "read more" link that expands the entire content piece. I have believed that the searchbots would have no problem crawling, indexing and applying a positive SEO signal for this content. However, I'm starting to wonder. Is there any evidence that the Google search algorithm could possibly discount or even ignore collapsed content?
Intermediate & Advanced SEO | | RosemaryB -
What if my site isn't ready for Mobile Armageddon by April 21st??
Hello Moz Experts, I am fighting for one of our sites to be mobile optimized, but the fight is taking longer than anticipated (need approval from higher ups). What happens if my site is not ready by April 21st? Will it take long to recover, like Penguin? Or, will the recovery be fairly quick? Say I release a mobile version of my site a week later. Then Google will have to reindex it and rank me again. How long will that take before I regain my traffic? Thanks,
Intermediate & Advanced SEO | | TMI.com0 -
I'm updating content that is out of date. What is the best way to handle if I want to keep old content as well?
So here is the situation. I'm working on a site that offers "Best Of" Top 10 list type content. They have a list that ranks very well but is out of date. They'd like to create a new list for 2014, but have the old list exist. Ideally the new list would replace the old list in search results. Here's what I'm thinking, but let me know if you think there's a better way to handle this: Put a "View New List" banner on the old page. Make sure all internal links point to the new page. Rel=canonical tag on the old list pointing to the new list. Does this seem like a reasonable way to handle this?
Intermediate & Advanced SEO | | jim_shook0 -
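The rel=canonical step described above can be sanity-checked programmatically. A small sketch using Python's stdlib HTML parser (the page markup and list URL here are hypothetical):

```python
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Collect the href of a <link rel="canonical"> tag, if present."""
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and a.get("rel") == "canonical":
            self.canonical = a.get("href")

# Hypothetical old-list page pointing search engines at the new list.
html = '<head><link rel="canonical" href="https://example.com/best-of-2014/"></head>'
finder = CanonicalFinder()
finder.feed(html)
print(finder.canonical)  # https://example.com/best-of-2014/
```

Running a check like this against the old page after deployment confirms the canonical actually points at the new list rather than at itself.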
Is 301 redirecting your index page to the root '/' safe to do or do you end up in an endless loop?
Hi, I need to tidy up my home page a little. I have some links to our index.html page, but I just want them to go to the root '/', so I thought I could 301 redirect it. However, is this safe to do? I'm getting duplicate page notifications in my analytics reporting tools about the home page and need a quick way to fix this issue. Many thanks in advance, David
Intermediate & Advanced SEO | | David-E-Carey0 -
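On the loop question above: a 301 from /index.html to / only loops if / itself issues another redirect. A tiny sketch of the hop-counting logic (paths taken from the question; this assumes the server serves / directly rather than internally 301'ing it back to index.html):

```python
# The redirect map has a single entry: /index.html -> /.
# "/" has no entry of its own, so the chain terminates after one hop.
redirects = {"/index.html": "/"}

path, hops = "/index.html", 0
while path in redirects:
    path = redirects[path]
    hops += 1
print(path, hops)  # / 1
```

If the server were misconfigured so that / also redirected to /index.html, the map would contain both entries and the loop above would never exit, which is exactly the endless-loop case the question is worried about.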
How to get the 'show map of' tag/link in Google search results
I have 2 clients that have apparently random examples of the 'show map of' link in Google search results. The maps/addresses are accurate and for airports. They are both aggregators; they service the airports, e.g. lax airport shuttle (not actual example), BUT DO NOT have Google Place listings for these pages, either manual OR auto-populated from Google, and DO NOT have the map or address info on the pages that are returned in the search results with the map link. Does anyone know how this is the case? It's great that this happens for them, but I'd like to know how/why so I can replicate it across all their appropriate pages. My understanding was that for this to happen you HAD to have Google Place pages for the appropriate pages (which they can't do as they are aggregators). Thanks in advance, Andy
Intermediate & Advanced SEO | | AndyMacLean0 -
Best solution to get mass URLs out of the SEs' index
Hi, I've got an issue where our web developers have made a mistake on our website by messing up some URLs. Because our site works dynamically (i.e. the URLs generated on a page are relative to the current URL), the problem URLs linked out to more problem URLs, effectively replicating an entire website directory under problem URLs. This has caused tens of thousands of URLs in the SEs' indexes which shouldn't be there. So say for example the problem URLs are like www.mysite.com/incorrect-directory/folder1/page1/. It seems I can correct this by doing the following:
1/. Use robots.txt to disallow access to /incorrect-directory/*
2/. 301 the URLs like this: www.mysite.com/incorrect-directory/folder1/page1/ 301 to: www.mysite.com/correct-directory/folder1/page1/
3/. 301 the URLs to the root of the correct directory like this: www.mysite.com/incorrect-directory/folder1/page1/, www.mysite.com/incorrect-directory/folder1/page2/ and www.mysite.com/incorrect-directory/folder2/ all 301 to: www.mysite.com/correct-directory/
Which method do you think is the best solution? I doubt there is any link juice benefit from 301'ing the URLs, as there shouldn't be any external links pointing to the wrong URLs.
Intermediate & Advanced SEO | | James77
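Option 1/. above can be sanity-checked before deploying with Python's stdlib robots.txt parser. A sketch using the hypothetical directory names from the question; note that Disallow rules are prefix matches, so the trailing * isn't needed:

```python
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /incorrect-directory/",  # prefix match blocks everything below it
])

print(rp.can_fetch("*", "http://www.mysite.com/incorrect-directory/folder1/page1/"))  # False
print(rp.can_fetch("*", "http://www.mysite.com/correct-directory/folder1/page1/"))    # True
```

One caveat worth noting: disallowing crawl by itself won't drop already-indexed URLs, and it prevents Googlebot from ever seeing the 301s in option 2/., so blocking the old directory before the redirects have been crawled can keep the stale URLs in the index longer.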