Google Fetch and Render - Partial result (resources temporarily unavailable)
-
Over the past few weeks, my website pages have been showing as 'Partial' in Google Search Console. Many resources (JS, CSS, and image files) are listed as 'temporarily unreachable'. The website files haven't had any structural changes for about two years (it has historically always shown as 'Complete' and rendered absolutely fine in Search Console).
I have checked, and the robots.txt is fine, as is the sitemap. My host hasn't been very helpful, but has confirmed there are no server issues. My rankings have now dropped, which I think is due to these resource issues, and I need to clear this up ASAP. Can anyone here offer any assistance? It would be hugely appreciated.
Thanks,
Dan
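For reference, one way to double-check the "robots.txt is fine" claim is Python's built-in `robotparser`, which applies the same group-matching rules a crawler would. This is only a sketch: the `sample_robots` string and the example.com URLs below are made up, not Dan's real file.

```python
from urllib.robotparser import RobotFileParser

# A sample robots.txt (a stand-in for the real file) that blocks one folder:
sample_robots = """\
User-agent: *
Disallow: /private/
"""

rp = RobotFileParser()
rp.parse(sample_robots.splitlines())

# With no Googlebot-specific group, Googlebot falls back to the '*' rules:
print(rp.can_fetch("Googlebot", "https://www.example.com/css/site.css"))   # True
print(rp.can_fetch("Googlebot", "https://www.example.com/private/a.js"))   # False
```

Running this against your live robots.txt (via `rp.set_url(...)` and `rp.read()`) for each 'temporarily unreachable' resource confirms whether robots rules are involved; if every resource comes back allowed, as here, the problem lies elsewhere.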
-
Can anyone suggest any answers, or has anyone had similar issues? I continue to monitor the site via Fetch and Render and the issues remain the same: lots of images, CSS and JS files show as 'Temporarily Unreachable' (yet they do exist, and each one opens fine when the link is clicked). The website functions fine otherwise.
As I say, I have even changed website hosts and it is still the same. This is really affecting my rankings, and if anyone has any clues I would be most grateful.
Many thanks,
Dan
-
Hi Martijn,
Thank you for your response!
The Fetch and Render results for a page of the website look different every time. Sometimes it is images that are 'Temporarily Unreachable', sometimes it is CSS/JS files, and sometimes both. However, there is never a 'Complete' result; it is always some form of 'Partial' result. All the files/images are reachable when you click on them, however, and nothing is blocked by robots.txt.
From time to time the entire page itself shows as 'Temporarily Unreachable', although it goes back to 'Partial' after a few hours.
I have contacted my web hosts, who haven't offered much help. I actually changed web hosts and paid for a more expensive, faster server (as I assumed the server was responding too slowly for Google)!
However, the results are the same, so I'm really struggling to understand why this is happening. As before, the robots.txt file is fine, without any blocking. Could you explain what you mean by crawling with the Googlebot User-Agent?
Having had a quick scan around different forums, it seems quite a few websites are having a similar problem, but there doesn't seem to be a solution so far.
Thanks again for your time.
-
Hi Dan,
Are there any more insights into what the screenshot actually looks like when the resources aren't being loaded? In addition, I would try crawling the site/page with the Googlebot User-Agent and see for yourself what happens. In some cases your CDN or server may be blocking requests that arrive in rapid succession; obviously this shouldn't happen with Google, but it wouldn't be the first time I've seen Googlebot being blocked by a server.
Martijn.
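To illustrate Martijn's suggestion: the sketch below (Python standard library only; the URL is a placeholder for one of the 'temporarily unreachable' resources) requests the same resource with a Googlebot User-Agent string and with a regular browser one, so the returned status codes can be compared. If the Googlebot request gets a 403 or 429 while the browser request gets a 200, the server or CDN is treating the crawler differently.

```python
import urllib.request
import urllib.error

GOOGLEBOT_UA = ("Mozilla/5.0 (compatible; Googlebot/2.1; "
                "+http://www.google.com/bot.html)")
BROWSER_UA = "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"

def fetch_status(url, user_agent):
    """Return the HTTP status code the server gives this User-Agent."""
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status
    except urllib.error.HTTPError as e:
        # An error status for the Googlebot UA but not the browser UA
        # suggests the server or CDN is blocking the crawler.
        return e.code

# Usage (swap in one of your flagged resources):
# print(fetch_status("https://example.com/css/site.css", GOOGLEBOT_UA))
# print(fetch_status("https://example.com/css/site.css", BROWSER_UA))
```

Note this only checks the User-Agent header; some firewalls verify Googlebot by IP address (reverse DNS), so a clean result here doesn't fully rule out crawler-specific blocking.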