Fetching & rendering a non-ranking page in GWT to look for issues
-
Hi
I have a client's nicely optimised webpage that isn't ranking for its target keyword, so I just did a fetch & render in GWT to look for problems. I could only do a partial fetch, with the robots.txt-related messages below:
Googlebot couldn't get all resources for this page
Some boilerplate JS plugins were reported as not found, and some JS was reported as blocked by robots.txt (file below):
User-agent: *
Disallow: /wp-admin/
Disallow: /wp-includes/

As far as I understand it, the above is how it should be, but I'm posting here to ask if anyone can confirm whether this could be causing any problems, so I can rule it out.
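For anyone wanting to sanity-check this: those rules block any crawler that honours them (including Googlebot) from paths under /wp-admin/ and /wp-includes/, which is exactly why GWT reports JS under /wp-includes/ as blocked. A minimal sketch using Python's standard library (the domain and file paths here are just placeholders, not the client's real URLs):

```python
from urllib import robotparser

# The robots.txt rules from the question, parsed in-memory.
rules = """\
User-agent: *
Disallow: /wp-admin/
Disallow: /wp-includes/
""".splitlines()

rp = robotparser.RobotFileParser()
rp.parse(rules)

# A JS file under /wp-includes/ is blocked for every user agent...
print(rp.can_fetch("Googlebot", "https://example.com/wp-includes/js/jquery.js"))  # False

# ...but the page itself is still crawlable.
print(rp.can_fetch("Googlebot", "https://example.com/some-landing-page/"))  # True
```

So a partial fetch with blocked /wp-includes/ resources is consistent with this file working as written, not a sign the page itself is blocked.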
Pages targeting other, more competitive keywords are ranking well and are almost identically optimised, so I can't think why this one isn't ranking.
Does fetch & render get Google to re-crawl the page? If I do this and then press Submit to Index, should I know within a few days whether there's still a problem?
All Best
Dan
-
OK, thanks!
Nothing has changed; I just hoped it might do something.
-
If anything changed between the 15th and today, it'll help ensure it gets updated. But that's all.
-
Thanks, Donna! Yes, it's all there and the cache date is 15 Jan, but I still thought it worthwhile fetching & rendering & submitting again. Or does that do nothing more if it's already indexed, apart from asking Google to take another look?
-
Can you see if it's cached? Try cutting and pasting the entire URL into the search window, minus the http://. If it's indexed, it should show up in search results. Not the address bar, the search window.
-
Thanks for commenting Donna !
And for providing the link to the interesting Q&A, although that isn't the scenario I'm referring to in my original question.
The page isn't ranking at all, although it's very well optimised (and not overly so), and the keyword isn't that competitive, so I would expect it to be somewhere in the first 3-4 pages; it's not even in the first 100.
Very similarly optimised pages (for other, more competitive target keywords) are ranking well. Hence the fetch & render and Submit to Index I did, just to double-check that Google is seeing the page.
Cheers
Dan
-
Hi Dan,
You might find this Q&A helpful. It offers suggestions for what to do when an unexpected page is ranking for your targeted keyword phrase. I think most, if not all, suggestions apply in your case as well. Good luck!
-
Marvellous!
Many thanks, Robert!
All Best
Dan
-
Yes, there is a lot of overlap when it comes to GWT. For the most part, if you make a submission request for crawling, the page is indexed at the same time; I believe the difference lies in the approaches that let you crawl as Google (as a test) as opposed to submitting for the official index.
In other words, what you have done is a definitive step in crawling and indexing, rather than just seeing what Google would find if it were to crawl your site. "Submit to Index" is normally something I reserve for completed sites (as opposed to stub content), to avoid content accidentally being indexed before it is ready.
In your circumstances, however, I don't think it will hurt you, and it may help you identify any outstanding issues. Just remember to avoid it if you don't want a site indexed before it is ready!
Hope this helps,
Rob
-
Hi Robert,
Thanks for your help again!
That's great, thanks. But what about Submit to Index, which I also did? Did I need to do that or not? (The sitemap section of GWT says all submitted pages are indexed, so I take it I didn't need to, but I did it anyway as a precaution.)
All Best
Dan
-
Hello again, Dan,
From what I can tell from your description, you have done what you can to make this work. We would expect JS to be blocked by that robots.txt file.
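(A side note, hedged as a common tweak rather than a required fix: if blocked JS/CSS ever did become a rendering problem, one approach on a standard WordPress layout is to keep those Disallow lines but explicitly re-allow the asset folders. Under Google's rule precedence, the more specific Allow path wins over the shorter Disallow:)

```text
User-agent: *
Disallow: /wp-admin/
Disallow: /wp-includes/
# The longer (more specific) path wins, so these re-open the assets:
Allow: /wp-includes/js/
Allow: /wp-includes/css/
```

In your case, though, the blocked files look like ordinary WordPress internals, so I wouldn't change anything yet.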
To answer your questions:
Fetch & render does get Google to re-crawl the page via GWT. A request of this nature typically takes between one and three days to process, so you should know where you stand at that point.
Feel free to put an update here and if there is further information I will see what I can do to help out.
Cheers!
Rob