No meta description pulling through in SERP with React website - requesting indexing & submitting to Google with no luck
-
Hi there,
A year ago I launched a website using React, and since then Google has not been reading my meta descriptions. I've submitted the sitemap, but there was no change in the SERP. Then I tried "Fetch and Render" and requested indexing for the homepage, which did work; however, I have over 300 pages and I can't do that for every one. I have requested a fetch, render, and index for "this URL and linked pages," and while Google's cache has updated, the SERP listing has not. I looked in the Index Coverage report in the new GSC and it says the URLs are valid and indexable, and yet there's still no meta description.
I realize that Google doesn't have to index all pages, and that Google may not always use your meta description, but I want to make sure I do my due diligence in making the website crawlable. My main questions are:
-
If Google didn't reindex ANYTHING when I submitted the sitemap, what might be wrong with my sitemap?
-
Is submitting each URL manually bad, and if so, why?
-
Am I simply jumping the gun since it's only been a week since I requested indexing for the main URL and all the linked URLs?
-
Any other suggestions?
-
Hi David,
The Fetch and Render looked blank, but I know Google can still read the code, since it picked up on the schema we added less than a week after we added it. I sent the JavaScript guides over to our developers, but I would still really appreciate you looking at the URL if possible. I can't find a way to DM you on here, so I've sent you a LinkedIn request. Feel free to ignore it if there's a better way to communicate.
- JW
-
That is an interesting question.
-
Hi,
I would mostly look into the site itself. From what you've mentioned here, I don't think the problem is in your sitemap, but more on the React side. Are you using server-side or client-side rendering for the pages in React? That can have a big impact on how Google is able to see the different pages and pick up on content (including meta tags).
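For example, if your meta tags are being set client-side with something like react-helmet (just a sketch of one common approach; your setup may differ, and the component and props here are hypothetical), the description only exists after the JavaScript runs in the browser:

```jsx
import React from 'react';
import { Helmet } from 'react-helmet';

// With client-side rendering, Helmet injects these tags into <head>
// at runtime, in the browser. They are absent from the raw HTML the
// server sends, which is roughly what a crawler's first pass sees.
function ProductPage({ title, description }) {
  return (
    <div>
      <Helmet>
        <title>{title}</title>
        <meta name="description" content={description} />
      </Helmet>
      <h1>{title}</h1>
    </div>
  );
}

export default ProductPage;
```

With server-side rendering, the same component can be rendered on the server (e.g. with ReactDOMServer.renderToString() plus Helmet.renderStatic()), so the description is already in the HTML Google downloads.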
Martijn.
-
Hi DigitalMarketingSEO,
This sounds like Google is having some issues with your React website.
There are plenty of good JavaScript SEO guides out there that I would recommend reading through:
https://www.elephate.com/blog/ultimate-guide-javascript-seo/
https://builtvisible.com/javascript-framework-seo/
https://www.briggsby.com/dealing-with-javascript-for-seo
How did the "Fetch and Render" look? Was Googlebot able to see your page exactly as a human user would?
Can you share the URL here (or PM me)? I've done a lot of work on JS sites and I'd be happy to take a quick look to see if I can give some more specific advice.
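In the meantime, a quick sanity check is to look at the raw HTML before any JavaScript runs, since that's roughly what Googlebot's first-pass fetch sees. A minimal sketch (assuming Node 18+ for the built-in fetch; the URL is a placeholder):

```js
// Sketch: is the meta description present in the raw, pre-JavaScript HTML?
// Assumes Node 18+ (built-in fetch); replace the URL with one of your pages.
const url = 'https://www.example.com/some-page';

async function checkRawHtml() {
  const res = await fetch(url, {
    headers: { 'User-Agent': 'Mozilla/5.0 (compatible; Googlebot/2.1)' },
  });
  const html = await res.text();
  if (/<meta[^>]+name=["']description["']/i.test(html)) {
    console.log('Meta description found in the raw HTML.');
  } else {
    console.log('No meta description in the raw HTML; it is likely injected client-side.');
  }
}

checkRawHtml().catch(console.error);
```

If the tag only appears after rendering, pre-rendering or server-side rendering is usually the fix.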
Cheers,
David