Should I worry about rendering problems with my pages in Google Search Console's Fetch as Google?
-
Some elements are not shown properly when I preview our pages in Search Console (Fetch as Google), e.g. Google Maps, CSS tables, etc., and some parts are not showing up because we load them asynchronously for better page speed. Is this something I should pay attention to and try to fix?
-
This is a great question, and you will likely get a few different opinions as well. I, for one, am a fan of using the Google text-only cache as my primary source for determining whether something is crawlable. However, I do tend to support my hypothesis with the Fetch and Render tool as well.
Screaming Frog recently launched an update to their toolset that includes rendering. Consider downloading it and seeing how it displays your page's content; it can be another good source for validating whether everything is crawlable.
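If you want to double-check outside those tools, here's a minimal sketch (assuming Python with the requests and Playwright packages; the URL and marker strings below are placeholders for your own page) that compares the raw HTML against the JavaScript-rendered DOM, which is roughly the difference between a plain crawl and Google's render pass:

```python
# Minimal sketch: compare raw HTML vs. the JavaScript-rendered DOM.
# Assumes: pip install requests playwright && playwright install chromium
# URL and MARKERS are placeholders for your own page and content.
import requests
from playwright.sync_api import sync_playwright

URL = "https://www.example.com/"                      # placeholder
MARKERS = ["Google Maps", "some async-loaded text"]   # placeholder strings

raw_html = requests.get(URL, timeout=30).text

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto(URL, wait_until="networkidle")  # let async requests settle
    rendered_html = page.content()
    browser.close()

for marker in MARKERS:
    in_raw = marker.lower() in raw_html.lower()
    in_rendered = marker.lower() in rendered_html.lower()
    # Present only after rendering => the content is injected by JavaScript
    print(f"{marker!r}: raw={in_raw}, rendered={in_rendered}")
```

Anything that shows up only in the rendered version is content a non-rendering crawler would miss.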
Nick
Related Questions
-
Thousands of 503 errors in GSC for pages not important to organic search - Is this a problem?
Hi, folks. A client of mine now has roughly 30,000 503 errors (found in the crawl errors section of GSC). These are mostly pages with limited offers and deals; the 503 error seems to occur when an offer expires and the page is of no use anymore. These pages are not important for organic search but get traffic from direct visits and newsletters, mostly. My question: does having a high number of 503 pages reported in GSC constitute a problem in terms of organic ranking for the domain and for the category and product pages (the pages that I want to rank for organically)? If it does, what is the best course of action to mitigate the problem? Looking excitedly forward to your answers to this 🙂 Sigurd
Intermediate & Advanced SEO | | Inevo
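To make the setup concrete, here is a minimal sketch (assuming a Flask app; the route, offer store, and expiry check are hypothetical placeholders) of offer pages that start returning 503 once a deal expires. Note that 503 means "temporarily unavailable", which is generally taken as a signal to retry later, hence the recurring GSC reports:

```python
# Minimal sketch (Flask assumed; offer data and expiry check are placeholders)
# of the setup described above: offer pages that return HTTP 503 once expired.
from datetime import datetime, timezone

from flask import Flask, abort

app = Flask(__name__)

# Hypothetical offer store: slug -> expiry timestamp
OFFERS = {
    "summer-sale": datetime(2025, 8, 31, tzinfo=timezone.utc),
}

@app.route("/offers/<slug>")
def offer(slug):
    expires = OFFERS.get(slug)
    if expires is None:
        abort(404)  # unknown offer
    if datetime.now(timezone.utc) > expires:
        # 503 = "service temporarily unavailable"; crawlers treat it as
        # transient and keep retrying, hence the recurring error reports.
        abort(503)
    return f"Offer page for {slug}"
```
-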
Targeting two search terms with same intent - one or more pages for SEO benefits?
I'd like some professional opinions on this topic. I'm looking after the SEO for my friend's site, and there are two main search terms we are looking to boost in search engines. The company sells billboard advertising space to businesses in the UK. Here are the two search terms we're looking to target: "billboard advertising" (880 searches per month) and "outdoor advertising" (720 searches per month). It would usually make sense to create a separate, fully optimised landing page to target the keyword "billboard advertising", with more information on the topic and a targeted URL such as www.website.com/billboard-advertising/, and to have the homepage target "outdoor advertising", since it's an outdoor advertising agency. But there's a problem: as both search terms are highly related and have the same intent, I'm worried that if we create a separate page to target billboard advertising, it will conflict with the homepage targeting outdoor advertising. Also, the main competitors currently ranked in positions 1-3 are ranking with their homepages, not with landing pages optimised for the exact search term "billboard advertising". Any advice on this?
Intermediate & Advanced SEO | | Jseddon92
-
Google indexing pages from Chrome history?
We have pages that are not linked from the site, yet they are indexed in Google. It would be possible if Google got these pages from the browser. Does Google take data from Chrome?
Intermediate & Advanced SEO | | vivekrathore
-
Search engine simulators are not finding text on my website. Do I have a problem with JavaScript or AJAX?
My website's text is not appearing in search engine simulators. Is there a problem with the JavaScript? Or perhaps AJAX is affecting it? Is there a tool I can use to examine how my website's architecture affects how the site is crawled? I am totally lost. Help!
Intermediate & Advanced SEO | | ecigseo
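As a first diagnostic for a case like this, here is a minimal sketch (assuming Python with requests; the URL and text snippet are placeholders) that fetches the page the way a non-JavaScript crawler would and checks whether the visible copy exists in the raw HTML:

```python
# Sketch: does your visible page copy exist in the raw HTML at all?
# URL and SNIPPET are placeholders; requests is the only dependency.
import requests

URL = "https://www.example.com/"            # placeholder
SNIPPET = "a sentence from your page copy"  # placeholder

headers = {
    "User-Agent": "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
}
html = requests.get(URL, headers=headers, timeout=30).text

if SNIPPET in html:
    print("Text is in the raw HTML; simulators should see it.")
else:
    print("Text is NOT in the raw HTML; it is likely injected by JavaScript/AJAX.")
```
-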
Meta NOINDEX... how long before Google drops dupe pages?
Hi, I have a lot of near-duplicate content caused by URL params, so I have applied a meta robots noindex tag to those pages. How long will it take for this to take effect? It's been over a week now; I have done some removal with the GWT removal tool, but still no major drop in indexed pages. Any ideas? Thanks, Ben
Intermediate & Advanced SEO | | bjs2010
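To rule out the tag simply not being served, here is a minimal sketch (assuming Python with requests; the URL is a placeholder, and the meta check is a rough string match rather than a real HTML parse) that looks for noindex in both the HTTP headers and the body:

```python
# Sketch: confirm a page actually serves noindex, via HTTP header or meta tag.
# URL is a placeholder; the meta check is a rough heuristic, not a real parse.
import requests

URL = "https://www.example.com/page?param=1"  # placeholder

resp = requests.get(URL, timeout=30)
header = resp.headers.get("X-Robots-Tag", "")
body = resp.text.lower()
has_meta = "noindex" in body and "robots" in body  # rough heuristic only

print("X-Robots-Tag header:", header or "(none)")
print("meta robots noindex present (heuristic):", has_meta)
```
-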
Google is ranking the wrong page for the targeted keyword
I have two examples below where we want a specific page to rank for the targeted keyword, but Google picked another page to rank instead. This is happening a lot on this site I recently started working on.
Example 1: Google's choice for the keyword "motorcycle tires": http://www.rockymountainatvmc.com/cl/50/Tires-and-Wheels
What we want Google to choose for "motorcycle tires": http://www.rockymountainatvmc.com/c/49/-/181/Motorcycle-Tires
Other pages about motorcycle tires: http://www.rockymountainatvmc.com/d/12/Motorcycle-Tires
We even used rel="canonical" on this URL to point to our target page: http://www.rockymountainatvmc.com/c/50/-/181/Motorcycle-Tires
Example 2: ATV tires. We want this page to rank: http://www.rockymountainatvmc.com/c/43/81/165/ATV-Tires
However, Google has decided to rank http://www.rockymountainatvmc.com/t/43/81/165/723/ATV-Tires-All, which is actually one folder under where we want it.
Intermediate & Advanced SEO | | DoRM
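To verify what the pages actually declare, here is a minimal sketch (assuming Python with requests; the regex is a rough check that expects rel before href, not a full HTML parse) that extracts the canonical URL a page serves:

```python
# Sketch: extract the canonical URL a page declares in its HTML.
# The regex is a rough check (expects rel before href), not a full HTML parse.
import re

import requests

URL = "http://www.rockymountainatvmc.com/c/50/-/181/Motorcycle-Tires"  # from the question

html = requests.get(URL, timeout=30).text
match = re.search(
    r'<link[^>]*rel=["\']canonical["\'][^>]*href=["\']([^"\']+)["\']',
    html,
    re.IGNORECASE,
)
print("Declared canonical:", match.group(1) if match else "(none found)")
```
-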
Google consolidating link juice on duplicate content pages
I've observed some strange findings on a website I am diagnosing, and they have led me to a theory that seems to fly in the face of a lot of thinking. My theory is:
When Google sees several duplicate-content pages on a website and decides to show just one version of the page, it at the same time aggregates the link juice pointing to all of the duplicate pages, and ranks the one duplicate page it decides to show as if all the link juice pointing to the duplicate versions were pointing to that one version. E.g.:
Link X -> Duplicate Page A
Link Y -> Duplicate Page B
Google decides Duplicate Page A is the most important and applies the following formula to decide its rank: Link X + Link Y (minus some dampening factor) -> Page A.
I came up with this idea after I seem to have reverse-engineered it: the website I was trying to sort out for a client had this duplicate-content issue, so we decided to put unique content on Page A and Page B (not just one pair of pages like this, but many). Bizarrely, after about a week, all the Page A's dropped in rankings, indicating that the old consolidated link value may have been correctly re-associated with the two separate pages, so that Page A now only gets Link Value X. Has anyone got any tests or analysis to support or refute this?
Intermediate & Advanced SEO | | James77
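Stated compactly, the hypothesis might be written as follows (a sketch only; the link weights w(·) and dampening factor d are notation introduced here, not anything Google has published):

```latex
% Hypothesis: while duplicates A and B are collapsed, the displayed
% page A is scored as if it received the links pointing at both:
\[ \mathrm{score}(A) \approx d \,\bigl( w(X) + w(Y) \bigr), \qquad 0 < d < 1 \]
% After unique content splits A and B apart again, A keeps only its own link:
\[ \mathrm{score}(A) \approx d \, w(X) \]
```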