Moz Q&A is closed.
After more than 13 years, and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content - many posts will still be viewable - we have locked both new posts and new replies.
How do you check the Google cache for hashbang pages?
-
So we use http://webcache.googleusercontent.com/search?q=cache:x.com/#!/hashbangpage to check what Googlebot has cached, but when we try this method for hashbang pages, we get x.com's cache... not x.com/#!/hashbangpage.
That actually makes sense, because the hashbang is part of the homepage in that case, so I get why the cache returns the homepage.
My question is - how can you actually look up the cache for a hashbang page?
-
I was actually trying to give you the tools to figure out what's cached and indexed. You can just run a site search for the content and look at the cache, though. If nothing shows up, it's probably not indexed.
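For instance, a query along these lines (an illustrative pattern, not a URL from this thread):

site:x.com "an exact sentence copied from the hashbang page"

If a result comes back, its cached copy shows what Google stored for that URL.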
-
Thanks, Carson, but that wasn't the question.
The question was how to check the cache.
-
Generally I'd avoid hashes or hashbangs if you have large amounts of content you want indexed. Use pushState instead whenever it makes sense for the user to actually change the URL.
The general rule is that if you can see the content in your page source (the ctrl+u version), it's probably being indexed. That means client-side AJAX content behind hashbangs is generally not indexed, whereas server-side rendered content generally is.
If for some reason you must use hashbangs, AND you must use client-rendered content, create an HTML snapshot of your page for Google. Generally, though, that's more effort than changing one of the above.
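As a minimal sketch of the pushState approach (the renderStage function and the #app container here are hypothetical placeholders, not anyone's production code), the idea is to swap #! navigation for real, crawlable URLs:

```typescript
// Minimal sketch of pushState navigation in place of hashbang routing.
// "renderStage" and the #app container are hypothetical placeholders.
function navigateTo(path: string): void {
  history.pushState({ path }, "", path); // real, crawlable URL - no #!
  renderStage(path);
}

// Keep the UI in sync with the URL when the user hits back/forward
window.addEventListener("popstate", (event: PopStateEvent) => {
  const state = event.state as { path?: string } | null;
  renderStage(state?.path ?? window.location.pathname);
});

function renderStage(path: string): void {
  // Client-side render for the route; a real app would fetch and render here
  document.querySelector("#app")!.textContent = `Rendered ${path}`;
}
```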
-
I think Google has stopped responding to cache requests on hashbang pages altogether.
See here... I'm just playing with random URLs and don't see Google's cache 404'ing as it should: http://recordit.co/XBlo3U2A73
You can really put anything there; it won't work.
-
Searching for indexed and duplicate content: I put a line or two in quotes and Googled it. I found most of the UTM versions that way. Once you do that, it's a simple switch to site:yoursite.com inurl:UTM
-
Thanks a lot, Matt.
I'm curious... how exactly did you find the version with the UTM codes that's being cached?
-
Strangely, browseo sees it correctly: http://www.browseo.net/?url=https%3A%2F%2Fplaceit.net%2F%3F_escaped_fragment_%3D%2Fstages%2Fsamsung-galaxy-note-friends-park
I'm not 100% sure why this is happening on your site specifically. Normally the #! isn't too big of an issue for the cache, but I've seen it have a few hiccups. These pages seem to be indexed fine, but they aren't generating a cache.
I did find a few working, but only those with UTM codes:
This one doesn't look like it's working, but view the source code - the content is actually there. I found it by Googling the content in quotation marks.
-
What you're saying makes sense, and our URLs are set up like this, but we still just see the homepage come up when looking up the Google cache with the _escaped_fragment_ version:
http://webcache.googleusercontent.com/search?q=cache:https://placeit.net/?_escaped_fragment_=/stages/samsung-galaxy-note-friends-park
https://placeit.net/?_escaped_fragment_=/stages/samsung-galaxy-note-friends-park
homepage - http://webcache.googleusercontent.com/search?q=cache:https://placeit.net/?_escaped_fragment_=
-
Let's use a Wix site (not a client, just a sample from their page) as my example. Say you wanted to check:
http://www.kingskolacheny.com/#!press/crr2
In the source code I see the escaped fragment URL. This is the one you can find a cache for:
http://www.kingskolacheny.com/?_escaped_fragment_=press/crr2
That leads me to: http://webcache.googleusercontent.com/search?q=cache:http://www.kingskolacheny.com/?_escaped_fragment_=press/crr2
If your #! URLs are not set up this way, you will struggle to see it. One-page websites are ... one page. But if you have _escaped_fragment_ URLs set up, you should be able to submit those and go from there.
The easiest way I know to find these is Screaming Frog, Ajax tab, Ugly URL field - try that one.
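If you'd rather build these by hand, here's a minimal sketch of the #! to _escaped_fragment_ mapping from Google's (now retired) AJAX crawling scheme. The function names are mine, not from this thread, and the encoding is simplified to match the URLs above:

```typescript
// Sketch: map a #! URL to its _escaped_fragment_ equivalent, then build
// the webcache lookup URL for it. Function names are illustrative.
function toEscapedFragmentUrl(hashbangUrl: string): string {
  const [base, fragment = ""] = hashbangUrl.split("#!");
  const separator = base.includes("?") ? "&" : "?";
  // The scheme percent-encodes reserved characters in the fragment; the URLs
  // in this thread keep slashes readable, so only %, #, & and + are escaped.
  const escaped = fragment.replace(/[%#&+]/g, (c) => encodeURIComponent(c));
  return `${base}${separator}_escaped_fragment_=${escaped}`;
}

function toCacheLookupUrl(url: string): string {
  return `http://webcache.googleusercontent.com/search?q=cache:${toEscapedFragmentUrl(url)}`;
}

// The Wix example from this post:
console.log(toCacheLookupUrl("http://www.kingskolacheny.com/#!press/crr2"));
// -> .../search?q=cache:http://www.kingskolacheny.com/?_escaped_fragment_=press/crr2
```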