Ajax #! URL support?
-
Hi Moz,
My site is currently following the convention outlined here:
https://support.google.com/webmasters/answer/174992?hl=en
Basically, since pages are generated via Ajax, we are set up to direct bots that replace the #! in a URL with ?_escaped_fragment_= to cached versions of the Ajax-generated content.
For example, instead of the #! URL it sees on the page, the bot will access this page:
http://www.discoverymap.com/?_escaped_fragment_=/California/Map-of-Carmel/73
In which case my server serves the cached HTML instead of the live page. This is all per Google's direction and is indexing fine.
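As a rough illustration of the server-side half: when the crawler requests the ?_escaped_fragment_= form of a URL, the server returns a cached HTML snapshot instead of the live AJAX page. This is only a hedged sketch; the `resolveRequest` function name and the `snapshots/` directory layout are hypothetical, not part of any real setup described here.

```javascript
// Sketch: route a request either to a cached HTML snapshot (for crawlers
// using the escaped-fragment form) or to the live AJAX page (for visitors).
// The function name and "snapshots/" layout are illustrative assumptions.
function resolveRequest(requestUrl) {
  // Base URL is only needed so relative paths parse; any origin works here.
  const u = new URL(requestUrl, 'http://www.discoverymap.com');
  const fragment = u.searchParams.get('_escaped_fragment_');
  if (fragment !== null) {
    // e.g. "/California/Map-of-Carmel/73" -> "snapshots/California/Map-of-Carmel/73.html"
    return { snapshot: 'snapshots' + fragment + '.html' };
  }
  return { live: requestUrl }; // normal visitors get the live AJAX page
}
```

The point is that the crawler never needs to execute JavaScript: the presence of the `_escaped_fragment_` parameter alone tells the server to hand back pre-rendered HTML.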
However, the Moz bot does not do this. It seems like a fairly straightforward feature to support: rather than ignoring the hash, check whether it is a #!, and then spider the URL with the hashbang replaced by ?_escaped_fragment_=. Our server does the rest.
If this is something Moz plans to support in the future, I would love to know. Any other information would also be great.
Also, pushState is not practical for everyone due to limited browser support, etc.
Thanks,
Dustin
Updates:
I am editing my question because the site won't let me respond to my own question. It says I need to sign up for Moz Analytics, but I was signed up for Moz Analytics?! Now I am not? I responded to my invitation weeks ago.
Anyway, you are misunderstanding how this process works. There is no sitemap involved. The bot reads the #! URL on the page, and when it is ready to spider the page for content, it spiders this URL instead:
http://www.discoverymap.com/?_escaped_fragment_=/California/Map-of-Carmel/73
The server does the rest; it is simply a matter of telling Roger to recognize the #! format and replace it with ?_escaped_fragment_=. I obviously do not know how Roger is coded, but it is a simple string replacement.
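The replacement described above can be sketched as follows, assuming Google's escaped-fragment scheme. The function name is illustrative, and the small set of percent-encoded characters follows Google's AJAX crawling specification; this is a sketch of the idea, not anyone's actual crawler code.

```javascript
// Sketch of the string replacement: turn a #! ("hashbang") URL into its
// _escaped_fragment_ equivalent, per Google's AJAX crawling scheme.
function toEscapedFragmentUrl(url) {
  const idx = url.indexOf('#!');
  if (idx === -1) return url; // no hashbang: crawl the URL as-is
  const base = url.slice(0, idx);
  // Google's scheme percent-encodes a few reserved characters (% # & + space)
  // in the fragment; other characters, including "/", pass through unchanged.
  const fragment = url.slice(idx + 2).replace(/[%#&+ ]/g,
    c => '%' + c.charCodeAt(0).toString(16).toUpperCase());
  // Append with "&" if the base URL already carries a query string.
  const sep = base.includes('?') ? '&' : '?';
  return base + sep + '_escaped_fragment_=' + fragment;
}
```

Applied to the example above, `toEscapedFragmentUrl('http://www.discoverymap.com/#!/California/Map-of-Carmel/73')` yields the `?_escaped_fragment_=` URL the server already knows how to answer.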
Thanks.
-
Hello Dustin, this is Abe on the Moz Help team.
This question is a bit intricate, so I apologize if I am not reading it correctly.
With AJAX content like this, I know Google's full specification
https://developers.google.com/webmasters/ajax-crawling/docs/specification
indicates that the #! and ?_escaped_fragment_= technique works for their crawlers. However, Roger is a bit picky and isn't robust enough yet to use only the sitemap as the reference in this case. Luckily, one of our wonderful users came up with a solution using the pushState() method. Click here:
http://www.moz.com/blog/create-crawlable-link-friendly-ajax-websites-using-pushstate
to find out how to create crawlable content using pushState. This should help our crawler read AJAX content. Let me know if this information works for you!
I hope this helps