JavaScript to fetch the page title for every webpage: is it a good idea?
-
We have a Zend Framework site that is complex to program, if you ask me. We have 20k+ pages that need proper titles and meta descriptions, so I need to ask whether we should use JavaScript to handle page titles (basically, the previous programming team had NOT set page titles at all). I need to get proper page titles from an h1 tag within each page.
The current course of action, which we can easily implement, is to fetch the page title from that h1 tag (used throughout all pages) with the help of JavaScript. But doesn't this make it difficult for search engines to actually read the page title, since it's being set by JavaScript code that we've added? I have my doubts. Has any one of you been in a similar situation before? If yes, I need some help!
Update: I tried the JavaScript way and here is what it looks like: http://islamicencyclopedia.org/public/index/hadith/id/1/book_id/106. I know Google won't read JavaScript the way we have used it on the website, but I need help with how we can work around this issue, knowing we don't have other options.
-
You're welcome. Interesting question. My answer is that if the HTML title is set with client-side JavaScript, it has little chance of being picked up as the title by crawlers or Google. Let's say we alter the value of the title node like this:
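A minimal sketch of that kind of client-side rewrite (the h1 selector and the DOMContentLoaded timing are assumptions for illustration, not necessarily what the original snippet used):

```javascript
// Wait until the DOM is parsed so the <h1> exists, then copy its text
// into the document title, overwriting whatever <title> the server sent.
document.addEventListener('DOMContentLoaded', function () {
  var heading = document.querySelector('h1'); // assumes the first h1 holds the intended title
  if (heading) {
    document.title = heading.textContent.trim();
  }
});
```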
In this case the value is altered only after the hard-coded HTML title has been sent to the browser. A crawler would need to load the document in full and read the title value only after fully rendering the page as if it were a human user. This is not likely.
Then we could also try a document.write to construct the title tag in the HTML head as a string for the browser to use as the title, like this:
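Something along these lines, assuming the script sits inside the head (the variable name and its value are placeholders):

```javascript
// Executed while the page is still being parsed: document.write injects
// a <title> element built from a string at this point in the markup.
var pageTitle = 'Title built at parse time'; // placeholder value
document.write('<title>' + pageTitle + '</title>');
```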
This will not work either, as the title text in the source is not actually altered after the script line is evaluated. It fails not because the title is never set, but because the evaluated string is never printed into the page source; the source code a crawler downloads still shows the original markup and the raw script.
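A rough illustration of what that downloaded source might look like (dummy markup, not copied from the actual test page):

```html
<head>
  <!-- the hard-coded title the server sent is all a source-reading crawler sees -->
  <title>Hard-coded title from the server</title>
  <script>
    // the raw script text appears in the source; its evaluated result does not
    document.title = 'Title built by JavaScript';
  </script>
</head>
```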
As you can see, the script does not print the result of the evaluation into the source; it only sets the value of the document object model's title node to whatever the expression evaluates to.
Try it for yourself with this dummy page I made just to be certain.
http://www.googlewiki.nl/test/seojavascripttest2.html
And this is the DOM info for that page: http://www.googlewiki.nl/seo-checker/testanchor.php?url=http://www.googlewiki.nl/test/seojavascripttest2.html&anchor=test
Or am I missing something here?
Hope this helps.
-
Google can read JavaScript, but only certain types and implementations. Is there a way you can make this swap happen in the database or on the server side? That might be the best way to get the live text readable, as most likely the JavaScript is being rendered and displayed after the initial crawl of the page. Even if it is a millisecond later, Google might not allow/catch it.
-
Any JavaScript effort like this is invalid for SEO. Google doesn't read it.
You can try to do it in PHP; it's not complex. Do a search and replace so the meta description becomes $var and the title becomes <title>$var2</title> (before </head>). Then you can set the meta description and page title with a variable in your code, and that effort effectively has SEO value, because when the search engine fetches the page it gets a title and description. Maybe this is more work than the JS approach, but it's also better for SEO and for the web itself (the JS run takes time on the client side).
-
Thank you, Daniel, for the input. Since the code is all messed up and I can't convince the board to redo the site from scratch, I'll have to go with small tricks to get the title tags and descriptions set. With the JavaScript I just tried, it worked: it now fetches all the titles and displays them in the browser title bar, without any significant change to the actual code except for the added JavaScript.
But I did a test run with the rich snippet testing tool to see what Google pulls in as a preview for search results, and it didn't show anything: no title and no description, alas! So I guess that means using JavaScript to fetch the title and description won't help? I'm still not sure.
So now the real question I have in mind: will this JavaScript technique we just used be of any good SEO-wise, or have any value?
-
Hi... I would not prefer a client-side approach to this. Whether it's readable depends on the script itself. Although some JS fans will say this is alright, I would prefer to do this server side, with PHP or similar, and make a template that does this rewrite. It's not too hard. Or why not a batch run to modify all pages once and hardcode the correct title in each page? I have some scripts that can do this for you if you would like.
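As one possible sketch of such a batch run (my own assumption: Node.js over a folder of static HTML files; a Zend-based site would more likely do the same thing in its server-side layout or templates):

```javascript
// One-off batch sketch: copy each page's first <h1> into its <title>.
// Assumes the pages exist as static .html files under PAGES_DIR.
const fs = require('fs');
const path = require('path');

const PAGES_DIR = './pages'; // hypothetical directory of exported HTML files

for (const file of fs.readdirSync(PAGES_DIR)) {
  if (!file.endsWith('.html')) continue;
  const fullPath = path.join(PAGES_DIR, file);
  let html = fs.readFileSync(fullPath, 'utf8');

  // Grab the text of the first <h1>, stripping any inner tags.
  const h1Match = html.match(/<h1[^>]*>([\s\S]*?)<\/h1>/i);
  if (!h1Match) continue;
  const title = h1Match[1].replace(/<[^>]+>/g, '').trim();

  if (/<title[^>]*>[\s\S]*?<\/title>/i.test(html)) {
    // Replace the existing <title> with the h1 text.
    html = html.replace(/<title[^>]*>[\s\S]*?<\/title>/i, function () {
      return '<title>' + title + '</title>';
    });
  } else {
    // No <title> present: insert one right after the opening <head> tag.
    html = html.replace(/<head([^>]*)>/i, function (match, attrs) {
      return '<head' + attrs + '>\n  <title>' + title + '</title>';
    });
  }

  fs.writeFileSync(fullPath, html, 'utf8');
}
```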
Hope this helps.