Latest posts made by DanielMulderNL
-
RE: Javascript to fetch page title for every webpage, is it good?
You're welcome. Interesting question. My answer is that if the HTML title is set with client-side JavaScript, it has little chance of being picked up as the title by crawlers or Google. Let's say we alter the value of the title node like this:
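A minimal sketch of such a client-side rewrite (the titles here are made up for illustration, and the `document` stub only stands in for the browser DOM so the snippet runs anywhere; in a real page the browser provides `document`):

```javascript
// Stub standing in for the browser DOM so this snippet is self-contained;
// in a real page the browser provides `document`.
const document = { title: "Hard-coded fallback title" };

// Client-side rewrite: this runs only after the hard-coded title
// has already been sent to the browser.
document.title = "Keyword-rich title set by JavaScript";

console.log(document.title); // "Keyword-rich title set by JavaScript"
```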
In this case it alters the value only after the hard-coded HTML title was sent to the browser. The crawler would need to load the document in full and read the HTML title value only after fully rendering it, as if it were a human user. This is not likely.
Then we could also try a document.write to construct the title tag in the HTML head as a string for the browser to use as the title, like this:
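A sketch of the document.write variant (the page name and titles are made up; the stub collects the written markup into a string so the snippet is runnable outside a browser, whereas in a real page `document.write` emits into the document during parsing):

```javascript
// Stub: collect written markup into a string so this runs outside a browser.
let output = "";
const document = { write: (markup) => { output += markup; } };

// Attempt to construct the <title> tag as a string at parse time:
const pageName = "Example Page";
document.write("<title>" + pageName + " | Example Site</title>");

console.log(output); // "<title>Example Page | Example Site</title>"
```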
This will not work either, and not because the title is never set, but because the evaluated string is never actually printed into the page source. To any crawler reading the raw HTML, the source code for the title still looks like this:
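Something like the following, a hypothetical reconstruction rather than the literal markup of the test page: the script itself sits in the source, while the evaluated title string never does.

```html
<head>
  <script>
    document.title = "Keyword-rich title set by JavaScript";
  </script>
  <title>Hard-coded fallback title</title>
</head>
```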
As you can see, the script does not print the result string of the evaluation into the source sent to the browser; it only sets the value of the document object model's title node to whatever the expression evaluates to.
Try it for yourself with this dummy page I made just to be certain.
http://www.googlewiki.nl/test/seojavascripttest2.html And here is the DOM info for that page: http://www.googlewiki.nl/seo-checker/testanchor.php?url=http://www.googlewiki.nl/test/seojavascripttest2.html&anchor=test
Or am I missing something here?
Hope this helps.
posted in Intermediate & Advanced SEO
-
RE: Javascript to fetch page title for every webpage, is it good?
Hi... I would not prefer a client-side approach to this. Whether it's readable depends on the script itself. Although some JS fans will say this is alright, I would prefer to do this server-side with PHP, or similar, and make a template that does this rewrite. It's not too hard. Or why not a batch run to modify all pages once and hard-code the correct title in each page? I have some scripts that can do this for you if you would like.
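The batch idea can be sketched as follows. This is an illustrative sketch, not one of the scripts mentioned above; the function name and titles are made up, and a real run would loop over the HTML files on disk and write each rewritten page back:

```javascript
// Sketch of the batch approach: rewrite the hard-coded <title> of a page
// server-side, so crawlers see the final title in the raw source.
function rewriteTitle(html, newTitle) {
  // Replace the existing <title>...</title> with the new hard-coded one.
  return html.replace(/<title>[\s\S]*?<\/title>/i, "<title>" + newTitle + "</title>");
}

const page = "<html><head><title>Old title</title></head><body></body></html>";
const updated = rewriteTitle(page, "New hard-coded title");
console.log(updated.includes("<title>New hard-coded title</title>")); // true
```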
Hope this helps.
posted in Intermediate & Advanced SEO
-
RE: Pointless Wordpress Tagging: Keep or unindex?
That's skewed any way you look at it. But still: put them all in a secondary sitemap.html so they are not orphans, remove them from sitemap.xml, place a noindex in the HTML head, and still try to consolidate where possible.
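The noindex in the HTML head is a single meta tag; adding `follow` keeps crawlers following the page's links even though the page itself stays out of the index, which fits keeping these pages as non-orphans:

```html
<head>
  <meta name="robots" content="noindex, follow">
</head>
```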
In general we do not want to get rid of pages that are not a problem, as they can receive organic traffic for non-targeted keywords that we have no other real way of discovering. The web is an organic momentum flux, not a solid-state structure. It needs some degree of unintended, uncalculated behavior, also in website structure. Otherwise the sum of the parts of all pages in Google would equal the value of Google, which is not the case. Google connects dots; we interpret and find new meaning and relations, translating into traffic we did not expect.
The momentum flux is a joke, of course. It's a quantum state, of course.
posted in White Hat / Black Hat SEO
-
RE: Pointless Wordpress Tagging: Keep or unindex?
No, do not use a 301, and certainly do not remove any pages from the index as mentioned here. That's foolish, uncalled for, and potentially harmful, against zero to no risk if you let them be and only make them less prominent to users of the site. And if you really feel you need to cut drastically into the number of tag pages, then use rel=canonical instead of a 301!
Consolidate, don't decimate! When we 301, we assimilate pages into one page. We say the old page is gone for good and the new one is the page for the old link. This diffuses the keywords the old page was found for, as it melts all the different pages that 301 to a target into one. However, when we use a canonical URL we consolidate the pages into one new page that bundles the old ones. A page with a canonical to another page still ranks next to the new page for a while; only its title in search is the same as the page the canonical refers to. With a 301 it disappears completely from the index, and the Google cache along with it, including the internal keyword binding it had before. So use canonical, not 301! And my advice: consolidate to one useful tag page with a real body of work, optimize it for a primary keyword like 'seo news' or something, and leave the old pages be, but don't link to them anymore from then on.
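The canonical is one link tag in the head of each thin tag page, pointing at the consolidated tag page (the URLs here are made up for illustration):

```html
<!-- On each thin tag page, in the <head>: -->
<head>
  <link rel="canonical" href="https://example.com/tag/seo-news/">
</head>
```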
Hope this is helpful.
Gr Daniel
posted in White Hat / Black Hat SEO
Trained in business administration, autodidact programmer since the Commodore 64; worked in banking, recruitment, and sourcing in several commercial management positions before starting my own company. I am an independent blogger, working as an online marketing consultant and supplier of strategic content and SEO solutions. Co-founder of the optimization tool www.rankwise.net, which helps SEOs optimize the SEO process. Author of a Google+ marketing blog (Dutch).