Questions created by netzkern_AG
HREFLANG: language and geography without general language
Some developers always implement the hreflang for German (which should be "de") as "de-de", i.e. language German, country Germany. There is usually no other German version targeting the other German-speaking countries (mostly ch, at). So obviously the recommendation is to make it "de" and be done with it. But I kept wondering and found nothing on this: if there is only a more specialised hreflang and no generic one, will Google fall back to it? Example: the search happens in de-at (or de-ch), and the search result has the following hreflang versions: de-de; x-default (== en); en. Will Google serve the result for x-default or for de-de?
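For illustration, the annotation set from the example might look like this (URLs are placeholders); the last line shows the generic alternative that is usually recommended instead of "de-de":

```html
<!-- The set described in the question (hypothetical URLs) -->
<link rel="alternate" hreflang="de-de" href="https://www.example.com/de/" />
<link rel="alternate" hreflang="en" href="https://www.example.com/en/" />
<link rel="alternate" hreflang="x-default" href="https://www.example.com/en/" />

<!-- The usual recommendation: a generic "de" that covers de-at and de-ch as well -->
<link rel="alternate" hreflang="de" href="https://www.example.com/de/" />
```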
Technical SEO | netzkern_AG
Removing old content
Ahoy! Variously I have heard the opinion that content which does not generate regular search traffic (let's ballpark that at fewer than 10 views in any given month) should be noindexed or even removed. Allegedly this improves the overall quality of the site, its rankings and its traffic. I remain doubtful. What would you do if interest in most of your topics naturally goes down over time and is replaced by specific interest in newer ones? Concrete example: I have a website of book reviews. Naturally, there will always be new books; old books are not in the media as much and get "forgotten". Nevertheless, the reviews (all unique, based on really having read the books, no trace of the standard back-cover copy) are obviously still there. Personally I feel that they do not really lose any value - they are still reviews of that one book, even if it is no longer the most recent one. So, what would you do: Deindex "older" book reviews after a certain time? Even remove them completely? Just let them run? I am looking forward to your opinions - and to your experience if you have actually done something like this! Nico
Content Development | netzkern_AG
Combining variants of "last modified", cache-duration etc
Hiya, as you know, you can state the date of the last change of a document in various places, for example in the sitemap (<lastmod>) or in the HTTP headers (Last-Modified, ETag), and you can also announce an "expected" change, for example a cache duration via header/.htaccess (or the <changefreq> in the sitemap). Is it advisable, or rather detrimental, to use multiple variants that essentially tell browsers/search engines the same thing? I.e. should I send a Last-Modified header AND an ETag AND maybe something else? Should I send a cache duration at all if I send a Last-Modified? (Assume that I can keep them correct and consistent, as the data for each will come from the very same place.) Also: are there any clear recommendations on which change-indicating method should be used? Thanks for your answers! Nico
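For context, a minimal sketch of how these signals are commonly configured on Apache - assuming mod_expires is available and that Apache derives Last-Modified from the file system for static files; durations and content types are placeholders:

```apache
# ETag derived from modification time and size only
# (avoids the inode component, which can differ between servers)
FileETag MTime Size

# Cache durations per content type via mod_expires
# (sends both Expires and Cache-Control: max-age headers)
<IfModule mod_expires.c>
  ExpiresActive On
  ExpiresByType text/html "access plus 1 hour"
  ExpiresByType image/png "access plus 1 month"
</IfModule>
```

The sitemap side of the question is simply the optional <lastmod> and <changefreq> elements per <url> entry.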
Technical SEO | netzkern_AG
Google Webmaster Guideline Change: Human-Readable list of links
In the revised Webmaster Guidelines, Google says: "[...] Provide a sitemap file with links that point to the important pages on your site. Also provide a page with a human-readable list of links to these pages (sometimes called a site index or site map page)." (Source: https://support.google.com/webmasters/answer/35769?hl=en) I guess what they mean is something like this: http://www.ziolko.de/sitemap.html Still, I wonder why they say that. Just to ensure that every page on a site is linked and consequently findable by humans (and crawlers - but isn't the XML sitemap meant for those, and doesn't it give even better information)? Shouldn't good navigation already lead to every page? What is the benefit of a link-list page, assuming you have an XML sitemap? For a big site, a link list is bound to look somewhat cluttered, and its usefulness is outclassed by good navigation - which I assume as a given. Or isn't it? TL;DR: Can anybody tell me what exactly the benefit of a human-readable list of all links is? Regards, Nico
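For reference, the kind of page Google seems to mean is nothing fancier than a plain linked list of the important pages - a hypothetical sketch:

```html
<h1>Site index</h1>
<ul>
  <li><a href="/reviews/">Book reviews</a>
    <ul>
      <li><a href="/reviews/fiction/">Fiction</a></li>
      <li><a href="/reviews/non-fiction/">Non-fiction</a></li>
    </ul>
  </li>
  <li><a href="/about/">About</a></li>
  <li><a href="/contact/">Contact</a></li>
</ul>
```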
On-Page Optimization | netzkern_AG
Good CTAs for Meta-Descriptions: Direct, Indirect, Narrow, Broad?
It is no secret that good meta descriptions should be written to entice the searcher to click on the result without misleading them. Time and again I read that including "strong" CTAs (calls to action) has measurable effects. What constitutes a call to action is taken really narrowly by some (i.e. "Click here to learn more!" - a very specific action, spelled out) and rather broadly by others ("... Offer available till December 31" - only implicit; the action [buying/securing] is not even mentioned). I now wonder: many guides still recommend rather blunt calls like "Click here", "Read more", "Discover how". Personally I find those really unattractive and often a waste of space. However, I am not the benchmark, and I perhaps favour the informational side a little too strongly. Do those direct but generic CTAs really work well in every case,* or should one be more elaborate/indirect? (*Yes, of course it is "test, test, test" and to some degree each case is different; I am looking for general patterns, though. 🙂) I am looking forward to hearing your experience/opinions! Nico
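To make the two poles concrete, here are two hypothetical description variants for the same page - the first with a direct, spelled-out CTA, the second with a purely implicit one:

```html
<!-- Direct CTA, action spelled out -->
<meta name="description"
      content="We compare the 10 best sweet shops in Dallas. Click here to read the full comparison!">

<!-- Indirect CTA, informational with implicit urgency -->
<meta name="description"
      content="The 10 best sweet shops in Dallas, compared by price, selection and service. Free samples available till December 31.">
```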
On-Page Optimization | netzkern_AG
Does Google index internal anchors as separate pages?
Hi, back in September I added a function that sets an anchor on each subheading (h2-h6) and creates a table of contents that links to each of those anchors. These anchors did show up in the SERPs as jump-to links. Fine. Back then I also changed the canonicals to a slightly different structure, and meanwhile there was a massive increase in the number of indexed pages - WAY over the top - which has since been fixed by removing (410) a complete section of the site. However... there are still ~34,000 pages indexed where there really are more like 4,000 (all properly canonicalised). Naturally I am wondering what Google thinks it is indexing. The number is just way off and quite inexplicable. So I was wondering: does Google save jump-to links as unique pages? Also, does anybody know a method of actually getting a list of all the pages in the Google index? (Not the actually existing pages, via Screaming Frog etc., but the actual pages in the index - all methods I found sadly do not work.) Finally: does somebody have any other explanation for the incongruity between indexed and actual pages? Thanks for your replies! Nico
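For reference, the pattern described above boils down to markup like this (a sketch with placeholder names). As far as documented, Google treats the fragment part (#...) as belonging to the same URL rather than as a page of its own, so jump-to links by themselves should not inflate the index count:

```html
<!-- Table of contents linking to in-page anchors -->
<ul id="toc">
  <li><a href="#first-heading">First heading</a></li>
  <li><a href="#second-heading">Second heading</a></li>
</ul>

<!-- Anchors set on the subheadings (h2-h6) -->
<h2 id="first-heading">First heading</h2>
<p>…</p>
<h2 id="second-heading">Second heading</h2>
```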
Technical SEO | netzkern_AG
No Appearance in Local pack - group practice favored
Hi, one client has a website, Google My Business entry etc. of his own. He ranks OK to good locally for his search terms. However, his entry simply won't show up in the local n-pack (where it objectively should) and also does not appear on the map. It seems to me that instead a group practice is shown, shared with a colleague, that has both their names in its name/title. (Moreover, it is in the same spot - they decided to go with separate websites and entries of their own, though.) For some reason, this group practice is also connected to the ranking website of our client. I suppose (NAP problems and previously used phone-tracking numbers aside) that this group-practice entry essentially blocks the real client entry from appearing. Has anybody had similar experiences? (My provisional to-do would be: disconnect the group practice from the client's website; erase/merge it if possible; do proper local SEO otherwise.) Regards, Nico
Local Listings | netzkern_AG
Publication Date, Modification Date - (Proper) Usage and Effect
Hi, first of all: I think that apart from QDF ("query deserves freshness") results, the effects of this are rather small and trumped by things such as the actual content and the value a page offers. Nevertheless I got to wondering how the publication date and the modification date are used... effectively and correctly. Fact: Google displays the publication date in SERPs (if it is given via schema, through the CMS, or in any other form). This also applies if you have a date of last modification, for example via schema.org/dateModified - regardless of the extent of the changes.
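For reference, the two dates discussed here are typically given via markup along these lines (a sketch with placeholder values):

```html
<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "Article",
  "headline": "Review: A Hypothetical Book",
  "datePublished": "2008-05-14",
  "dateModified": "2015-11-02"
}
</script>
```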
Google only considers the publication date and also uses it as an indicator of "freshness". There are quite a few articles on that out there, e.g. http://www.kevinmuldoon.com/change-date-article-boost-seo/ and http://www.viperchill.com/new-seo/

Q1: In my opinion, faking the publication date is at the very least a darkish grey area, which nonetheless still seems to work. Would you agree?

Q2: Would you see it as legitimate to (at some point, after thoroughly reworking a page) update the publication date to the date of republication? Case in point: I have a page with book reviews. These reviews do not really go stale - much like recipes: tastes may change a bit, but essentially they stay the same. I find it somewhat irking to see a 10-year-old date there - even if I have restructured and rewritten, maybe even completely redone, a review...

But apart from the question of whether to ever "update" your publication date, I started pondering when it is proper to change the modification date (especially as it seems to have little effect apart from serving as the date of last change in headers, caches etc.). For example, content changes when:

- you manually change the text
- a visitor leaves a comment
- a visitor gives a book/article/page a rating
- a visitor gives a book a rating and this rating is part of another entity's aggregate rating

Q: Which of these events would warrant an update of the last-modification date? Ratings and aggregate ratings typically only change single numbers (vote counts and sums/averages); yet there is [legitimate] change, and it is utilised in SERPs (review stars). I am still hesitant. My own answers would be: changing the publication date might be valid in case of a MAJOR overhaul with new or lots of extra content - when, for example, you could publish the same article again in another issue of the print magazine it first appeared in; and all of the events listed above warrant an update of the last modification, at least as it is currently used, i.e. only to show when a change with any real influence has happened. Personally, I'd wish for lastModified to carry more weight compared to pubDate, AND especially for more Google-side checks on whether actual change has happened (to be ignored for small things like legitimately rephrasing a sentence or correcting a typo; to be penalised if the date changes although nothing really changed; to be honoured when real change happens). Looking forward to your opinions on dating content - and of course to your hints on what I am forgetting 🙂 Nico
On-Page Optimization | netzkern_AG
Schema.org markup for breadcrumbs: does it finally work?
Hi, TL;DR: does https://schema.org/BreadcrumbList work now? It's been some time since I last implemented schema.org markup for breadcrumbs. Back then, Google explicitly discouraged the use of the schema.org markup for breadcrumbs. In my experience it was pretty hit or miss - sometimes it worked without issues; sometimes it did not, without obvious reason. Consequently, I ditched it for the data-vocabulary.org markup, which did not give me any issues. However, I prefer using schema.org, and a new site is currently being designed for a client. Thus, I'd like to use schema.org markup for the breadcrumbs - but of course only if it works now. Google has dropped the previous warning/discouragement and now shows schema.org example code (https://developers.google.com/structured-data/breadcrumbs) based on the new-ish https://schema.org/BreadcrumbList. Has anybody here used this markup on a site (preferably more than one) and can confirm whether or not it reliably shows the breadcrumb trail / site hierarchy in the SERPs? Thanks for your answers! Nico
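For reference, a minimal example following the structure shown on the Google page linked above (URLs and names are placeholders):

```html
<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "BreadcrumbList",
  "itemListElement": [
    {
      "@type": "ListItem",
      "position": 1,
      "item": { "@id": "https://example.com/books/", "name": "Books" }
    },
    {
      "@type": "ListItem",
      "position": 2,
      "item": { "@id": "https://example.com/books/reviews/", "name": "Reviews" }
    }
  ]
}
</script>
```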
Technical SEO | netzkern_AG
URL, Breadcrumb/Site Hierarchy Display, User (and Bot) Expectations
TL;DR: Do the parts of URLs that are used quite consistently throughout the web (/tag/, /product/, /wiki/) have any influence on robots (or users)? Are there any studies? What would you use for pages that are something between a tag page and a wiki-like article?

Long version: on a site with a lot of content I decided to use tags to present articles on a topic together. My first thought was to simply list those under the URL /tag/{Tag_Name}. Short. Simple. Grabs the core meaning: on this page you'll find stuff about the tag. But: those tag pages will be more than just lists of the tagged pages (say the content consists of articles on various topics and of products with certain attributes, and the same tag can apply to a product and an article). The tag pages themselves will often talk a lot about the use of said tag - extensively, without blabbering. Each is meant to be a landing page and hub for its tag/keyword. With this in mind, I pondered using /wiki/. It fits in some respects, but it really is not a wiki. /info/, /lexicon/, /knowledge/ and other ideas came to mind, but the more I thought about it, the weirder I found most of them. What I am now wondering: do these parts of URLs (/tag/, /product/, /wiki/), which in most cases are not really keywords, have any influence on search engines? They are used quite consistently across the web and could therefore serve as signals. I suspect, though, that they have more influence on shaping user expectations. (If I see /wiki/ in a URL or in a site-hierarchy display (breadcrumb), I expect... well, a wiki-style page; if I see /tag/, I expect a collection of stuff with that tag.) What would you choose for something that is not quite a tag page, nor quite a wiki, but something in between? Or do you think it does not matter at all? (Breadcrumbs will be used, and Google has used them for display in just about all SERPs.) Are there perchance any studies concerning these parts of URLs? Regards, Nico
On-Page Optimization | netzkern_AG
Linking to own homepage with keywords as link text
I recently discovered that previous SEO work on a client's website apparently included setting links from subpages to the homepage using, as link text, keywords that the whole website should rank for. I.e. (fictional example) a subpage about chocolate would link to the homepage via "Visit the best sweet shop in Dallas and get a free sample." I am dubious about the influence this might have - has anybody run any tests? I also find such links quite odd in terms of user-friendliness - at least I would not expect such a link to take me to the homepage of the very site I was just on, probably while browsing a relevant page. So, what about such links: actually helpful, mostly irrelevant, or even potentially harmful? Looking forward to your opinions! Nico
Intermediate & Advanced SEO | | netzkern_AG0 -
Duplicate content through product variants
Hi, before you shout at me for not searching - I did, and there are indeed lots of threads and articles on this problem. I therefore realise that it is not exactly new or unique. The situation: I am dealing with a website that has 1 to n variants of a product (n being between 1 and 6 so far). There are no dropdowns for variants - that is not technically possible short of a complete redesign, which is not on the table right now. The product variants are also not linked to each other, but share about 99% of their content (obvious problem here). In the "search all" they show up individually. Each product variant is a different page, unconnected in the backend as well as the frontend. The system is quite limited in what can be added and entered - I may have some opportunity to influence smaller things, such as enabling canonicals. In my opinion, the optimal choice would be to retain one page for each product, the base variant, and then add dropdowns to select extras/other variants. As that is not possible, I feel the best solution is to canonicalise all versions to one (either the base variant or the best-selling variant?) and to offer customers a list on each product page giving them a direct path to the other variants. I'd be thankful for opinions, advice, or completely new approaches I have not even thought of! Kind regards, Nico
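A sketch of the canonical setup proposed above, with hypothetical URLs - every variant page declares the base variant as canonical, while visible links keep the other variants reachable for customers:

```html
<!-- On /product-a-variant-2.html (and every other variant): -->
<link rel="canonical" href="https://www.example.com/product-a.html">

<!-- Visible on each variant page, so customers can still switch: -->
<ul>
  <li><a href="/product-a.html">Product A (base variant)</a></li>
  <li><a href="/product-a-variant-2.html">Product A, variant 2</a></li>
  <li><a href="/product-a-variant-3.html">Product A, variant 3</a></li>
</ul>
```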
Technical SEO | netzkern_AG
Issues with Duplicates and AJAX-Loader
Hi, on one website the "real" content is loaded via AJAX when the visitor clicks on a tile (I'll call a page with such tiles a "tile page" here). At that point a parameter is added to the URL and the content of that tile is displayed. That content is also available via a URL of its own... which is actually never called. What I want to achieve is a canonicalised tile page that gets credited with all of the tiles' content and is indexed by Google - if possible with Google also recognising that the single URLs of the tiles are only fallback solutions and that the tile page should be displayed instead. The current setup leads to duplicate meta tags, titles etc. and to minimal differences between what Google considers pages of their own (i.e. the same page with different tiles' contents). Does anybody have an idea what one can do here?
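One way to express the intended hierarchy is a canonical from each parameterised (and each standalone) tile URL back to the tile page - a sketch with hypothetical URLs; note this only helps if Google can actually reach the tiles' content from the canonical target:

```html
<!-- On /tiles.html?tile=3 and on the tile's standalone URL: -->
<link rel="canonical" href="https://www.example.com/tiles.html">
```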
Technical SEO | netzkern_AG
How to avoid a redirect chain?
Hi there, I am aware that it is not good practice to have a redirect chain, but I am not really sure how to avoid one (on Apache). I have multiple redirects in a chain because, on the one hand, the content of the site got a new URL and, on the other hand, I changed from HTTP to HTTPS. Thus I have a chain like:

http://example.com via 301 to http://the-best-example.com via 301 to https://the-best-example.com via 301 to https://greatest-example.com

Obviously I want to clean this up without losing any link juice or visitors who had bookmarked my site. So I could set up three separate redirects:

http://example.com via 301 to https://greatest-example.com
http://the-best-example.com via 301 to https://greatest-example.com
https://the-best-example.com via 301 to https://greatest-example.com

But is there a way to combine them? Can I use an "OR" operator to link the three conditions to one rule (a possible version is sketched below)? Any other suggestions? Thanks a lot!
Technical SEO | netzkern_AG
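A minimal sketch of such a combined rule for .htaccess, assuming mod_rewrite and that %{HTTPS} is set correctly on this server (behind a proxy/load balancer it may not be); host names are taken from the question:

```apache
RewriteEngine On
# Redirect everything that is not already https://greatest-example.com:
# condition 1 (wrong host) OR condition 2 (plain http)
RewriteCond %{HTTP_HOST} !^greatest-example\.com$ [NC,OR]
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://greatest-example.com/$1 [R=301,L]
```

This collapses all three (in fact, any) old URLs into a single 301 hop while preserving the requested path.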