Getting rid of taxonomies so that articles enhance related pages' value
-
Hello,
I'm developing a website for a law firm, which offers a variety of services.
The site will also feature a blog with similarly named topics. As is customary, these topics were set up as taxonomies.
But I want the articles to enhance the value of the service pages themselves. Because the taxonomy URL /category/divorce has no relationship to the actual service page URL /practice-areas/divorce, I'm worried that, if anything, a redundantly titled taxonomy URL would dilute the value of the service page it relates to.
Sure, I could show some of the related posts on the service page, but if a visitor wants to view more, they're suddenly bounced over to a taxonomy page that steals thunder from the more important service page.
So I did away with these taxonomies altogether, and posts are now associated with pages directly via a custom database table.
Now, if I visit the blog page, instead of a list of category terms it shows a list of the service pages. If a visitor clicks on a topic, they're directed to /practice-areas/divorce/resources (the subpages are created dynamically), and the posts are shown there.
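To make the setup concrete, here's roughly how the association works, sketched in Python with an in-memory SQLite table. The table, column, and function names here are placeholders for illustration, not my actual WordPress schema:

```python
import sqlite3

# Hypothetical join table linking blog posts to the service pages they support.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE post_page (post_slug TEXT, page_slug TEXT)")
conn.executemany(
    "INSERT INTO post_page VALUES (?, ?)",
    [
        ("dividing-marital-assets", "divorce"),
        ("child-custody-basics", "divorce"),
        ("writing-a-will", "estate-planning"),
    ],
)

def resources_url(page_slug):
    """Dynamic sub-page URL where a service page's related articles live."""
    return f"/practice-areas/{page_slug}/resources"

def posts_for_page(page_slug):
    """All posts associated with a given service page."""
    rows = conn.execute(
        "SELECT post_slug FROM post_page WHERE page_slug = ?", (page_slug,)
    )
    return [r[0] for r in rows]

print(resources_url("divorce"))   # /practice-areas/divorce/resources
print(posts_for_page("divorce"))
```

The point is simply that the post-to-page relationship lives in one small table, so the blog index and the dynamic /resources subpages can both query it directly.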
I'll have to use custom breadcrumbs to make it all work. Just wondering if you guys had any thoughts on this. I really appreciate any feedback, and thanks for reading.
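For concreteness, the custom breadcrumb logic I have in mind would just derive the trail from the URL path, something like this sketch (Python for illustration only; in the real site the labels would come from page titles, not from slug-mangling):

```python
def breadcrumbs(path):
    """Build a list of (label, url) breadcrumb pairs from a URL path.

    Labels are derived naively from slugs here; a real implementation
    would look up each page's actual title.
    """
    parts = [p for p in path.strip("/").split("/") if p]
    trail = [("Home", "/")]
    url = ""
    for part in parts:
        url += "/" + part
        label = part.replace("-", " ").title()
        trail.append((label, url))
    return trail

print(breadcrumbs("/practice-areas/divorce/resources"))
```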
-
Thank you for taking the time to respond. Makes a lot of sense, I appreciate it.
-
It is true that having pages with the same "page-name" (the last part of the URL following the final slash; e.g. the page-name of this question is "ridding-of-taxonomies-so-that-articles-enhance-related-page-s-value") which are also topically very similar can cause 'jumpy' SERPs.
Many feel that the dangers of what is termed 'keyword cannibalisation' are over-egged. That may be true, but I have (myself) assuredly seen examples of it in action. It usually occurs most prominently when neither page strongly eclipses the other in terms of SEO authority (e.g. inbound signals like referring domains, citations across the web, and general 'buzz' associated with a given URL).
If both pages are new, with little authority (or 'popularity') bound to their unique addresses, then Google can certainly get confused. You can end up with problems like earning a decent ranking for a related keyword, but the ranking hops from page to page every day or week as Google's algorithm bubbles away in the background. This can make it hard to drive traffic to the correct destination.
If both pages are very specific about the keywords they are targeting, you could turn references to those keywords on the page you don't want to rank into hyperlinks pointing to the URL which you do want to rank (sorry, that was a bit of a mouthful).
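As a rough illustration of what I mean, here's a minimal sketch in Python. The function name is my own invention, the HTML handling is deliberately simplified, and for real content you'd want a proper HTML parser rather than a regex; treat this as the idea, not production code:

```python
import re

def linkify_first(html_text, keyword, target_url):
    """Turn the first mention of `keyword` into a link pointing at the
    page you *do* want to rank. Case-insensitive, and only the first
    occurrence, to avoid stuffing the page with links."""
    pattern = re.compile(re.escape(keyword), re.IGNORECASE)
    return pattern.sub(
        lambda m: f'<a href="{target_url}">{m.group(0)}</a>',
        html_text,
        count=1,
    )

print(linkify_first(
    "Our divorce lawyers handle every divorce case personally.",
    "divorce",
    "/practice-areas/divorce",
))
```

Note it deliberately links only the first mention; linking every occurrence is exactly the over-linking I warn against further down.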
Although TBPR (Toolbar PageRank) was done away with aeons ago, 'actual' PageRank is still at large within Google's ranking algorithm(s). When one page links to another page with anchor text that matches a keyword, it 'gives away' some of its (ranking) value to the page receiving the link (for the specific keyword or collection of keywords / search entity in question). Think of links as 'votes' from one page to another. The difference between this and real voting is that, for Google, not all votes are equal (links from more authoritative pages boost the receiving pages more than links from pages that nobody cares about). Not very progressive, but still...
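To see how the 'votes' metaphor plays out, here's a toy power-iteration version of the classic PageRank calculation over a tiny three-page site. This is purely illustrative of the original published algorithm; Google's real ranking involves vastly more signals than this:

```python
def pagerank(links, damping=0.85, iterations=50):
    """Toy PageRank by power iteration over {page: [pages it links to]}."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if outlinks:
                # Each page splits its 'vote' among the pages it links to.
                share = rank[page] / len(outlinks)
                for target in outlinks:
                    new[target] += damping * share
            else:
                # Dangling page: spread its rank evenly across the site.
                for target in pages:
                    new[target] += damping * rank[page] / n
        rank = new
    return rank

ranks = pagerank({
    "home": ["blog", "divorce"],
    "blog": ["divorce"],     # the blog 'votes' for the divorce page
    "divorce": ["home"],
})
print(ranks)
```

Because the divorce page receives links from both other pages, it ends up with the highest score, which is the whole point of funnelling internal links at the URL you want to rank.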
In general, we in SEO abused this mechanic between different domains, resulting in Google's current clamp-down on EMA (exact-match anchor, i.e. keyword anchor text) linking. That being said, the risk from doing the same thing internally within your own website is extremely minimal, as you are just redistributing SEO authority from one page to another along a specific axis of relevance.
That's not like doing it from one domain to another, obviously to leech authority from an external site to your own, which in most cases is a violation of Google's Webmaster Guidelines.
Do be careful, though: don't overdo this. If the content of the page you don't want to rank ends up stuffed full of hyperlinks, that could make the page look spammy, hurt your CRO, or earn a Panda-related algorithmic devaluation.
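If you want a crude sanity check before publishing, you could measure what fraction of a page's words sit inside links. There's no official threshold, so the cut-off is your own judgement call; this is just an assumed heuristic, and again a real HTML parser would be more robust than regex:

```python
import re

def link_density(html_text):
    """Rough heuristic: fraction of visible words that sit inside <a> tags."""
    def visible_words(fragment):
        # Strip any tags, then count whitespace-separated words.
        return len(re.sub(r"<[^>]+>", " ", fragment).split())

    linked = " ".join(re.findall(r"<a\b[^>]*>(.*?)</a>", html_text, re.S))
    total = visible_words(html_text)
    return visible_words(linked) / total if total else 0.0

print(link_density('Call our <a href="/practice-areas/divorce">divorce team</a> today.'))
```

If the number creeps uncomfortably high, that's a sign the page is drifting into the link-stuffed territory described above.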
Just don't go mental, everything should be fine.