Wiki/Knowledge bases
-
Hi
A client of mine is creating a knowledge base/wiki for their website. They're using their supplier's own knowledge base (basically, they're a reseller). What would be best practice with regard to duplicate content? Would it be best to make all the pages nofollow, and block the pages in robots.txt?
-
Hi Tom
Yes, that makes sense. I think a robots meta tag with noindex,nofollow would be the best solution.
-
If the pages will be exact duplicates, you could do either of the options you've given above, or you could use a canonical tag and point it to the original page.
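For illustration, the canonical option would look something like this in the head of each duplicated page (the URL here is a placeholder standing in for the supplier's original article):

```html
<!-- In the <head> of the reseller's duplicated page; the href is a
     placeholder and should point at the supplier's original article -->
<link rel="canonical" href="https://supplier.example.com/kb/original-article" />
```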
My personal preference would be to add a noindex tag in the head of the page, so it would be: `<meta name="robots" content="noindex">`
Of course, this means the page won't be indexed which is a shame as a knowledge base can be a great way of pulling in long-tail keyword traffic. If you ever wanted to rank it, however, the content would need to be made unique.
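If you went the robots.txt route instead, it would be a sketch along these lines (the /kb/ path is a placeholder). One caveat worth noting: a page blocked in robots.txt can't be crawled, so Google would never see a noindex tag on it; pick one mechanism or the other rather than combining them.

```
# robots.txt (the path is a placeholder for wherever the duplicated KB lives)
User-agent: *
Disallow: /kb/
```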
Hope this helps.
Related Questions
-
Should I noindex/nofollow a faceted navigation page?
I have an ecommerce website with 4 departments that share the same categories. For example, a bicycle shop would have different products for mountain biking and road cycling, but they would both share the same 'tyres' category. I get around this by having the department as a filter that changes the products on show and adds a URL parameter of ?department=1. When this filter is applied, I have a canonical link set up pointing to the non-filtered category. Any filter links are nofollowed. My top menu has 4 different sections, one for each department, and links to these URLs with the department parameter already on; these links are set to allow robots to follow. As I am actively pointing Google at these pages, and it is my main navigation, should the page they go to be noindexed? It's the canonical I want to rank. Hopefully this makes sense. Cheers
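The setup described above would look roughly like this on a filtered URL (domain and paths are placeholders following the question's example):

```html
<!-- On https://example.com/tyres?department=1 (placeholder URL):
     the canonical points at the unfiltered category page -->
<link rel="canonical" href="https://example.com/tyres" />
```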
Technical SEO | SEOhmygod
-
JSON-LD metadata: Do you have any rules/recommendations for using BlogPosting vs Article?
Dear Moz Community. I'm looking at moving from inline Microdata in the HTML to JSON-LD on the web pages that I manage. It seems a far simpler solution having all the metadata in one place, especially for troubleshooting! With this in mind I've started to change the page templates on my personal site before I tackle the ones for my eCommerce site. I've made a start, and I'm still working on the templates producing some default values (like if a page doesn't have an associated image), but I have been wondering if any of you have any rules/recommendations for using BlogPosting vs Article? I'd call this type of page an Article:
https://cycling-jersey-collection.com/browse-collection/selle-italia-chinol-seb-bennotto-1982-team-jersey
Whereas this page is from the /blog, so that should probably be a BlogPosting:
https://cycling-jersey-collection.com/blog/2017-worldtour-team-jerseys
I've used the following resources, but it would be great to get a discussion going on here:
https://yoast.com/structured-data-schema-ultimate-guide/
https://developers.google.com/search/docs/data-types/data-type-selector
https://search.google.com/structured-data/testing-tool/u/0/
I'm keen to get this 100% right, as once this is done I'm going to drive through some further changes to get some progress on things like this:
https://moz.com/blog/ranking-zero-seo-for-answers
https://moz.com/blog/what-we-learned-analyzing-featured-snippets
Kind regards, Andy
Technical SEO | andystorey
-
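As a rough illustration, a minimal JSON-LD block for the blog post above might look like this (the headline, dates, and author are placeholders; in schema.org, BlogPosting is a subtype of Article, so Article's properties remain valid on it):

```html
<!-- Placed in the <head>; all values except the URL are illustrative -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "BlogPosting",
  "mainEntityOfPage": "https://cycling-jersey-collection.com/blog/2017-worldtour-team-jerseys",
  "headline": "2017 WorldTour Team Jerseys",
  "datePublished": "2017-01-01",
  "author": { "@type": "Person", "name": "Andy" }
}
</script>
```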
Title Tag vs. H1 / H2
OK, title tag, no problem: it's the SEO juice, appears on the SERP, etc. Got it. But I'm reading up on H1 and getting conflicting bits of information... Only use H1 once? H1 is crucial for the SERP? Use H1s for subheads? Google almost never looks past H2 for relevance? So say I've got a blog post with three sections... do I use H1 three times (or does Google think you're playing them...)? Or do I create a "big" H1 headline and then use H2s? Or just use all H2s because H1s are scary? 🙂 I frequently use subheads; it would seem weird to me to have one a font size bigger than another, but of course I can adjust that in settings... Thoughts? Lisa
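For what it's worth, the conventional structure (this reflects the common "one H1 per page" recommendation, not a definitive ruling from this thread) would be:

```html
<!-- One H1 for the post title, H2s for the three section subheads -->
<h1>Post Title</h1>
<h2>Section One</h2>
<p>...</p>
<h2>Section Two</h2>
<p>...</p>
<h2>Section Three</h2>
<p>...</p>
```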
Technical SEO | ChristianRubio
-
Why are URLs like www.site.com/#something being indexed?
So, everything after a hash (#) is not supposed to be crawled and indexed. Has that changed? I see a client's site with all sorts of URLs indexed, like: http://www.website.com/#!category/c11f For the above URL, I thought it was the same as simply http://www.website.com/. But they aren't; they're getting indexed, and all the content on the pages with these hash fragments is getting crawled as well. Thanks!
Technical SEO | wiredseo
-
Creating in-text links with 'target=_blank': helping or hurting SEO?
Good morning Mozzers, I have a question regarding a new linking strategy I'm trying to implement at my organization. We publish 'digital news magazines' that often have in-text links pointing to external sites. More recently, the editorial department and I (SEO) conferred on some ways to reduce our bounce rate and increase time on page. One of the suggestions I offered is to add the 'target=_blank' attribute to all the links so that site visitors don't have to leave the site in order to view the link. It has, however, come to my attention that this can have some negative effects on my SEO program, most notably fake or inaccurate time-on-page figures. Is this an advisable way to create in-text links? Are there any other negative effects I can expect from implementing such a strategy?
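One side note, not raised in the question itself: when opening links in a new tab this way, the usual guidance is to pair target="_blank" with rel="noopener" for security (the URL below is a placeholder):

```html
<!-- Opens in a new tab; rel="noopener" prevents the opened page
     from accessing window.opener on the original page -->
<a href="https://example.com/source-article" target="_blank" rel="noopener">source article</a>
```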
Technical SEO | NiallSmith
-
Block /tag/ or not?
I've asked this question in another area, but now I want to ask it as a bigger question. Do we block /tag/ with robots.txt or not? Here's why I ask: my WordPress site does not block /tag/, and I have many /tag/ results in the top 10 results of Google. Have for months. The question is, does Google see /tag/ on WordPress as duplicate content? Moz says it's duplicate content, but it's a tag; it's not really content per se. I'm all for optimizing my site, but Google is not penalizing me for /tag/ results. I don't want to block /tag/ if Google is not seeing it as duplicate content, for one reason: I have many results in the top 10 on Google. So, can someone who knows more about this weigh in on the subject? I really would like an accurate answer. Thanks in advance...
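For reference, an alternative to blocking in robots.txt is to leave /tag/ crawlable but add a meta robots tag on the tag archive templates (a sketch; as the question notes, this may well be undesirable here, since these tag pages are deliberately ranking):

```html
<!-- On tag archive templates only: keeps the pages out of the index
     while still letting crawlers follow the links on them -->
<meta name="robots" content="noindex,follow">
```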
Technical SEO | MyAllenMedia
-
Redirection based on country and impact on rankings
I have a website that ranks number 1 in google.co.uk for its main key term and ranks number 4 in google.com.au. However, I would rather that visitors in Australia see domain.com.au rather than domain.co.uk. Is there a way to achieve this using some clever 301s, without impacting the rankings in Australia, so that the .com.au starts to rank in place of the .co.uk? Or would you advise launching a .com.au version of the site, hosted separately, with unique/rewritten content on it?
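Not from the question itself, but the standard mechanism for signalling country-specific versions to Google is hreflang annotations rather than geo-based 301s (the domains below are placeholders following the question's pattern, and each page should list itself plus its alternates):

```html
<!-- In the <head> of both the .co.uk and .com.au version of each page -->
<link rel="alternate" hreflang="en-gb" href="https://www.domain.co.uk/page" />
<link rel="alternate" hreflang="en-au" href="https://www.domain.com.au/page" />
<link rel="alternate" hreflang="x-default" href="https://www.domain.co.uk/page" />
```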
Technical SEO | thefresh
-
Crawl issues / .htaccess issues
My site is getting crawl errors inside Google Webmaster Tools. Google believes a lot of my links point to index.html when they really do not. That is not the problem, though; it's that Google can't give credit for those links to any of my pages. I know I need to create a rule in the .htaccess, but the last time I did it I got an error. I need some assistance on how to go about doing this; I really don't want to lose the weight of my links. Thanks
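A commonly used rule for this situation, offered as a sketch rather than a drop-in fix (test it on a staging copy first, since a bad rule here can cause redirect loops), 301-redirects direct requests for /index.html to the root:

```apache
# .htaccess: 301 any direct browser request for /index.html to /
# THE_REQUEST matches the raw request line, so internal rewrites
# to index.html (e.g. DirectoryIndex) are not caught in a loop
RewriteEngine On
RewriteCond %{THE_REQUEST} \s/index\.html[\s?] [NC]
RewriteRule ^index\.html$ / [R=301,L]
```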
Technical SEO | automart