NOINDEX,NOFOLLOW - Any SEO benefit to these pages?
-
Hi
I could use some advice on a site architecture decision. I am developing something akin to an affiliate scheme for my business. However, it is not quite as simple as a standard affiliate setup, because the products sold through "affiliates" will be slightly different; as a result, I intend to run the site from a subdomain of my main domain.
I am intending to NOINDEX,NOFOLLOW the subdomained site because it will contain huge amounts of duplication from my main site (it is really a subset of the main site with some slightly different functionality in places). I don't really want or need this subdomain site indexed, hence my decision to NOINDEX,NOFOLLOW it.
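For clarity, the blanket tag I am planning to put on every page of the subdomain is just the standard robots meta tag, something like this (the subdomain name is a placeholder):

```html
<!-- On every page of trade.example.com, the affiliate subdomain -->
<meta name="robots" content="noindex,nofollow">
```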
However, given that I will, hopefully, have lots of people linking into the subdomain, I am hoping to come up with some arrangement whereby my main domain derives some benefit from those links. They are, after all, votes for my business, so they feel like "good links". I am assuming here that a direct link into my NOINDEX,NOFOLLOW subdomain is going to provide ZERO benefit to my main domain. Happy to be corrected!
The best I can come up with is to have a "landing page" on my main domain which links into parts of my main domain and then provides a link through to the subdomain site. However, this feels like a bad experience from the user's point of view (i.e. land on a page and then have to click to get to the real action) and a bit spammy, in that I don't really have a good reason for this page other than linking!
Equally, I could NOINDEX,FOLLOW the homepage of the affiliate site and link back to the main domain from there. However, this also feels a bit spammy and would, I guess, be far less beneficial, because the subdomain homepage would have many more outgoing links than I envisaged for my "landing page" idea above. It also looks a bit suspect (i.e. why follow the homepage and nofollow everything else?)!
The trouble, I guess, is that whatever I do feels a bit spammy. I suppose this is because IT IS spammy! Has anyone got any good ideas on how I could set up an arrangement like the one described above and derive benefit to my main domain without it looking (or being) spammy? I just hate to think of all of those links being wasted (in an SEO sense).
Thanks
Gary
-
Ha, brilliant. Take care!
-
Would you believe me if I told you I had a brother called Derek? Most don't but it is sadly TRUE! Named before the show, my parents aren't that cruel.
I think we might have just excluded any US readers of this thread!
Have a good day!
-
Ha, no worries, glad it helped. I find I have to let things percolate or just draw them on a big whiteboard. Often, the answer is pretty simple; it's just all too easy to box yourself into an 'I must do it this way' way of thinking when, with a little flexibility, things are easily solved.
So, you're a Trotter, any relation to Derek?
-
Thanks Marcus. Today I learnt that I think best in the shower...
Sometimes it just helps to get the question out there for others to comment on - your responses obviously got me thinking!
Thanks to all for your input!
-
Gary, I think that is spot on, we are telling you how to suck eggs!
-
After a long hot shower, I just thought this one up. How about...
1.) I give each affiliate/trade user a URL with an affiliate ID appended, e.g. ?ID=123, which points at my website's homepage.
2.) If a user lands on my site with a URL containing an affiliate ID, the homepage served up to them contains links that take them to pages on the affiliate subdomain (this version of the homepage will be slightly different from the standard homepage). If the user navigates anywhere from this page, they end up surfing the subdomain pages (all of which will be NOINDEX,NOFOLLOW).
3.) The "homepage" that gets displayed to the user is always INDEX,FOLLOW and has a rel=canonical tag pointing to the homepage itself.
I "think" that this way my main domain gets the benefit of the links, and users always get the version of the site they are looking for, without any extra "spammy" landing pages. Anyone see any problems with this?
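To make sure I'm being clear, here is a rough sketch of the server-side decision I have in mind. The function names, parameter name and URLs are just placeholders for illustration, not real code from my site:

```python
from urllib.parse import urlparse, parse_qs

MAIN_HOMEPAGE = "https://www.example.com/"  # placeholder domain

def render_decision(url):
    """Decide which homepage variant and which robots/canonical tags to serve.

    Returns (variant, robots, canonical) for the requested URL.
    """
    query = parse_qs(urlparse(url).query)
    if "ID" in query:
        # Affiliate visitor: serve the affiliate-flavoured homepage, but keep
        # it indexable and canonicalise to the standard homepage so any links
        # pointing at ?ID=... URLs consolidate on the main domain.
        return ("affiliate", "index,follow", MAIN_HOMEPAGE)
    # Standard visitor: the normal homepage, same robots/canonical treatment.
    return ("standard", "index,follow", MAIN_HOMEPAGE)

def subdomain_robots():
    # Every page on the affiliate subdomain itself carries a blanket noindex.
    return "noindex,nofollow"
```

So the only branch is on the presence of the affiliate ID; everything the user reaches after that first click lives on the noindexed subdomain.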
-
Is that really right? If that were the case, presumably Google needs an index of NOINDEX pages, or am I way off?
If "site A" gets a link from a NOINDEX page, then Google must have some sort of record of that page's "worth" (for want of a better word) in order to attribute some value to "site A", that page's "worth" being derived from the pages that link to it.
That suggests to me (and my poor befuddled brain) that NOINDEX means the page is still indexed, it just doesn't show in search results?
-
Thanks for your response. I don't think I can realistically go down the route of rewriting the entire site (the products change every year, so it would not be a one-off cost by any means); I would not get a return on that investment/time. And I suppose, if I thought I might, then why bother making it an affiliate site at all?
I agree with "a bit risky or just wouldn't work" - that's why I am asking because I didn't much like my own ideas!
Thanks again.
-
Well, it's true: in SEO you really can learn something every day, even after ten-plus years.
I did not think the link value would pass through if the page was not indexed; almost like a cup with a hole, it would not hold the benefit.
-
I agree with Marcus about doing the writing. In the meantime, I would NOINDEX, FOLLOW. The FOLLOW will allow PageRank to flow through these pages into the other pages of your site. If you use NOFOLLOW, that PageRank is lost.
I have some syndicated content and some thin content on one of my sites that use NOINDEX, FOLLOW.
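For anyone reading later, the tag this approach implies on each duplicated page would be something like:

```html
<!-- Keep the page out of results, but let crawlers follow its links -->
<meta name="robots" content="noindex,follow">
```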
-
In my mind the best solution here would be to write (or have written) unique content for the subdomain so you could have it indexed, get increased exposure via search, and win backlinks from those pages into the main site.
Anything else would be either a bit risky or just wouldn't work.
Hope that helps
Marcus