Moz Q&A is closed.
After more than 13 years, and tens of thousands of questions, Moz Q&A closed on 12th December 2024. While we're not completely removing the content (many posts will still be viewable), we have locked both new posts and new replies. More details here.
Will noindex pages still get link equity?
-
We think we get link equity from some large travel domains to white label versions of our main website. These pages are noindex because they're the same URLs and content as our main B2C website and have canonicals to the pages we want indexed. Question is, is there REALLY link equity to pages on our domain which have "noindex,nofollow" on them?
Secondly, we're looking to put all these white label pages on a separate structure, to better protect our main indexed pages from duplicate content risks. The best bet would be to put them in a subfolder rather than on a subdomain, yes? That way, even though the pages are still noindex, we'd get link equity from these big domains to www.ourdomain.com/subfolder, which we wouldn't get at subdomain.ourdomain.com?
Thank you!
-
According to John Mueller, the answer is no (at least in the long term)
https://www.seroundtable.com/google-long-term-noindex-follow-24990.html
-
Thanks for your advice chaps - ultimately a change is coming in a couple of weeks, might update this page if it's useful...
-
I agree with Gaston's view. What is stopping you, though, from changing the nofollow tag to follow and keeping the canonical and noindex? That way you wouldn't have duplicate content issues (whether on the main domain, a subfolder, or a subdomain) and would still pass link equity.
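A minimal sketch of what that suggested combination would look like in the `<head>` of a white-label page (the URLs here are placeholders, not the poster's real domains):

```html
<!-- White-label page: keep it out of the index, but let Googlebot
     follow its links and honour the canonical to the B2C version. -->
<meta name="robots" content="noindex,follow">
<link rel="canonical" href="https://www.example.com/destination-page/">
```

Worth noting: as the John Mueller comments linked earlier in this thread suggest, Google may eventually treat a long-term noindex,follow page as noindex,nofollow anyway, so this preserves link equity in the short term rather than guaranteeing it indefinitely.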
-
Hello Josep,
Firstly, the noindex tag by itself doesn't stop PageRank from being transferred; the nofollow tag is the problem here.
Remember that link equity is passed when you/the page lets Googlebot go to and "follow" the next page.
Secondly, if you still respect the noindex, the canonical, and all the other correct measures to prevent duplicate content, there will be no difference between a subfolder and a subdomain.
Hope it helps.
Best Luck.
GR.