Is my "term & conditions"-"privacy policy" and "About Us" pages stealing link juice?
-
Should I make them nofollow?
Or is this a bogus method?
-
Hi Keri...thanks for sharing some insight on my response as well ;>)
Is the Panda update the reason they should be indexed? Since Panda, I believe Google now wants to see the contact page and those two other pages...hmmm. Thanks for the clarity!
-
I wouldn't bother with nofollow; I would put all three on the same page and consolidate that way. If you wanted, you could leave your navigation as is by using anchors to jump to specific parts of the page (see the sketch below).
I don't like to noindex or block the legal pages, as they are a signal of trust.
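A minimal sketch of that anchor setup, assuming a single combined page at a hypothetical /legal URL (the ids and headings are illustrative):

    <!-- navigation links pointing at sections of the combined page -->
    <a href="/legal#terms">Terms & Conditions</a>
    <a href="/legal#privacy">Privacy Policy</a>
    <a href="/legal#about">About Us</a>

    <!-- on the /legal page itself, matching id attributes mark each section -->
    <h2 id="terms">Terms & Conditions</h2>
    <h2 id="privacy">Privacy Policy</h2>
    <h2 id="about">About Us</h2>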
-
There's a good discussion about this in a very similar post from a few hours ago at http://www.seomoz.org/q/should-i-make-all-my-non-money-pages-no-follow. The short answer is that a couple of years back it may have helped, but not anymore.
You do want them indexed: you want the search engines to see that you have a privacy policy, and both the privacy policy and the About Us page help the engines trust your site just a tad more.
-
Yes...if those pages are of no importance to you, then you are hurting yourself and your site -- just use the noindex, nofollow tag. This will prevent them from being indexed and stop juice passing to those insignificant pages.
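For reference, the noindex/nofollow tag that answer describes is a single line in each page's <head>:

    <meta name="robots" content="noindex, nofollow">

noindex asks engines to keep the page out of their index, and nofollow tells them not to follow (or pass juice through) its links.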
Related Questions
-
Robots.txt, Disallow & Indexed Pages
Hi guys, hope you're well. I have a problem with my new website. I have three pages with the same content: http://example.examples.com/brand/brand1 (the good page), http://example.examples.com/brand/brand1?show=false, and http://example.examples.com/brand/brand1?show=true. The good page has rel=canonical and should be the only one to appear in search results, but Google has indexed all three pages. I don't know what to do now, but I'm considering two possibilities: 1) remove the filters (true, false), leave only the good page, and show a 404 page for the other pages; or 2) update robots.txt with a disallow for these parameters and remove those URLs manually (see the sketch below). Thank you so much!
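A minimal sketch of the robots.txt rule from option 2, assuming Google's wildcard syntax and the show parameter from the URLs above:

    User-agent: *
    Disallow: /brand/*?show=

Keep in mind robots.txt only blocks crawling; URLs that are already indexed may linger until they are removed, which is why the question pairs the disallow with manual removal.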
Intermediate & Advanced SEO | thekiller990
-
Help with Schema & what's considered "Spammy structured markup"
Hello all! I was wondering if someone with a good understanding of schema markup could please answer my question about correct usage so I can address a penalty I just received. My website is using the following schema markup for our reviews, and today I received this message in my Search Console. UGH...

Manual Actions: This site may not perform as well in Google results because it appears to be in violation of Google's Webmaster Guidelines. Site-wide matches (some manual actions apply to the entire site). Reason: Spammy structured markup -- markup on some pages on this site appears to use techniques such as marking up content that is invisible to users, marking up irrelevant or misleading content, and/or other manipulative behavior that violates Google's Rich Snippet Quality guidelines.

I have used the Webmaster rich snippets tool, but everything checks out. The only thing I could think of is my schema tag for "product", rather than using a company-like tag (https://schema.org/Corporation). We are a mortgage company, so we sell a product -- it's called a mortgage -- so I assumed Product would be appropriate. Could that even be the issue? I checked another site that uses similar markup and they don't seem to have any problems in the SERPs: http://www.fha.com/fha_reverse shows stars, and they call their reviews "store". OR could it be that I added my reviews in my footer so that each of my pages would have a chance at displaying my stars? All our reviews are independently verified, and we just want to showcase them. I greatly appreciate the feedback and had no intention of abusing the markup. From my site: All Reverse Mortgage -- 4.9 out of 5 -- 301 Verified Customer Reviews from eKomi (https://www.ekomi-us.com/review-reverse.mortgage.html).
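For comparison, a minimal JSON-LD sketch of the same rating using a more specific type than Product -- FinancialService is an assumption about the better fit, and the figures are taken from the question:

    {
      "@context": "https://schema.org",
      "@type": "FinancialService",
      "name": "All Reverse Mortgage",
      "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.9",
        "bestRating": "5",
        "reviewCount": "301"
      }
    }

Note that Google's guidelines expect review markup to describe the specific item on that page, so repeating the same rating in a site-wide footer is a common trigger for exactly this manual action.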
Intermediate & Advanced SEO | reversedotmortgage
-
Canonical & rel=NOINDEX used on the same page?
I have a real estate company, www.company.com, with approximately 400 agents. When an agent gets hired, we allow them to pick a URL, which we then register and manage: for example, www.AGENT1.com. We then 301 redirect that agent domain to a subdomain of our main site; for example, Agent1.com 301's to agent1.company.com. We have each page on the agent subdomain canonicalized back to the corresponding page on www.company.com: for example, agent1.company.com canonicals to www.company.com. What happened is that Google indexed many URLs on the subdomains, and it seemed like Google ignored the canonical in many cases. Although these URLs were being crawled and indexed by Google, I never noticed any of them rank in the results. My theory is that Google crawled the subdomain first, indexed the page, and then later crawled the main URL. At that point in time the two pages actually looked quite different from one another, so Google did not recognize/honor the canonical. For example: agent1.company.com/category1 gets crawled on day 1, and company.com/category1 gets crawled five days later. The content (recently listed properties for sale) on these category pages changes every day. If Google crawled both the subdomain and the main domain on the same day, the content would look identical; if the URLs are crawled on different days, the content will not match. We had some major issues (duplicate content and site speed) on our www.company.com site that needed immediate attention. We knew we had an issue with the agent subdomains and decided to block crawling of the subdomains in the robots.txt file until we got the main site "fixed". We have seen a small decrease in organic traffic from Google to our main site since blocking the crawling of the subdomains, whereas with Bing our traffic has dropped almost 80%. After a couple of months, we now have our main site mostly "fixed", and I want to figure out how to handle the subdomains in order to regain the lost organic traffic. My theory is that these subdomains have some link juice that is basically being wasted with the robots.txt block in place. Here is my question: if we put a robots NOINDEX on all pages of the subdomains and leave the canonical (to the corresponding page of the company site) in place on each of those pages (see the sketch below), will link juice flow to the canonical version? Basically, I want the link juice from the subdomains to pass to our main site but do not want the pages competing for a spot in the search results with our main site. Another thought I had was to place the noindex tag only on the category pages (the ones that seem to change every day) and leave it off the product pages (property detail pages that rarely ever change). Thank you in advance for any insight.
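A minimal sketch of the tag combination the question proposes, as it would sit in the <head> of a subdomain page (the URLs are illustrative):

    <link rel="canonical" href="https://www.company.com/category1">
    <meta name="robots" content="noindex">

Worth knowing: Google has described noindex combined with a cross-domain canonical as conflicting signals (noindex says "drop this page", the canonical says "this page is a copy of that one"), so it may honor one and ignore the other.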
Intermediate & Advanced SEO | EasyStreet
-
"near me" campaign
I'm looking at running a campaign to get a site ranking for terms that include "near me" -- for instance, "personal trainers near me" or "yoga lessons near me". I'm wondering whether this should be a local campaign, because of the "near me" in the term and Google basing results on the IP address of the searcher (if that's possible, instead of town names), or whether it will come down to the words on the page including "near me". Any help or examples would be hugely appreciated. Thanks, community!
Intermediate & Advanced SEO | Marketing_Today
-
Unpaid Followed Links & Canonical Links from Syndicated Content
I have a user of our syndicated content linking to our detailed source content. The content is being used across a set of related sites and is driving good-quality traffic. The issue is how they link and what it looks like. We have tens of thousands of new links showing up from more than a dozen domains and hundreds of subdomains, but all coming from the same IP, and the growth rate is exponential. The implementation was supposed to include canonical tags so Google could properly identify the owner and not have the duplicate syndicated content potentially outranking the source. The canonical links are missing, and the links to us are followed. While the links are not paid for, it looks bad to me. I have asked the vendor to nofollow the links and implement the agreed-upon canonical tag (see the sketch below). We have no warnings from Google, but I want to head that off and do the right thing. Is this the right approach? What would you do while waiting on the site owner to make the fixes, to reduce the possibility of Penguin/Google concerns? Blair
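A minimal sketch of what the agreed-upon fix might look like on each syndicated copy (the source URL is a hypothetical placeholder):

    <!-- in the <head> of the syndicated page, pointing at the original -->
    <link rel="canonical" href="https://www.sourcesite.com/original-article">

    <!-- links back to the source, marked nofollow -->
    <a href="https://www.sourcesite.com/original-article" rel="nofollow">Read the original article</a>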
Intermediate & Advanced SEO | BlairKuhnen
-
De-indexing product "quick view" pages
Hi there, the e-commerce website I am working on seems to index all of the "quick view" pages (which normally appear as iframes on the category pages) as their own unique pages, creating thousands of duplicate pages and overly dynamic URLs. Each indexed "quick view" page has the following URL structure: www.mydomain.com/catalog/includes/inc_productquickview.jsp?prodId=89514&catgId=cat140142&KeepThis=true&TB_iframe=true&height=475&width=700, where the only things that change are the product ID and category number. Would using "disallow" in robots.txt be the best way to de-index all of these URLs? If so, could someone help me structure the disallow statement? Would it be: Disallow: /catalog/includes/inc_productquickview.jsp?prodID=* (see the sketch below)? Thanks for your help.
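A hedged sketch of that rule: since every quick-view URL shares the same path, matching the path prefix alone should catch every parameter variant, with no trailing wildcard needed:

    User-agent: *
    Disallow: /catalog/includes/inc_productquickview.jsp

One caveat: disallowing crawling does not remove URLs that are already indexed; a noindex meta tag (or X-Robots-Tag header) served while the pages are still crawlable is usually the surer route to de-indexing.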
Intermediate & Advanced SEO | FPD_NYC
-
Do 404 Pages from Broken Links Still Pass Link Equity?
Hi everyone, I've searched the Q&A section, and also Google, for about the past hour and couldn't find a clear answer on this. When inbound links point to a page that no longer exists, producing a 404 error page, is link equity/domain authority lost? We are migrating a large e-commerce website and have hundreds of pages, with little to no traffic, that have legacy 301 redirects pointing to their URLs (see the sketch below). I'm trying to decide how necessary it is to keep these redirects. I'm not concerned about the page authority of the pages with little traffic; I'm concerned about the overall domain authority of the site, since that certainly plays a role in how the site ranks overall in Google (especially for pages with no links pointing to them -- a perfect example is Amazon: thousands of pages with no external links that rank #1 in Google for their product names). Anyone have a clear answer? Thanks!
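For concreteness, a minimal sketch of the kind of legacy redirect being weighed, in Apache .htaccess form (the paths are hypothetical):

    # forward the retired URL so inbound links resolve instead of 404ing
    Redirect 301 /old-category/old-product https://www.example.com/new-category/new-product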
Intermediate & Advanced SEO | M_D_Golden_Peak
-
What is the best way to optimize/set up a teaser "coming soon" page for a new product launch?
Within the context of a physical product launch, what are some ideas around creating a /coming-soon page that "teases" the launch? Ideally I'd like to optimize the page around the product, but the client wants to build consumer anticipation without giving too many details away. Any thoughts?
Intermediate & Advanced SEO | GSI