Noindex user profile
-
I have a social networking site with user and company profiles. Some profiles have little to no content. One of the users here at Moz suggested noindexing these profiles. I am still investigating this issue and have some follow-up questions:
- What is the possible gain of noindexing uninteresting profiles? I'm especially interested in this since these profiles do bring in long-tail traffic at the moment.
- How "irreversible" is introducing a noindex directive? Would everything "return to normal" if I remove the noindex directive?
- When determining the threshold for having profiles indexed, how should the following items be weighed?
- Total word count on the page (made up of one or more of the following: full name, city, 0 to N company names, bio, activity)
- (Unique) profile picture
- (Nofollowed) links to the user's profiles on social networks or the user's own site
- Embedded Google Map
Thanks!
-
The one thing I would add to your list of criteria, if you choose to go that route, is to look at your Google Analytics landing pages and make sure the individual profiles don't get any inbound search traffic.
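As a rough sketch of that check: export the Landing Pages report from Google Analytics (filtered to the organic-search segment) as CSV, then flag any profile URLs that never appear in it. The column name and the idea of comparing against your own list of profile URLs are assumptions about your setup, not a prescribed workflow.

```python
import csv

def profiles_without_search_traffic(profile_urls, ga_csv_path):
    """Return profile URLs that never appear as an organic landing page.

    Assumes `ga_csv_path` is a Landing Pages export from Google
    Analytics, filtered to organic search, with a 'Landing Page'
    column. Pages absent from the export received no organic
    entrances during the reporting period.
    """
    with open(ga_csv_path, newline="") as f:
        landing_pages = {row["Landing Page"] for row in csv.DictReader(f)}
    return [url for url in profile_urls if url not in landing_pages]
```

Profiles returned by this check are the safest noindex candidates, since removing them from the index costs no existing search traffic.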
-
The gain would be that you don't index a bunch of URLs on your site that contain essentially similar/thin content. I wouldn't necessarily count those that do bring in long-tail traffic as ones you'd want to noindex. Things will return to normal once you remove the noindex, but unless you have decent links pointing to those profiles, it may take several months for them to be recrawled. I'd weigh most heavily links (followed or nofollowed) to the profiles from decent sites, as well as activity that shows on the profile page. The rest I wouldn't consider in the threshold calculation.
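To make that weighting concrete, here is a minimal sketch of a threshold function along those lines. The field names, weights, and threshold are illustrative assumptions only; you'd tune them against your own data.

```python
def should_index(profile, threshold=5):
    """Rough index/noindex decision for a profile page.

    Follows the advice above: inbound links from decent sites
    and on-page activity count most; the other signals from the
    original list (word count, photo, map) are ignored.
    All weights are illustrative assumptions.
    """
    score = 0
    score += 3 * profile.get("inbound_links_from_decent_sites", 0)
    score += 2 * profile.get("activity_items", 0)
    return score >= threshold
```

A profile with a couple of decent inbound links would clear this threshold even with no activity; a profile with neither signal would not.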
-
1. Unless you have a big thin-content problem, there is no gain.
2. Completely reversible; just remove the tag and wait.
3. You will have to decide, but you seem to be on the right track.
4. The question you should have asked: is there any downside to noindexing these pages? Yes, there is. All links pointing to a noindexed page will leak their link juice. Noindex is a last resort; I have never used it.
If you must noindex a page, do it with a meta noindex,follow tag (note that's "follow", not "nofollow"); then your link juice will flow into the page and back out again.
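For instance, a minimal sketch of emitting that tag conditionally from a server-side template helper; the `should_index` flag is a hypothetical value your application would compute per profile:

```python
def robots_meta_tag(should_index):
    """Return the robots meta tag for a profile page.

    "noindex,follow" keeps the page out of the index while still
    letting crawlers follow its outbound links, per the advice
    above, so link juice flows through rather than dead-ending.
    """
    if should_index:
        return '<meta name="robots" content="index,follow">'
    return '<meta name="robots" content="noindex,follow">'
```

The tag must appear in the page's `<head>`, and the page must remain crawlable (not blocked in robots.txt) for the directive to be seen at all.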