Mask links with JS that point to noindexed pages
-
Hi,
in an effort to prepare our site for Panda we dramatically reduced the number of pages that can be indexed (from 100k down to 4k). All the remaining pages are being equipped with unique, valuable content.
We still have the other pages around, since they represent searches with filter combinations that we deem less interesting to the majority of users (hence they are not indexed). So I am wondering whether we should mask links to these non-indexed pages with JS, so that link juice isn't lost to them. Currently the targeted pages are noindexed via "noindex, follow" - we might block them with robots.txt, though, if the "site:" query doesn't show improvements.
Thanks,
Sebastian
-
Well, we just want to show fewer links to Google than to the user (the links shown to Google are still a subset of the links shown to users). The links we'd turn into JS links are those for less frequently applied search filters, which we don't index so as not to spam the search index.
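For what it's worth, the kind of JS link masking being discussed could be sketched like this (the class and attribute names are purely illustrative, not from any existing library):

```javascript
// Sketch of JS link masking: render the filter link without an href,
// keeping the target URL in a data attribute so the markup contains no
// crawlable <a href="...">. Names are illustrative.
function maskLink(url, label) {
  return '<span class="js-link" data-target="' +
    encodeURIComponent(url) + '">' + label + '</span>';
}

// At runtime a delegated click handler would perform the navigation, e.g.:
// document.addEventListener('click', function (e) {
//   var t = e.target.dataset && e.target.dataset.target;
//   if (t) window.location.href = decodeURIComponent(t);
// });
```

Since Google parses JavaScript to some degree, treat this as obfuscation rather than a guarantee that the links stay hidden.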
Fortunately, even if Google is smart enough to decipher the links, it wouldn't do any harm.
Thanks for your ideas though! Especially the "site:" point I had considered myself - it really takes ages until something is de-indexed (for us, using robots.txt sped it up by an order of magnitude).
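For reference, a robots.txt block for pages like that might look something like this (the path pattern is hypothetical - it depends on how your filter URLs are actually structured):

```
User-agent: *
Disallow: /*?filter=
```

Keep in mind that robots.txt stops crawling; URLs that are already indexed can linger in the index for a while after the block takes effect.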
-
Not to mention Google's ability to decipher JS to one degree or another, and they're working on improving that all the time. I've seen content they found that was supposed to be hidden in JS.
-
First, be aware that the "site:" query won't show improvements for a long time. I had a 15-page website I built for someone get indexed on the dev server by accident. I 301'd every page to the new site's real URLs. If I do a site: search on the dev URLs, they are still there, despite the fact that they have 301'd for nearly two months. One I did six months ago was only recently removed from the site: search.
If you link to your own pages that are not indexed for whatever reason, you could try to mask those links in JavaScript, but be aware of the fine line you walk. Google does not like anything that misleads them or users, and showing a link to users while hiding it from Google is not a good idea in my opinion. If you have content that isn't worth indexing, it shouldn't be worth linking to anyway.
Related Questions
-
Lots of links from a Wiki pointing at main site
Hi everyone. This may seem a bit obvious, but I am getting conflicting answers on this. We have a client with a wiki that is basically an online manual for their software. They do it like this because the manual is so big and constantly developing. There are thousands of pages with loads of links pointing to relevant sections of the main site as well. The majority of these are nofollow, but I have noticed a single link in the navigation that is a followed, direct link to their main site - obviously sitewide. Would this be seen as detrimental to the main site? Should I set this as nofollow as well? Thanks in advance.
Technical SEO | Andrew_Birkitt
-
My site was hacked and spammy URLs were injected that pointed outward. The issue was fixed, but GWT is still reporting more of these links.
Excuse me for posting this here; I wasn't having much luck going through GWT support. We recently moved our eCommerce site to a new server, and in the process the site was hacked. Spammy URLs were injected, all pointing outward to spammy eCommerce retail stores. I removed ~4,000 of these links, but more continue to pile in - there are now over 20,000. Note that our server support team does not see these links anywhere. I understand that Google doesn't generally view this as a problem, but is that true given my circumstance? I cannot imagine that 20,000 new, senseless 404s can be healthy for my website. If I can't get a good response here, does anyone know of a direct Google support email or number I can use for this issue?
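One common cleanup step (assuming an Apache server - the URL pattern below is purely hypothetical, substitute whatever pattern the injected URLs actually share) is to answer the spam URLs with 410 Gone, which Google tends to drop faster than plain 404s:

```
# .htaccess - return 410 Gone for the injected spam paths
# (replace the regex with the pattern your injected URLs share)
RedirectMatch 410 ^/cheap-.*$
```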
Technical SEO | jampaper
-
Too Many Page Links
I have 8 niche websites for golf clubs. This was done to carve out tight niches for specific types of clubs; each site then only broadens its club by type - i.e. better player, game improvement, max game improvement. So far, for fairly young sites (<1 year), they are doing fairly well as I build content. Running campaigns has alerted me to one problem - too many on-page links. Because I use WordPress, those links sit on each page in the right sidebar and lead to the other sites. Even though visitors arrive via organic search, in most cases they eventually exit to one of the other sites, or they click on a product (eBay) and venture off to hopefully make a purchase. Ex: the Drivers site has a picture link for each of the other 7 sites. Question: if I use one site (like a splash page) as a single link target listing all the sites with a brief explanation of each, will visitors bounce because they face an extra click - one click to the list, then another depending on which other club/site they want? The links all open in new windows. This would cut down on the number of links per page of each site, but will it create too much work for visitors and cause them to leave?
Technical SEO | NicheGuy
-
How Does Google's "index" find the location of pages in the "page directory" to return?
This is my understanding of how Google's search works, and I am unsure about one thing in particular: Google continuously crawls websites and stores each page it finds (let's call this store the "page directory"). Google's "page directory" is a cache, so it isn't the "live" version of the page. Google has separate storage called "the index", which contains all the keywords searched. These keywords in "the index" point to the pages in the "page directory" that contain the same keywords. When someone searches a keyword, that keyword is looked up in the "index" and returns all relevant pages from the "page directory". These returned pages are then ranked by the algorithm. The one part I'm unsure of is how Google's "index" knows the location of relevant pages in the "page directory". The keyword entries in the "index" point to the "page directory" somehow. I'm thinking each page has a URL in the "page directory", and the entries in the "index" contain these URLs. Since Google's "page directory" is a cache, would the URLs be the same as the live website (and would the keywords in the "index" point to these URLs)? For example, if a webpage is found at www.website.com/page1, would the "page directory" store this page under that URL in Google's cache? The reason I want to discuss this is to understand the effects of changing a page's URL by understanding how the search process works better.
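The mental model described above matches the classic "inverted index" design, where index entries point back to cached pages by their URL. A toy sketch of that flow (all names illustrative - this is of course not Google's actual implementation):

```javascript
// Toy sketch of the crawl -> store -> index -> lookup flow described above.
const pageDirectory = new Map(); // url -> cached copy of the page
const index = new Map();         // keyword -> Set of urls in the directory

function storePage(url, text) {
  pageDirectory.set(url, text);  // the cache is keyed by the page's URL
  for (const word of text.toLowerCase().split(/\W+/).filter(Boolean)) {
    if (!index.has(word)) index.set(word, new Set());
    index.get(word).add(url);    // the index entry points back via that URL
  }
}

function search(keyword) {
  const urls = index.get(keyword.toLowerCase()) || new Set();
  return [...urls].map(url => ({ url, cached: pageDirectory.get(url) }));
}
```

In a design like this, changing a page's URL means the old index entries point at a stale cache key until the page is recrawled under the new URL - which is why redirects matter.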
Technical SEO | reidsteven75
-
Why isn't Google pushing my Schema data to the search results page?
I believe we have it set up right. I'm noticing all my competitors' schema data is showing up, which is really giving them a leg up on us. We have a high-ranking website, so I'm just not sure why ours isn't showing up. Here is an example URL: http://www.airgundepot.com/3576w.html. I've used the Google Webmaster Tools tester and it all looks fine. Any ideas? Thanks in advance.
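For comparison, a minimal product markup in JSON-LD form looks like this (all values below are illustrative, not taken from the page in question):

```json
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Air Pistol",
  "offers": {
    "@type": "Offer",
    "price": "59.99",
    "priceCurrency": "USD",
    "availability": "https://schema.org/InStock"
  },
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.5",
    "reviewCount": "27"
  }
}
```

Note that markup validating in the testing tool doesn't guarantee rich snippets; Google decides case by case whether to display them.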
Technical SEO | AirgunDepot
-
Are these 'not found' errors a concern?
Our webmaster tools report is showing thousands of 'not found' errors for URLs that appear in JavaScript code. Is this something we should be concerned about, especially since there are so many?
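These errors often come from plain strings inside JavaScript that merely look like paths; the crawler can pick them up and resolve them against the page they appear on, producing URLs that never existed. A sketch of that resolution (the URLs are hypothetical):

```javascript
// How a relative-looking string inside JS becomes a crawlable (and often
// nonexistent) URL: it gets resolved against the page it appears on.
function resolveAgainstPage(pageUrl, jsString) {
  return new URL(jsString, pageUrl).href;
}
```

So a string like 'ajax/partial.html' in a script on /articles/page1 surfaces as a 'not found' error for /articles/ajax/partial.html.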
Technical SEO | nicole.healthline
-
Why would a link shown on OSE appear differently than the page containing the link?
I recently traded links with a site that I will call www.example.com. When I used Open Site Explorer to check the link, it came back attributed to www.example.com/index.htm with a different page authority, yet the link does appear on the www.example.com page. Why would this be?
Technical SEO | casper434
-
Duplicate titles OK if pages don't need to rank well?
I know it is not a good idea to have duplicate titles across a website, as Google does not like this. Is it OK to have duplicate titles on pages that aren't being optimised with SERPs in mind, or could this have a negative effect on the pages that are being optimised?
Technical SEO | iSenseWebSolutions