Best way to block a search engine from crawling a link?
-
If we have one page on our site that is only linked to by one other page, what is the best way to block crawler access to that page?
I know we could set the link to "nofollow", which would stop it from passing any authority, and we can set the page to "noindex" to keep it out of search results, but what is the best way to prevent the crawler from accessing that one link?
-
Hi there,
I'm assuming you are trying to do PageRank sculpting (or something related), which Google made rather more difficult a few years back when it changed how nofollow links handle PageRank. I'll base my answer on this assumption, so feel free to correct me if that isn't the case.
There are several methods to make a link uncrawlable:
- AJAX - Googlebot will not read content loaded through AJAX calls. If you can load your link through an external call, it is effectively hidden from the crawler.
- Javascript - Obfuscate the link with JavaScript that masks it. Any number of approaches work here, for example a non-anchor tag that carries the URL in an attribute and navigates to it when clicked (see the sketch after this list). Simple and effective.
- Redirects - I haven't tested this last idea, and it may not work. You might be able to redirect to another page on your site that is set not to be indexed, and then redirect on to the intended page via a query string. In theory it should work, but it is obviously not as clean as the previous two methods.
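To make the JavaScript option concrete, here is a minimal sketch; the class name, data attribute, and target URL are placeholder assumptions, not anything specific to your site:

<!-- No <a href>, so a crawler that doesn't execute JavaScript sees no followable link -->
<span class="js-link" data-href="/hidden-page.html" title="Hidden page">Hidden page</span>

<script>
// On click, read the URL from the data attribute and navigate to it
document.querySelectorAll('.js-link').forEach(function (el) {
  el.addEventListener('click', function () {
    window.location.href = el.getAttribute('data-href');
  });
});
</script>

Bear in mind that a crawler which does execute JavaScript could still discover the URL, so treat this as a deterrent rather than a guarantee.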
Let me know if you have questions. I'd be glad to help further.
Cheers!
-
Noindex/nofollow should be good enough, but if you want to be sure it doesn't get indexed, you can also include <meta name="robots" content="NOINDEX, NOFOLLOW"> in the head section of the page to be blocked. You can also exclude the page in your robots.txt file.
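For clarity, that tag belongs inside the page's <head>; a minimal example (the title and body text are just placeholders):

<!DOCTYPE html>
<html>
<head>
  <title>Page to keep out of the index</title>
  <!-- Tells compliant crawlers not to index this page or follow its links -->
  <meta name="robots" content="noindex, nofollow">
</head>
<body>
  Page content here.
</body>
</html>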
You can find a simple robots.txt generator in Google Webmaster Tools if you need to block particular pages or directories. The robots.txt file should be in the root directory of your site and look something like this:
User-agent: *
Disallow: /file-you-want-to-hide.html
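If you need to block a whole directory rather than a single page, the same pattern applies (the directory name here is just an example):

User-agent: *
Disallow: /directory-you-want-to-hide/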
You can also request removal of specific URLs in Webmaster Tools if the page has already been indexed.
Related Questions
-
Best way to "Prune" bad content from large sites?
I am in the process of pruning my sites for low-quality/thin content. The issue is that I have multiple sites with 40k+ pages and need a more efficient way of finding the low-quality content than looking at each page individually. Is there an ideal way to find the pages that are worth noindexing that will speed up the process but not potentially harm any valuable pages?

My current plan of action is to pull data from analytics, and if a URL hasn't brought any traffic in the last 12 months, assume it is a page that is not beneficial to the site. My concern is that some of these pages might have links pointing to them, and I want to make sure we don't lose that link juice. But assuming we just noindex the pages, we should still have the authority pass along... and in theory, the pages that haven't brought any traffic to the site in a year probably don't have much authority to begin with.

Any recommendations on the best way to prune content efficiently on sites with hundreds of thousands of pages? Also, is there a benefit to noindexing the pages vs. deleting them? What is the preferred method, and why?
Intermediate & Advanced SEO | atomiconline0
-
What's the best way to noindex pages but still keep backlink equity?
Hello everyone, maybe it is a stupid question, but I'll ask the experts... What's the best way to noindex pages but still keep the backlink equity from those noindexed pages? For example, let's say I have many pages that look similar to a "main" page, which is the only one I want to appear on Google, so I want to noindex all pages except that "main" page... but what if I also want to transfer any link equity present on the noindexed pages to the main page? The only solution I have thought of is to add a canonical tag pointing to the main page on those noindexed pages... but will that work, or will it wreak havoc in some way?
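For reference, the canonical approach described here would mean placing something like this in the <head> of each noindexed page (the URL is a placeholder):

<link rel="canonical" href="https://www.example.com/main-page/">

Be aware, though, that noindex and canonical can send Google conflicting signals about which page matters, so consolidating link equity this way is not guaranteed to work.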
Intermediate & Advanced SEO | fablau3
-
What is the best way to find related forums in your industry?
Hi guys, just wondering: what is the best way to find forums in your industry?
Intermediate & Advanced SEO | edward-may2
-
What's the best way to remove search-indexed pages on Magento?
A new client (aqmp.com.br/) called me yesterday, and she told me that since they moved to Magento they have dropped more than US$20,000 in monthly sales revenue... I've just checked Webmaster Tools and discovered that the number of crawled pages went from 3,260 to 75,000 since Magento started... Magento is creating lots of pages with queries like search and filters. Examples:
http://aqmp.com.br/acessorios/lencos.html
http://aqmp.com.br/acessorios/lencos.html?mode=grid
http://aqmp.com.br/acessorios/lencos.html?dir=desc&order=name
Is adding an instruction to robots.txt the best way to remove these unnecessary pages from the search engine?
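To illustrate the robots.txt approach for the filter and sort parameters in the example URLs above, wildcard rules along these lines would block those query-string URLs for crawlers that support wildcards, such as Googlebot (a sketch based only on the parameters shown, not a tested Magento configuration):

User-agent: *
Disallow: /*?mode=
Disallow: /*?dir=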
Intermediate & Advanced SEO | SeoMartin10
-
Do 404 Pages from Broken Links Still Pass Link Equity?
Hi everyone, I've searched the Q&A section, and also Google, for about the past hour and couldn't find a clear answer on this. When inbound links point to a page that no longer exists, thus producing a 404 Error Page, is link equity/domain authority lost? We are migrating a large eCommerce website and have hundreds of pages with little to no traffic that have legacy 301 redirects pointing to their URLs. I'm trying to decide how necessary it is to keep these redirects. I'm not concerned about the page authority of the pages with little traffic...I'm concerned about overall domain authority of the site since that certainly plays a role in how the site ranks overall in Google (especially pages with no links pointing to them...perfect example is Amazon...thousands of pages with no external links that rank #1 in Google for their product name). Anyone have a clear answer? Thanks!
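For context, the legacy 301 redirects in question are typically one-line server rules; an Apache .htaccess example with placeholder paths:

# Permanently redirect a legacy URL to its new location
Redirect 301 /old-product-page.html /new-product-page.html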
Intermediate & Advanced SEO | M_D_Golden_Peak0
-
Outbound link to PDF vs outbound link to page
If you're trying to create a site which is an information hub, obviously linking out to authoritative sites is a good idea. However, does linking to a PDF have the same effect? E.g., linking to Google's SEO starter guide PDF, as opposed to linking to a Google article on SEO. Thanks!
Intermediate & Advanced SEO | | underscorelive0 -
Best way to get pages indexed fast?
Any suggestions on the best ways to get a new site's pages indexed? I was thinking of getting high-PR inbound links on Fiverr, but that's always a little risky, right? Thanks for your opinions.
Intermediate & Advanced SEO | mweidner27820