Adding the link masking directory to robots.txt?
-
Hey guys,
Just want to know if you have any experience with this.
Is it worthwhile blocking search engines from crawling the link masking directory? (What I mean by this is the directory that holds the link redirectors to an affiliate site.)
Example:
mydomain.com/go/thislink redirects to amazon.com/affiliatelink
I want to know if blocking the 'go' directory from getting crawled in robots.txt is a good idea or a bad idea?
I am not using WordPress but rather a custom-built PHP site where I need to manually decide on these things.
I specifically want to know if this in any way violates Google's guidelines. It doesn't change the user experience, because visitors know exactly where they will end up if they click the link.
Any advice would be much appreciated.
-
Iredeto,
I do this on a few of my sites and it works out well. It saves on crawl budget, keeps Google from accessing my affiliate links, and keeps any PageRank from passing through the links, which keeps me in line with Google's webmaster policy on links.
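For reference, the robots.txt rule for this is a one-liner (assuming your redirectors live under /go/ as in your example; adjust the path to match your setup):

```text
User-agent: *
Disallow: /go/
```

Note that `Disallow: /go/` blocks every URL whose path starts with /go/, so all of your individual redirect links are covered by the single rule.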
One thing to keep in mind is that Google may still show those URLs in the search results with a message that they can't show the content because it has been blocked. If you want to keep this from happening, you may have to remove that directory in GWT using the URL Removal tool. If the URLs get re-indexed after 90 days (or whatever the reset time frame is), you will have to do it again. Hopefully that won't be an issue once you get them all removed the first time and block the folder.
Using rel="nofollow" on the hyperlinks going to that directory wouldn't hurt either.
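In other words, a masked affiliate link in your page markup would look something like this (using the hypothetical /go/thislink path from the question, with placeholder anchor text):

```html
<a href="/go/thislink" rel="nofollow">Check the price on Amazon</a>
```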
-
Hi!
I know of cases where you can hide the contents of a folder from Google without affecting the rest of the website.
Keep in mind that you are dealing with a machine that knows how to crawl and index content.
Also be sure to add noindex, nofollow directives to those URLs, and remove the URLs from Google's index if they were already indexed before you blocked them.
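Since you're on a custom PHP site, one way to sketch this (a hypothetical redirector script, not your actual code) is to send the directive as an X-Robots-Tag HTTP header, which works for a redirect script that has no HTML head for a meta tag:

```php
<?php
// Hypothetical handler for a /go/ redirect URL.
// Tell search engines not to index or follow this redirector...
header('X-Robots-Tag: noindex, nofollow');
// ...then send the visitor on to the affiliate destination.
header('Location: https://amazon.com/affiliatelink', true, 302);
exit;
```

The destination URL here is just the placeholder from the question; in practice you would look up the real affiliate URL for each /go/ slug.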
Best regards!