- sparrowdog
Latest posts made by sparrowdog
-
RE: Nofollow affiliate links
I wish! The affiliate program is new, so it's an unknown quantity for me; the seven figures is just normal sales. The in-house software lets a blogger choose a product and generates custom code and a thumbnail to embed on their site, much as Amazon does, but they'll have to add the nofollow by hand, or I'll need to pay to have the software modified, which is probably what I'll end up doing. I have no desire to pick up a manual penalty for manipulating PageRank, and I'm surprised it isn't part of the software already.
-
RE: Nofollow affiliate links
If you're an Amazon affiliate, it is.
By having millions of links pointing to their website, where people are being paid to place them, they're encouraging the manipulation of Google PageRank. I don't see why the rules should be any different for them just because they're reputable. My site is reputable too: it's an online store with a seven-figure annual turnover. I'm just not 'Amazon'.
So I was curious as to why they take the link juice from their affiliate links when the rest of us are encouraged to nofollow them. Everything I have read so far suggests that if you run an affiliate program, all the links where people link back to your store should be nofollowed, and if you're running a blog with lots of affiliate links on it, you should protect your blog by adding nofollow to all your advertising links.
My in-house software doesn't add nofollow by default, so I'm either going to have to get it rewritten or give my affiliates manual instructions on how to alter their code. I chose not to use ClixGalore or a similar network so I don't have to pay their fees.
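For anyone writing those manual instructions: the change is a single attribute on the link the software spits out. A minimal sketch, using a made-up domain and tracking parameter since the exact x-cart output will differ:

    <a href="https://www.example-store.com/product.html?partnerid=1234">
      <img src="https://www.example-store.com/thumbs/product.jpg" alt="Product name">
    </a>

becomes:

    <a href="https://www.example-store.com/product.html?partnerid=1234" rel="nofollow">
      <img src="https://www.example-store.com/thumbs/product.jpg" alt="Product name">
    </a>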
-
Nofollow affiliate links
I am setting up an affiliate program using software already built in to my shop (X-Cart). The links generated by the software do not have rel="nofollow" in them. I'm assuming they should?
Looking at Amazon, there must be millions of links out there pointing back to Amazon, and all of those links are followed, passing link juice back to them.
Am I missing something? Surely best practice here is rel="nofollow" so you're not seen to be manipulating Google PageRank?
-
RE: URL parameters affecting link juice
I have an SEO pack installed on my shop that produces the ....product.html portion of the URL.
However, a layout feature on the site calls some JavaScript and appends some 'junk' to the end of it. I am going to have that code removed from the page, but in the meantime I have people linking to ....product.html?&cat=0&featured=Y instead of .....product.html
Likewise, I am about to set up an affiliate program, and while I will be directing my affiliates to nofollow their links, I also want to set up an affiliate account for in-house use and put those links in blog posts to see whether the posts are converting into sales. These URLs will also have a parameter on the end, e.g. ......product.html?partnerid=1234
Will I get the internal keyword link juice from the URLs with the partner ID on the end?
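On the partner-ID question: one common way to consolidate signals from parameterised URLs, assuming the shop's page templates can be edited, is a canonical link element on the product page, so that links to the ?partnerid= and ?cat= variants are attributed to the clean URL. A sketch with a placeholder domain:

    <!-- in the <head> of product.html, output on every URL variant -->
    <link rel="canonical" href="https://www.example-store.com/product.html">

Google treats the canonical as a strong hint rather than a directive, but for simple tracking parameters it is usually honoured.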
-
URL parameters affecting link juice
I have a couple of quirks in my online shop that I'm ironing out. One of them is that extra URL parameters get added to product links, e.g. website.com/product.html?&cat=0&featured=Y
If someone links to this URL, will I get the link juice as if it were website.com/product.html ?
I have the URL parameters listed in Webmaster Tools, and robots.txt is set up to block them, so they're not in the Google index. But I have found a few websites that have linked to us using these longer URLs, and I'm wondering whether to write to them and ask if they'd mind changing the links.
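For reference, a robots.txt rule covering parameterised URLs like these might look like the lines below (Googlebot supports * wildcards in Disallow paths). One caveat: blocking crawling this way also stops Google from seeing a canonical tag on those URLs, so it's one approach or the other. Hypothetical patterns, not the poster's actual file:

    User-agent: *
    # block crawling of the parameterised duplicates
    Disallow: /*?*cat=
    Disallow: /*?*featured=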
-
RE: Google Shopping Feed being blocked by robots.txt
When I do a manual fetch, the /images folder isn't on the blocked list anymore.
For now I have re-uploaded the images into a new folder and I'm creating my feed by hand, so that should work around it for the time being.
Thanks for your help.
-
Google Shopping Feed being blocked by robots.txt
I had created a manual Google Shopping feed that was working fine, and then someone well-meaning put a block in my robots.txt file so Google couldn't read the images folder. Because of this, Google now won't accept my feed.
I changed the robots.txt file to allow them to read the images again, but it's been three days now and I'm still getting the error saying my products are disallowed because the robots.txt file won't let them scan for images.
Does anyone know how long it will take for Google to see the change?
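For anyone hitting the same thing, the before/after in robots.txt usually looks something like this, assuming the images live under /images/ (an explicit Allow isn't strictly needed once the Disallow is gone, but it makes the intent obvious):

    # before: the well-meaning block
    User-agent: *
    Disallow: /images/

    # after: images crawlable again
    User-agent: *
    Allow: /images/

Google generally re-fetches robots.txt within about a day, so a delay of several days is more likely the feed items waiting to be re-checked than the robots.txt itself being stale.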
-
RE: Using the Google Remove URL Tool to remove https pages
Thanks so much for taking the time to respond.
I think I will add the https:// version to WMT and remove them that way.
I will take a look through the .htaccess file and at creating the SSL robots file. A while back, Google seemed to be indexing a lot of my site as https, and then it dropped those and went mainly back to http. I will get that sorted out so it's unambiguous.
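For context, the 'SSL robots file' technique mentioned above generally works by serving a different robots.txt to requests that arrive over HTTPS, so the https:// duplicates can be disallowed without touching http:// crawling. A minimal mod_rewrite sketch, assuming Apache; the robots_ssl.txt filename is illustrative:

    # in .htaccess: serve a separate robots file over HTTPS
    RewriteEngine On
    RewriteCond %{HTTPS} =on
    RewriteRule ^robots\.txt$ robots_ssl.txt [L]

robots_ssl.txt would then contain a blanket disallow:

    User-agent: *
    Disallow: /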
-
Using the Google Remove URL Tool to remove https pages
I have found a way to get a list of 'some' of my 180,000+ garbage URLs now, and I'm going through the tedious task of putting them into the URL removal tool one at a time. Between that, my robots.txt file, and the URL Parameters settings, I'm hoping to see some change each week.
I have noticed that when I put URLs starting with https:// into the removal tool, it adds the http:// main URL to the front.
For example, I add to the removal tool:
https://www.mydomain.com/blah.html?search_garbage_url_addition
On the confirmation page, the URL actually shows as:
http://www.mydomain.com/https://www.mydomain.com/blah.html?search_garbage_url_addition
I don't want to accidentally remove my main URL or cause problems. Is this how it should look?
AND PART 2 OF MY QUESTION
If the search result for a page you want removed shows the following in the SERPs, should I still go to the trouble of putting in the removal request?
www.domain.com/url.html?xsearch_...
A description for this result is not available because of this site's robots.txt – learn more.
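Worth noting on part 2: a robots.txt block stops crawling, not indexing, which is exactly why those URLs still show up with the 'description not available' snippet. If the aim is to get them out of the index without the removal tool, the usual route is to let Google crawl the pages again and serve a noindex, for example:

    <!-- in the <head> of pages that should drop out of the index -->
    <meta name="robots" content="noindex">

The removal tool does still accept robots.txt-blocked URLs, though, so the requests aren't wasted effort either way.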
-
RE: Is there a way to get a list of Total Indexed pages from Google Webmaster Tools?
Looks like I can only do the first thousand. It's a start, though. Thank you for the information.
Many of the URLs on my list, when put into Google search, turn up 80-100 other variants I can remove by hand.
http://www.mathewporter.co.uk/list-a-domains-indexed-pages-in-google-docs/ for anyone else following along.
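The linked technique boils down to pulling a site: query into a Google Docs spreadsheet with importxml. A rough sketch with a placeholder domain; the XPath is illustrative and tends to break whenever Google changes its result markup:

    =importxml("http://www.google.com/search?q=site:example.com&num=100", "//h3/a/@href")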