Site Wide Link Situation
-
Hi-
We have clients who are using an e-commerce cart that sits on a separate domain, and that cart appears to be adding site-wide links to our clients' websites. Would you recommend disallowing bots from crawling/indexing these via a robots.txt file, adding a nofollow meta tag on the specific pages where the shopping cart links are implemented, or adding a nofollow to every shopping cart link? Thanks!
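For reference, the second and third options mentioned above would be implemented roughly like this (a sketch; `cart.example.com` stands in for the actual cart domain). One caveat on the first option: a robots.txt file only governs crawling of URLs on the domain it is served from, so a rule in the client's own robots.txt cannot block crawling of URLs on the cart's separate domain.

```html
<!-- Option 2: page-level meta tag on the pages carrying the cart links.
     Caution: this nofollows EVERY link on the page, not just the cart links. -->
<meta name="robots" content="nofollow">

<!-- Option 3: link-level nofollow on each individual cart link -->
<a href="https://cart.example.com/checkout" rel="nofollow">View cart</a>
```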
-
Hi! Thanks! I completely understand. We would never want to prevent URLs on the client's domain from being crawled, since that could clearly put the client's online presence at risk. However, we're more concerned that Google will notice the shopping cart's domain pointing to every page of the client's website, which could appear unnatural and potentially put the client's site at risk. What we're hoping to achieve is to prevent Google from crawling the third-party URL on every page, to avoid any penalization.
-
Rez, you've got to consider a few things.
When looking at the site structure and information architecture (IA) of your site, you have to think about link juice flow as a funnel: more juice at the top means less juice distributed to the bottom. So for shopping cart pages or product pages (depending on how deep they are), I usually incorporate long-tail, targeted keywords (e.g. "Mimi Juie baby sippy cups") where the search volume isn't much, but it's targeted enough that even with a limited juice flow you can rank.
My initial suggestion to you was to contact the person or company that built the shopping cart and have the link removed (THAT IS MY FIRST OPTION). I would not put a nofollow on the product page (don't do anything crazy like that), especially if you have share bar options for your products, reviews, etc. (you will lose all that).
Your LAST option should be a robots.txt rule for ONLY that link's URL, NOT the whole page.
Again, please understand you should not DEVALUE your page like that.
Hope this helps.
Let me know how it turns out
Hampig M
BizDetox
-
Hi-
Thanks for the feedback! So robots.txt is the best way?
The shopping cart's URL does not have much authority, so it's not important for us to get link juice from the separate domain, which is why we're debating how to implement a nofollow. Do you see any harm in doing so?
Thanks,
Rez
-
Rez,
You should be able to remove that sitewide link from your shopping cart. I had a similar situation with a Joomla site I built that had a sitewide link on the JoomShopping product pages, and you can pay to have it removed. Unfortunately, that's the way it is. Take a look at the help files or forums for your shopping cart. Which shopping cart is it?
If you cannot remove it, then robots.txt is the best way. I would NOT put a nofollow on that page, unless you don't care about the data or about getting those pages ranked. But you're saying it's site-wide, so I am a little confused on that point.
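As a last resort, the robots.txt approach would look something like this (a sketch; the `/cart/` path is a placeholder for whatever path the cart URLs actually share, and the file has to live at the root of the domain that serves those URLs):

```text
# robots.txt — block crawling of the cart URLs only, not the pages that link to them
User-agent: *
Disallow: /cart/
```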
Hope it helps.
Best Wishes,
Hampig M
BizDetox