Moz Q&A is closed.
After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. While we’re not completely removing the content - many posts will still be viewable - we have locked both new posts and new replies. More details here.
Nofollow and ecommerce cart/checkout pages
-
Hi!!
Another noob question:
Should I be nofollowing my site's cart and checkout pages? Or, since search engines can't get to the checkout pages without either logging in or completing the form, is it something I shouldn't worry about? I've read things saying both and I'm not sure which is correct.
Thank you! Appreciate the help.
Lynn
-
Thank you James!! I really appreciate the insight and your patience.
Lynn
-
Yes, that's all correct.
-
On my site the only things that are accessible via HTTPS are the checkout pages and the my-account pages (or so I am told - still testing). So for these I could mark them "noindex, nofollow", correct, as I don't really want Google to crawl them? And robots.txt can accomplish the same thing? (robots.txt may be easier for me as it requires no dev time; I can't control this tag via the CMS.)
Thanks for the input!
Lynn
-
1. Yes.
2. Yes, robots.txt works too - there are numerous ways to get the same effect, so personal preference comes into it, plus one may be easier than another in your site/CMS. One caveat: robots.txt stops crawling, but a blocked URL can still end up in the index if other sites link to it, and Google can't see a noindex tag on a page it isn't allowed to crawl - so noindex is the more reliable way to keep a page out of the index. The reason I use noindex is that any page on my site could be accessed via HTTPS, so I prefer to dynamically insert noindex into any page that is accessed that way.
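For illustration, the robots.txt version of that advice might look like the following - the /checkout/ and /my-account/ paths are placeholders, not Lynn's actual URLs:

```
# robots.txt - placeholder paths; adjust to the site's real URL structure
User-agent: *
Disallow: /checkout/
Disallow: /my-account/
```

The meta-tag version is a single <meta name="robots" content="noindex, nofollow"> in the <head> of each page - which, per the caveat above, only works on pages robots.txt still allows Google to crawl.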
-
Hello!
Thank you both for taking the time to answer. A follow-up question just so I understand:
1. "noindex, follow" will allow SEs to crawl a page but NOT put it in the index correct?
2. Can't I also stop SE access to certain directories/pages by putting an entry in the robots.txt? This would stop crawling AND indexing correct?
Why would one use one over the other? Just want to understand the idea behind it.
Thank you so much guys!!
Lynn
-
The safest route is to "noindex, follow" any page that is requested over HTTPS - this also squashes duplicate content when a user accesses non-cart pages using HTTPS...
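Neither reply says what stack is involved, but here is a minimal sketch of the "dynamically add noindex to any HTTPS request" idea - the Flask app, route, and template are all assumptions for illustration, not the posters' actual setup:

```python
# Minimal sketch: emit "noindex, follow" on any page served over HTTPS.
# Flask and the page structure are illustrative assumptions.
from flask import Flask, request, render_template_string

app = Flask(__name__)

TEMPLATE = """<!doctype html>
<html>
<head>
  {% if noindex %}<meta name="robots" content="noindex, follow">{% endif %}
  <title>{{ title }}</title>
</head>
<body>{{ title }} content here</body>
</html>"""

@app.route("/<path:page>")
def render_page(page):
    # If the request came in over HTTPS, tag the page noindex so the
    # HTTPS duplicate of a normal HTTP page never enters the index.
    return render_template_string(
        TEMPLATE, noindex=(request.scheme == "https"), title=page
    )
```

Behind a proxy or load balancer you'd check the X-Forwarded-Proto header instead, since request.scheme only sees the local hop.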
-
Hey,
I'd 'noindex, nofollow' cart pages, as they are of no use to anyone searching and you're just going to dilute your authority through those extra pages.
DD
Related Questions
-
Sudden Indexation of "Index of /wp-content/uploads/"
Hi all, I have suddenly noticed a massive jump in indexed pages. After performing a "site:" search, it was revealed that the sudden jump was due to the indexation of many pages beginning with the SERP title "Index of /wp-content/uploads/" for many uploaded pieces of content & plugins. This appeared approximately one month after switching to HTTPS. I have also noticed a decline in Bing rankings. Does anyone know what is causing this and how to fix it? To be clear, these pages are **not** normal /wp-content/uploads/ pages - they are "Index of" directory listings being included in Google. Thank you.
Technical SEO | Tom3_150
-
Canonical for duplicate pages in ecommerce site and the product out of stock
I'm an SEO for an ecommerce site that sells shoes. I have duplicate pages for different colors of the same product (a unique URL for each color), and conventionally I have added canonical tags on each page pointing to a specific product URL. My question is: what happens when the product that Googlebot is directed to is out of stock but is still listed in the canonical tag?
Technical SEO | shoesonline0
-
Is it better to use XXX.com or XXX.com/index.html as canonical page
Is it better to use 301 redirects or a canonical tag? I suspect canonical is easier. The question is, which is the better canonical page, YYY.com or YYY.com/index.html? I assume YYY.com, since there will be many other pages such as YYY.com/info.html, YYY.com/services.html, etc.
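For reference, whichever URL is picked, the canonical tag itself is a single line in the <head> of every duplicate. A sketch assuming the bare domain wins (YYY.com is the question's own placeholder):

```html
<!-- On YYY.com/index.html and any other duplicate of the homepage -->
<link rel="canonical" href="http://YYY.com/">
```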
Technical SEO | Nanook10
-
How to Delete the slug /category/ from WordPress category pages
Hi all, I would like to ask what's the better way to eliminate the /category/ slug from WordPress category page URLs. I need to delete the /category/ slug to make the URLs SEO-friendly. The problem is that mine is an old site, with pages indexed by Google for a long time. Thanks for your advice.
Technical SEO | salvyy0
-
No_index of parent page
Hi, sorry, it's a Friday question... Page A: www.example.com/house/ Page B: www.example.com/house/kitchen Can I 'no_index' page A without it affecting page B being indexed? Views? Many thanks!
Technical SEO | Richard5551
-
Can you 301 redirect a page to an already existing/old page ?
If you delete a page (say a sub-department/category page on an ecommerce store), should you 301 redirect its URL to the nearest equivalent page still on the site, or just delete it and forget about it? Generally, should you try to 301 redirect any old pages you're deleting if you can find a suitable page with similar content to redirect to? Won't G consider it weird if you say a page has moved permanently to such-and-such an address when that page/address existed before? I presume it's fine, since in the scenario of consolidating departments on your store, you'd want to redirect the department page you're going to delete to the existing page/department you are consolidating the old department's products into?
Technical SEO | Dan-Lawrence0
-
ECommerce: Best Practice for expired product pages
I'm optimizing a pet supplies site (http://www.qualipet.ch/) and have a question about the best practice for expired product pages. We have thousands of products, and hundreds of our offers only exist for a few months. Currently, when a product is no longer available, the site just returns a 404. Now I'm wondering what a better solution could be:
1. When a product disappears, a 301 redirect is established to the category page it sits in (i.e. a leash would redirect to dog accessories).
2. After a product disappears, a customized 404 page appears, listing similar products (but the server returns a 404).
I prefer solution 1, but am afraid that having hundreds of new redirects each month might look strange. But then again, returning lots of 404s to search engines is also not the best option. Do you know the best practice for large ecommerce sites that have hundreds or even thousands of products appearing/disappearing on a frequent basis? What should be done with those obsolete URLs?
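To illustrate option 1 above, a minimal sketch of the redirect logic - Flask, the URL scheme, and the slug-to-category mapping are stand-ins for illustration, not qualipet.ch's actual setup:

```python
# Sketch of option 1: 301 expired product URLs to their parent category.
from flask import Flask, redirect

app = Flask(__name__)

# In a real shop this mapping would live in the product database;
# these slugs and categories are made up for illustration.
EXPIRED_PRODUCTS = {
    "dog-leash-red": "/dog-accessories/",
    "cat-tree-xl": "/cat-furniture/",
}

@app.route("/products/<slug>")
def product_page(slug):
    if slug in EXPIRED_PRODUCTS:
        # 301 is permanent, so search engines transfer the old URL's
        # equity to the category instead of keeping a dead product.
        return redirect(EXPIRED_PRODUCTS[slug], code=301)
    return f"live product page for {slug}"  # normal rendering goes here
```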
Technical SEO | zeepartner1
-
OK to block /js/ folder using robots.txt?
I know Matt Cutts suggests we allow bots to crawl CSS and JavaScript folders (http://www.youtube.com/watch?v=PNEipHjsEPU). But what if you have lots and lots of JS and you don't want to waste precious crawl resources? Also, as we update and improve the JavaScript on our site, we iterate the version number ?v=1.1... 1.2... 1.3... etc., and the legacy versions show up in Google Webmaster Tools as 404s. For example:
http://www.discoverafrica.com/js/global_functions.js?v=1.1
http://www.discoverafrica.com/js/jquery.cookie.js?v=1.1
http://www.discoverafrica.com/js/global.js?v=1.2
http://www.discoverafrica.com/js/jquery.validate.min.js?v=1.1
http://www.discoverafrica.com/js/json2.js?v=1.1
Wouldn't it just be easier to prevent Googlebot from crawling the js folder altogether? Isn't that what robots.txt was made for? Just to be clear - we are NOT doing any sneaky redirects or other dodgy JavaScript hacks. We're just trying to power our content and UX elegantly with JavaScript. What do you guys say: obey Matt, or run the JavaScript gauntlet?
Technical SEO | AndreVanKets
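For reference, the robots.txt change that last question is weighing would just be:

```
User-agent: *
Disallow: /js/
```

Worth noting before copying it: blocking /js/ also stops Google from rendering pages the way users see them, which is exactly why the advice in the linked video is to leave CSS and JavaScript crawlable; the 404 noise from old ?v= URLs is harmless by comparison.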