Nofollow and ecommerce cart/checkout pages
-
Hi!!
Another noob question:
Should I be nofollowing my site's cart and checkout pages? Or, since search engines can't get to the checkout pages without either logging in or completing the form, is it something I shouldn't worry about? I've read things saying both and I'm not sure which is correct.
Thank you! Appreciate the help.
Lynn
-
Thank you James!! I really appreciate the insight and your patience.
Lynn
-
Yes, that's all correct.
-
On my site the only things that are accessible via HTTPS are the checkout pages and the my-account pages (or so I am told - still testing). So for these I could mark them "noindex, nofollow", correct, since I don't really want Google to crawl them? And robots.txt can accomplish the same thing? (robots.txt may be easier for me, as it requires no dev time - I can't control this tag via the CMS.)
Thanks for the input!
Lynn
-
1. Yes.
2. Yes, robots.txt works too - there are numerous ways to get the same effect. Personal preference comes into it, plus one may be easier than another in your site/CMS. The reason I use noindex is that any page on my site could be accessed over HTTPS, so I prefer to dynamically throw a noindex into any page that is accessed that way.
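Roughly what the two options look like, if it helps. The robots.txt route - the paths below are just examples, use whatever your cart/checkout/account URLs actually are:

    # robots.txt
    User-agent: *
    Disallow: /cart/
    Disallow: /checkout/
    Disallow: /my-account/

And a bare-bones sketch of the dynamic approach, assuming a PHP-based template - this is not my exact code, just the idea, and your CMS will differ:

    <?php
    // In the shared header template: emit a robots meta tag only when the
    // current request came in over HTTPS (cart, checkout, my-account, or any
    // other page a visitor happens to load securely).
    $isHttps = (!empty($_SERVER['HTTPS']) && $_SERVER['HTTPS'] !== 'off')
        || (isset($_SERVER['SERVER_PORT']) && (int) $_SERVER['SERVER_PORT'] === 443);

    if ($isHttps) {
        // "noindex, follow" keeps the page out of the index but lets links pass;
        // use "noindex, nofollow" if you'd rather crawlers ignore the links too.
        echo '<meta name="robots" content="noindex, follow">' . "\n";
    }
    ?>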
-
Hello!
Thank you both for taking the time to answer. A follow-up question just so I understand:
1. "noindex, follow" will allow SEs to crawl a page but NOT put it in the index correct?
2. Can't I also stop SE access to certain directories/pages by putting an entry in the robots.txt? This would stop crawling AND indexing correct?
Why would one use one over the other? Just want to understand the idea behind it.
Thank you so much guys!!
Lynn
-
The safest route is to "noindex, follow" any page that is requested over HTTPS - this also squashes duplicate content when a user accesses non-cart pages via HTTPS...
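In practice that's one line in the <head> of any page served over HTTPS - something along these lines (illustrative only):

    <meta name="robots" content="noindex, follow">

Crawlers can still follow the links on the page; the page itself just stays out of the index.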
-
Hey,
I'd 'noindex, nofollow' cart pages, as they're of no use to anyone searching and you're just going to dilute your authority through those extra pages.
DD
Related Questions
-
Over 40 pages have been removed from the index and this page has been selected as the Google-preferred canonical.
Over 40 pages have been removed from the index, and https://studyplaces.com/about-us/ has been selected as the Google-preferred canonical for them. The pages affected include:
https://studyplaces.com/50-best-college-party-songs-of-all-time-and-why-we-love-them/
https://studyplaces.com/15-best-minors-for-business-majors/
As you can see, the content on these pages is totally unrelated to the content on the about-us page. Any ideas why this is happening and how to resolve it?
Technical SEO | pnoddy0 -
Will a Robots.txt 'disallow' of a directory keep Google from seeing 301 redirects for pages/files within the directory?
Hi - I have a client who had thousands of dynamic PHP pages indexed by Google that shouldn't have been. He has since blocked these PHP pages via a robots.txt disallow. Unfortunately, many of those PHP pages were linked to by high-quality sites multiple times (instead of the static URLs) before he put up the 'disallow'. If we create 301 redirects for some of these PHP URLs that still show high-value backlinks and send them to the correct static URLs, will Google even see the 301 redirects and pass link value to the proper static URLs? Or will the robots.txt keep Google away, so we lose all these high-quality backlinks? I guess the same question applies if we use the canonical tag instead of the 301: will the robots.txt keep Google from seeing the canonical tags on the PHP pages? Thanks very much, V
Technical SEO | Voodak0 -
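To picture the conflict this question describes (URLs made up): while the Disallow below is in place, Googlebot never requests the old PHP URL, so it never sees the 301 waiting there - and the same applies to a canonical tag, which Google can only read if it is allowed to fetch the page.

    # robots.txt
    User-agent: *
    Disallow: /catalog/

    # .htaccess (Apache) - only discovered once the old URL is crawlable again
    Redirect 301 /catalog/old-product.php https://www.example.com/products/old-product/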
Google showing https:// page in search results but directing to http:// page
We're a bit confused as to why Google shows a secure https:// URL in the search results for some of our pages, including our homepage, but when you click through it doesn't take you to the https:// page, just the normal unsecured page. This isn't happening for all of our results; most of our deeper content results are not showing as https://. I thought this might have something to do with Google conducting searches behind secure pages now, but the problem doesn't seem to affect other sites or our competitors. Any ideas as to why this is happening and how we get around it?
Technical SEO | amiraicaew0 -
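If the http version is the one that should rank, the usual consolidation signals are a canonical on the https copy pointing at it, or a 301 from https to http on non-secure pages. Hypothetical markup for the homepage:

    <link rel="canonical" href="http://www.example.com/">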
Cannot Save the SEO Settings on an Attachment/Media Page
I am trying to save SEO settings on a WordPress gallery attachment page for a picture. When I fill in all the info and hit save, all the writing disappears from the form. Is it a software bug, or is there a solution for it?
Technical SEO | ExpertSolutions0 -
Best way to create a shareable dynamic infographic - Embed / Iframe / other?
Hi all,
After searching around, there doesn't seem to be any clear agreement in the SEO community on the best way to implement a shareable dynamic infographic for other people to put into their sites - i.e. one that will pass credit for the links back to the original site. Consider the following example for the web application we are putting the finishing touches on: the underlying site has a number of content pages that we want to rank for. We have created a number of infographics showing data overlaid on a Google map. The data continuously changes, and there are JavaScript files that have to load in order to achieve the interactivity. There is one infographic per page on our site, and there is a link at the bottom of each infographic that deep links back to the specific page on our site.
What is the ideal way to implement this infographic so that the maximum SEO value is passed back to our site through the links? In our development version we have copied the YouTube approach and implemented it as an iframe, e.g. <iframe height="360" width="640" src="http://www.tbd.com/embed/golf" frameborder="0"></iframe>. The link at the bottom of that then points to http://www.tbd.com/golf. This is the same approach that YouTube uses; however, I'm nervous that the value of the link won't pass from the sites that are using the infographic. Should we do this as an embed object instead, or some other method?
Thanks in advance for your help. James
Technical SEO | jtriggs0 -
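For what it's worth, the common infographic-embed pattern is to hand out a snippet in which the iframe does the rendering and a plain anchor underneath carries the crawlable link - a link inside the iframed document lives on your own domain, so it isn't a link from the embedding site at all. Roughly, reusing the URLs from the question:

    <iframe height="360" width="640" src="http://www.tbd.com/embed/golf" frameborder="0"></iframe>
    <p>Interactive golf infographic by <a href="http://www.tbd.com/golf">tbd.com</a></p>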
Is it bad to have your pages as .php pages?
Hello everyone,
Is it bad to have your website pages indexed as .php? For example, the contact page is site.com/contact.php and not /contact. Does this affect your SEO rankings in any way? Is it better to have your pages without the extension? Also, if I'm working with a news site and the URLs are dynamic for every article (i.e. site.com/articleid=2323), should I change all of those dynamic URLs to static? Thank you.
Technical SEO | BruLee0 -
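The .php extension by itself isn't a ranking issue, but for anyone who does want extensionless URLs on Apache, a common mod_rewrite sketch looks like this (paths are examples; the old .php URLs are 301'd so existing links keep their value):

    RewriteEngine On

    # 301 requests for /contact.php to the clean /contact URL
    RewriteCond %{THE_REQUEST} \s/([^.?\s]+)\.php[\s?] [NC]
    RewriteRule ^ /%1 [R=301,L]

    # internally map /contact back onto contact.php
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteCond %{REQUEST_FILENAME}.php -f
    RewriteRule ^(.*)$ $1.php [L]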
/forum/ or /hookah-forum/
I'm building a new website on Hookah.org. It will have a forum and a blog. Should I put them at Hookah.org/hookah-forum/ and Hookah.com/hookah-blog/, or at Hookah.org/forum and Hookah.org/blog? I think /forum/ and /blog/ are easier for users, but I'm not sure how much adding the word hookah helps with SEO.
Technical SEO | Heydarian0 -
Duplicate Page Content and Title for product pages. Is there a way to fix it?
We were doing pretty well with our SEO until we added product listing pages. The errors are mostly Duplicate Page Content/Title - e.g. the title "Masterpet | New Zealand Products" shows up on both MasterPet product page 1 and MasterPet product page 2. Because the list of products is displayed across several pages, the crawler detects that these two URLs have the same title. We've gone from 0 errors two weeks ago to 14k+ errors. Is this something we could fix, or should we even bother fixing it? Will our SERP ranking suffer because of this? Hoping someone could shed some light on this issue. Thanks.
Technical SEO | Peter.Huxley590
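A common fix for paginated listings like this is to give each page in the series a unique title and a self-referencing canonical, plus prev/next annotations so crawlers treat the pages as one series rather than near-duplicates - e.g. for page 2 (URLs hypothetical):

    <title>Masterpet | New Zealand Products - Page 2</title>
    <link rel="canonical" href="http://www.example.com/brands/masterpet?page=2">
    <link rel="prev" href="http://www.example.com/brands/masterpet">
    <link rel="next" href="http://www.example.com/brands/masterpet?page=3">

Unique titles alone will clear most of the duplicate-title warnings in a crawler report.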