Noindex follow on checkout pages in 2017
-
Hi,
My website really consists of two separate sites:
Product site:
• Website with product pages.
• These product pages have SEO-optimised content.
Booking engine & checkout site:
• When a user clicks 'Book' on one of the product pages on the aforementioned product site, they go to a separate website, which is a booking engine and checkout.
• These pages do not contain quality, SEO-optimised content; they only perform the function of booking and buying.
Q1) Should I set 'noindex follow' via the meta tag on all pages of the 'Booking engine and checkout' site?
Q2) Should I add anything to the book buttons on the product site?
I am hoping all this will help concentrate the SEO juice on the product site's pages by declaring the booking engine and checkout site's pages to be 'not of any content value'.
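For reference, the tag I'm thinking of adding to the <head> of every page on the booking engine and checkout site would look something like this (shown only as an illustration):

<!-- In the <head> of each booking engine / checkout page -->
<!-- noindex keeps the page out of Google's index; follow still lets link equity pass through its links -->
<meta name="robots" content="noindex, follow">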
-
Hi
Ironically, Moz will pick this up as a problem, as it reports anything that is noindexed!
For me, I just ignore noindex as a reported problem in certain cases, as it clearly makes perfect sense to noindex certain pages, and indeed sometimes whole directories.
I sometimes find that developers have noindexed directories like /new-products or /sale, but there are clearly better ways of handling the potential duplicate content problem there, such as adding a canonical. In your case it makes no sense to have Google index the checkout pages.
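For example, on a duplicate listing under /sale, the canonical in the <head> would point back at the main product URL, something like this (the URLs here are placeholders, not his actual site):

<!-- In the <head> of https://www.example.com/sale/blue-widget -->
<!-- Points Google at the preferred version instead of hiding the duplicate with noindex -->
<link rel="canonical" href="https://www.example.com/products/blue-widget">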
Regards Nigel
-
Hi Martin / Nigel,
Thanks for your responses. Regarding Q1:
By adding 'noindex, follow' to the 'Booking engine and checkout' site's pages, will this also stop Moz from crawling these pages, and consequently remove 'issues' from the Moz Site Crawl issues count? It currently crawls these pages and picks up issues.
-
Hi Nigel,
You're right, I didn't think about the duplicates from UTM previously.
Thanks for the update.
Best, Martin
-
Hi Martin
Surely if the traffic was coming from a different source, then that would already be visible from the referring URL. Adding a UTM would simply create duplicate page content between the plain URL and the UTM-tagged URL.
He'd then be faced with the tricky and potentially dangerous task of messing with parameters. I just wouldn't mess with creating UTM-tagged URLs.
Apologies - I didn't mean to argue; I just couldn't understand your logic.
Regards Nigel
-
Hey Nigel,
As far as I understand his setup, it consists of two separate websites (unless by "site" he meant "page").
In that case, I think it would be useful to add UTM parameters so he can see exactly which source a user comes from (since those are two separate websites).
Also, I suppose that by clicking the book buttons on the product site, users are redirected to the booking site, so you would basically add the UTMs to those URLs.
If by "site" he only meant "page", then the solution would be different, of course.
Cheers, Martin
-
Hi Martin
Please can you explain why and how you would add UTM parameters to the book buttons on his website?
Thanks Nigel
-
Hey there,
Regarding Q1, I'd set 'noindex, follow', as you've said. Since the booking site has no content value for the visitor, there's no need for it to be found in the Google SERPs.
Regarding Q2, you can add UTM parameters to make the analytics easier in GA.
Since the booking site has no "content value", there's nothing more you can really pass.
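For example, a book button on the product site could link across to the booking site with UTM parameters appended, roughly like this (the domain and parameter values are placeholders, just to show the idea):

<!-- Book button on a product page, linking to the separate booking/checkout site -->
<!-- The UTM parameters let GA report which site, placement and product sent the visitor -->
<a href="https://booking.example.com/checkout?product=blue-widget&utm_source=product-site&utm_medium=book-button&utm_campaign=blue-widget">Book</a>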
Hope it helps. Cheers, Martin