Pages I don't want customers to see
-
Hi,
I have a website, and part of it is the admin pages where my customers can log in and modify their pages (change pictures, change text, etc.).
When I run SEO tools to check my site, I get errors or warnings about these pages (duplicate content, missing meta descriptions, etc.).
1. Does this affect my SEO rank?
2. Can I mark them as pages that don't need to be ranked or checked by Google?
3. Is rel="nofollow" part of the solution?
Thank you
i.
-
Thank you all for your answers!!!
-
Yes, they could affect your rankings, especially since you're getting errors and warnings from them. Even if you don't want them indexed, depending on which kind of errors you're getting, it would probably be a good idea to fix them up, as they will still be part of your site.
Also, if you rely on robots.txt alone, those pages could get indexed anyway (e.g. if someone else links to them), and blocking them that way can also trap ranking signals on pages you've hidden from crawlers.
About the links: you'd probably be better off using 'noindex' instead of 'nofollow' to make sure the pages aren't indexed. In this case, I'd say the best choice is a 'noindex, follow' meta tag on those pages, so they stay out of the index without draining link equity from your site.
That said, if the content is sensitive (as you say, customer data), it would be better to password-protect it. That way you'd be blocking not only search engine bots but malicious robots, spyware, and uninvited users as well.
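To make that concrete, here's a minimal sketch of the meta tag approach; it goes in the <head> of each customer admin page:

```html
<!-- In the <head> of each customer admin page. "noindex, follow" asks
     search engines to keep the page out of the index while still
     following its links, so link equity isn't lost. -->
<meta name="robots" content="noindex, follow">
```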
-
Hi iivgi,
If it's marked as duplicate content, it can definitely affect your SEO rank. You can add a directive in robots.txt to disallow crawling of those pages. The problem with that, though, is that a page can still be indexed if there is an external link to it. The way to prevent that is to use a robots meta tag such as <meta name="robots" content="noindex"> in the <head> section of those pages you don't want indexed. The pages may still get crawled, but they shouldn't show up in the SERPs.
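And for completeness, a minimal robots.txt sketch for the crawl-blocking option (the /admin/ path here is hypothetical; use whatever path your customer pages actually live under):

```
# robots.txt at the site root ("/admin/" is a placeholder path)
User-agent: *
Disallow: /admin/
```

One caveat: if a page is disallowed in robots.txt, crawlers never fetch it and so never see its noindex meta tag. Pick one mechanism per page rather than stacking both.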
-
Thank you for the quick response
Do these errors/warnings affect my SEO rank?
i.
-
Depending on what platform you are using, you'll want to disallow these pages in your robots.txt file. Alternatively, WordPress and other popular CMSs allow you to mark a page as noindex using a plugin.
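If the site happens to run WordPress and you'd rather not add a plugin, a small sketch like the following does the same job. This assumes WordPress 5.7+ (which introduced the wp_robots filter); the 'customer-admin' page slug is hypothetical:

```php
<?php
// In the active theme's functions.php (or a tiny custom plugin).
add_filter( 'wp_robots', function ( $robots ) {
    // Hypothetical slug -- match this to your customer admin page(s).
    if ( is_page( 'customer-admin' ) ) {
        $robots['noindex'] = true; // keep the page out of the index
        $robots['follow']  = true; // but still let crawlers follow links
    }
    return $robots;
} );
```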
Related Questions
-
Combining products - edit existing product page or 301 redirect to new page?
We want to combine existing products - e.g. 'hand lotion' and 'body lotion' will become 'hand & body lotion'. As such, we'll need to combine the two product pages into one. What would be the best route to take in terms of SEO to do this? My initial reaction is to create a new product page and then 301 or 302 redirect the old products to the new product page depending on if the change is permanent or temporary. Would you agree? Or am I missing something?
On-Page Optimization | SwankyApple1 -
SEO for an Italian customer
Hi all, should I include 300 characters in all my customers' page meta descriptions? Some colleagues told me about this but I'm not really sure. Thanks in advance. Marco
On-Page Optimization | BestSEOItaly3 -
When making content pages for a specific page, should you index them straight away in GSC or let Google crawl them naturally?
On-Page Optimization | Jacksons_Fencing0 -
Google Search Console issue: "This is how Googlebot saw the page" showing part of page being covered up
Hi everyone! Kind of a weird question here, but I'll ask and see if anyone else has seen this: in Google Search Console, when I do a fetch and render request for a specific site, the fetch and blocked resources all look A-OK. However, in the render there's a large grey box (the background of the navigation) that covers up a significant amount of what is on the page. Attaching a screenshot. You can see the text start peeking out below (had to trim for confidentiality reasons), but behind that block of grey IS text, and text that Googlebot apparently does see and can crawl in the fetch. My question: is this an issue? Should I be concerned about this visual look? I've never experienced an issue like this. I will say, I'm trying to make a play at a featured snippet and can't seem to get Google to display this page's information, despite it being the first result and the query showing a featured snippet from the #4 result. I know a snippet isn't guaranteed for the #1 result, but I wonder if this has anything to do with why it isn't showing one.
On-Page Optimization | ChristianMKG0 -
Optimization for pages with lists of data
I am looking for some ideas on best practices for pages that contain lists similar to this page: http://www.backcountrysecrets.com/outdoor-sport/15/places-to-swim-and-swimming-holes.aspx Is it better to break the list up into separate pages of 25 listings or keep everything on the same page?
On-Page Optimization | kadesmith0 -
On-page: Over-optimized images?
Hello guys. I have a small question about on-page optimization for images. What I have: a good title tag / good URL structure, good content (NOT keyword-stuffed; it's real content, for real people), and images/galleries uploaded to a folder named the same as the article. For example: Great tips for bloggers [article name], great-tips-for-bloggers [folder name]. So my question is: will Google penalize me for these "too good" image paths, article-related image filenames with a mask like [gtips-img01], and all images having titles / alt tags? Thank you guys.
On-Page Optimization | infoo130 -
Avoiding "Duplicate Page Title" and "Duplicate Page Content" - Best Practices?
We have a website with a searchable database of recipes. You can search the database using an online form with dropdown options for: Course (starter, main, salad, etc)
On-Page Optimization | | smaavie
Cooking Method (fry, bake, boil, steam, etc)
Preparation Time (Under 30 min, 30min to 1 hour, Over 1 hour) Here are some examples of how URLs may look when searching for a recipe: find-a-recipe.php?course=starter
find-a-recipe.php?course=main&preperation-time=30min+to+1+hour
find-a-recipe.php?cooking-method=fry&preperation-time=over+1+hour There is also pagination of search results, so the URL could also have the variable "start", e.g. find-a-recipe.php?course=salad&start=30 There can be any combination of these variables, meaning there are hundreds of possible search results URL variations. This all works well on the site, however it gives multiple "Duplicate Page Title" and "Duplicate Page Content" errors when crawled by SEOmoz. I've seached online and found several possible solutions for this, such as: Setting canonical tag Adding these URL variables to Google Webmasters to tell Google to ignore them Change the Title tag in the head dynamically based on what URL variables are present However I am not sure which of these would be best. As far as I can tell the canonical tag should be used when you have the same page available at two seperate URLs, but this isn't the case here as the search results are always different. Adding these URL variables to Google webmasters won't fix the problem in other search engines, and will presumably continue to get these errors in our SEOmoz crawl reports. Changing the title tag each time can lead to very long title tags, and it doesn't address the problem of duplicate page content. I had hoped there would be a standard solution for problems like this, as I imagine others will have come across this before, but I cannot find the ideal solution. Any help would be much appreciated. Kind Regards5
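Since this last question comes up a lot, here is a one-line sketch of the canonical-tag option it mentions; the href is hypothetical and would point at whichever version of the search page you treat as the primary one:

```html
<!-- In the <head> of each filtered search-results page, pointing
     crawlers at one preferred URL (the href below is a placeholder). -->
<link rel="canonical" href="https://www.example.com/find-a-recipe.php">
```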