Login webpage blocked by robots
-
Hi, the SEOmoz crawl diagnostics show that this page:
www.tarifakitesurfcamp.com/wp-login.php is blocked (noindex, nofollow).
Is there any problem with that?
-
thanks!
-
Unless you have information relevant to your users on the login page (i.e. unless it's for more than your own private use), it's probably a good idea not to index it!
-
Nope, that's perfectly fine, since that's your login page for WordPress.
If you're linking to the page from anywhere on your site (which you really shouldn't be), you could update the meta robots tag to (noindex, follow), but since it looks like the page has no links pointing to it, that shouldn't be necessary.
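For reference, this is roughly what the tag on wp-login.php looks like, along with the alternative mentioned above (a sketch; the exact markup WordPress outputs varies by version):

    <!-- What wp-login.php carries now: keep the page out of the index
         and tell crawlers not to follow the links on it -->
    <meta name="robots" content="noindex, nofollow">

    <!-- Alternative if the page were linked from elsewhere on the site:
         still kept out of the index, but crawlers may follow its links -->
    <meta name="robots" content="noindex, follow">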
Related Questions
-
Are there detrimental effects of having multiple robots meta tags?
Hi All, I came across some pages on our site that have multiple robots meta tags, but they have the same directives. Two are identical, while one is for Google only. I know there aren't any real benefits from having it set up this way, but are there any detrimental effects, such as slowing down the bots crawling these pages? <meta name="googlebot" content="index, follow, noodp"/> Thanks!
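For illustration, the setup described might look something like this on the page (a hypothetical reconstruction; the exact tags on the site may differ):

    <!-- Two identical generic tags: redundant, but crawlers just read
         the same directives twice -->
    <meta name="robots" content="index, follow, noodp">
    <meta name="robots" content="index, follow, noodp">

    <!-- Googlebot-specific tag carrying the same directives -->
    <meta name="googlebot" content="index, follow, noodp">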
On-Page Optimization | STP_SEO
-
What is the best way to block http://www.site.com/members/...
How do I block http://www.site.com/members/....name/activity/3202 and many more like it from getting spidered and showing up as duplicates in Moz? Regards, Tai
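A common approach (a sketch, assuming the activity pages all follow the /members/{name}/activity/{id} pattern described above) is a wildcard rule in robots.txt; the * wildcard isn't part of the original robots.txt standard, but Googlebot, Bingbot, and most modern crawlers support it:

    # Keep crawlers out of all member activity pages
    User-agent: *
    Disallow: /members/*/activity/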
On-Page Optimization | Taiger
-
Two Robots.txt files
Hi there, can somebody please help me? One of my client's sites has two robots.txt files (please see below). One file blocks a few folders, and the other one blocks all search engines completely. Our tech team tells me that, for technical reasons, they are using the second one, which is placed inside the server where search engines are unable to see it.
www.example.co.uk/robots.txt - blocks a few folders
www.example.co.uk/Robots.txt - blocks all search engines
I hope someone can give me the help I need on this one. Thanks in advance! Cheers,
Satla
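For context, crawlers only ever request the lowercase path /robots.txt, so on a case-sensitive server the setup described would look roughly like this (a sketch; the folder names are placeholders):

    # www.example.co.uk/robots.txt - the file search engines actually fetch
    User-agent: *
    Disallow: /folder-one/
    Disallow: /folder-two/

    # www.example.co.uk/Robots.txt - never requested by search engines,
    # which only look for the lowercase filename at the root
    User-agent: *
    Disallow: /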
On-Page Optimization | TrulyTravel
-
Blocking Subdomain from Google Crawl and Index
Hey everybody, how is it going? I have a simple question that I need answered. I have a main domain, let's call it domain.com. Our company will soon launch a series of promotions for which we will use CNAME subdomains, e.g. try.domain.com or buy.domain.com. They will serve a commercial objective, nothing more. What is the best way to block such subdomains from being indexed in Google and from counting as subdomains of domain.com? Robots.txt, nofollow, etc.? Hope to hear from you, Best Regards,
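One common pattern (a sketch, not the only option): serve an X-Robots-Tag response header on each promo subdomain. Unlike a robots.txt Disallow, which blocks crawling but doesn't guarantee the URLs stay out of the index, a noindex directive the crawler can actually fetch will deindex the pages. On Apache with mod_headers enabled, each subdomain's vhost could carry:

    # Hypothetical vhost config for try.domain.com: every response tells
    # crawlers not to index the page or follow its links
    Header set X-Robots-Tag "noindex, nofollow"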
On-Page Optimization | JesusD
-
Blocking Pages on an E-Commerce Site
Hello, I am working on a site with thousands of product pages, some of which have no inventory in them. Should I be blocking these pages in order to reduce bounce rate? How could I manage so many pages efficiently? It would take weeks to comb through the pages to determine which have inventory and which do not. They are also time-sensitive, as they are live events, so dates are always changing. Thanks!
On-Page Optimization | TP_Marketing
-
How To Prevent Crawling Shopping Carts, Wishlists, Login Pages
What's the best way to prevent engines from crawling your website's shopping cart, wishlist, login pages, etc.? Obviously you'd have them in robots.txt, but is there any other form of action that should be taken?
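As a starting point, the robots.txt side of this might look like the following (the paths are placeholders; adjust them to the site's actual URLs). Pairing the rules with a noindex meta tag on the pages themselves also covers URLs that get linked from elsewhere:

    # Keep crawlers out of transactional pages
    User-agent: *
    Disallow: /cart/
    Disallow: /wishlist/
    Disallow: /login/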
On-Page Optimization | Romancing
-
Does Google respect User-agent rules in robots.txt?
We want to use an inline linking tool (LinkSmart) to cross-link between a few key content types on our online news site. LinkSmart uses a bot to establish the linking. The issue: there are millions of pages on our site that we don't want LinkSmart to spider and process for cross-linking. LinkSmart suggested setting a noindex tag on the pages we don't want them to process, and that we target the rule to their specific user agent. I have concerns. We don't want to inadvertently block search engine access to those millions of pages. I've seen Googlebot ignore nofollow rules set at the page level. Does it ever arbitrarily obey rules that it's been directed to ignore? Can you quantify the level of risk in setting user-agent-specific nofollow tags on pages we want search engines to crawl, but that we want LinkSmart to ignore?
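For reference, a user-agent-specific setup would look roughly like this (the bot token below is a placeholder; LinkSmart's documentation would give the actual name to target):

    <!-- Generic crawlers such as Googlebot and Bingbot read this one -->
    <meta name="robots" content="index, follow">

    <!-- Placeholder token for LinkSmart's bot: only a crawler that
         identifies itself by this name should obey the directive -->
    <meta name="linksmartbot" content="noindex, nofollow">

Because the restrictive directive is bound to a specific bot name, a well-behaved Googlebot should ignore it; the practical risk lies in getting the token wrong or misconfiguring the generic robots tag.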
On-Page Optimization | lzhao
-
Photo Gallery and Robots.txt
Hey everyone, SEOmoz is telling us that there are too many on-page links on the following page: http://www.surfcampinportugal.com/photos-of-the-camp/ Should we stop it from being indexed via robots.txt? Best regards and thanks in advance... Simon
On-Page Optimization | Rapturecamps