Robots.txt advice
-
Hey Guys,
Have you ever seen directives like these in a robots.txt file? I have never seen a Noindex rule in a robots.txt file before - have you?
user-agent: AhrefsBot
User-agent: trovitBot
User-agent: Nutch
User-agent: Baiduspider
Disallow: /
User-agent: *
Disallow: /WebServices/
Disallow: /*?notfound=
Disallow: /?list=
Noindex: /?*list=
Noindex: /local/
Disallow: /local/
Noindex: /handle/
Disallow: /handle/
Noindex: /Handle/
Disallow: /Handle/
Noindex: /localsites/
Disallow: /localsites/
Noindex: /search/
Disallow: /search/
Noindex: /Search/
Disallow: /Search/
Disallow: ?
Any pointers?
-
Never seen this, and I doubt it's of any use - Noindex isn't among any search engine's recommended robots.txt statements. I don't think it would have any impact on what search engine robots look at, since it's not a statement in the robots.txt documentation.
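If you want to see how a spec-compliant parser treats those lines, here's a quick sketch using Python's standard-library `urllib.robotparser` (the example.com URLs and the trimmed rule set are just illustrative): it honours `Disallow` and silently drops the unrecognised `Noindex` lines.

```python
from urllib.robotparser import RobotFileParser

# A trimmed version of the file from the question.
rules = """
User-agent: *
Disallow: /local/
Noindex: /handle/
""".splitlines()

parser = RobotFileParser()
parser.parse(rules)  # parse() ignores any directive it does not recognise

# The Disallow rule is honoured...
print(parser.can_fetch("*", "https://example.com/local/page"))   # False
# ...but the Noindex line is silently dropped, so this stays fetchable.
print(parser.can_fetch("*", "https://example.com/handle/page"))  # True
```

How a given search engine treats the line is up to that engine, of course - this only shows what the baseline robots exclusion protocol defines.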
-
The best I could find was:
Unlike disallowed pages, noindexed pages don’t end up in the index and therefore won’t show in search results. Combine both in robots.txt to optimise your crawl efficiency: the noindex will stop the page showing in search results, and the disallow will stop it being crawled
From: https://www.deepcrawl.com/blog/best-practice/robots-txt-noindex-the-best-kept-secret-in-seo/
Related Questions
-
Scary bug in search console: All our pages reported as being blocked by robots.txt after https migration
We just migrated to https and two days ago created a new property in Search Console for the https domain. Webmaster Tools for the https domain now shows, for every page in our sitemap, the warning: "Sitemap contains urls which are blocked by robots.txt." The Search Console dashboard also shows a red triangle warning that our root domain is blocked by robots.txt.
1) When I test the URLs in the Search Console robots.txt testing tool, everything looks fine.
2) When I fetch as Google and render the page, it renders and indexes without problem (it would not if it were really blocked by robots.txt).
3) We temporarily emptied the robots.txt completely, submitted it in Search Console, and uploaded the sitemap again - same warnings, even though no robots.txt was online.
4) We ran a Screaming Frog crawl on the whole website and it indicates that no page is blocked by robots.txt.
5) We carefully reviewed the whole robots.txt and it does not contain any row that blocks relevant content on our site or our root domain (the same robots.txt was online for the last decade in the http version without problems).
6) In Bing Webmaster Tools I could upload the sitemap and so far no error is reported.
7) We resubmitted the sitemaps and have the same issue.
8) I already see our root domain with https in the Google SERP.
The site is https://www.languagecourse.net. Since the site has significant traffic, if Google really interprets our site as blocked by robots.txt for any reason, we will be in serious trouble.
Intermediate & Advanced SEO | lcourse
This is really scary. Even if it is just a bug in Search Console and does not affect crawling of the site, it would be great if someone from Google could look into the reason for it, since for a site owner this can really raise cortisol to unhealthy levels. Has anybody ever experienced the same problem? Does anybody have an idea where we could report this issue?
-
New g(TLD) advice needed
Hey all, I'm a bit confused by conflicting advice and need some direct input. We're quite experienced in SEO, but that doesn't mean we can't get better 🙂 I manage a very old, well-established, very generic TLD portal that ranks very highly for MANY keywords (if you know our domain, I'd appreciate not naming it here - 145 rankings in positions 1-3, 342 in positions 1-20), but there are also many topics we want to improve upon. Let's say, for example, I own gold.com, but I've failed to rank for 'gold events', and I acquired gold.events. What is the thinking on using some of the new gTLDs versus the original .com? In the example: events.gold.com, gold.events, or gold.com/events/? I really can't find a consensus on which would be most effective for SEO purposes. In a more general aspect of the same question, we own MANY "gold.newg(TLD)" domains and are conflicted as to the best use of all of them. All advice greatly appreciated. Nat
Intermediate & Advanced SEO | WorldWideWebLabs
-
Penguin 2.0 update, ranking dropped. Advice needed!
Hello! After another Penguin 2.0 update, the website I've been working on dropped in rankings; some keywords that ranked #1 are now on the second and third page - you can see a screenshot here: http://screencast.com/t/MramoXgTr 95% of my competitors were not even affected by this update; most of them don't even optimize their websites for SEO, rather they use paid directories. The first thing I did was analyze my backlink profile using OSE, and to my surprise I found a lot of low-quality domains pointing to my pages with a keyword in the anchor text - a lot of blog commenting and low-quality article directories. Since I don't have control over these links and I can't remove them, I used the Disavow tool to do the job. For the past 3 months I've been doing a lot of high-quality link building, such as press releases once every 2 months, Squidoo lenses and HubPages posts (3 a week for each keyword), and a YouTube video - in fact my YouTube video still ranks #3 for a highly competitive search. I was also involved in social media, posting tweets every week and Facebook posts. I really hope that someone can help me here with good advice on getting my rankings back. Here's my website; let me know what you think about it. Thank you!
Intermediate & Advanced SEO | KentR
-
A Beginner. Learning Lots. Need Advice.
About a month ago I took over SEO for a small RV company here in Florida. My responsibilities include SEO, AdWords, video production, inventory updates, newsletters, social media, etc. I feel a little overwhelmed, but we are a small company and we probably won't be hiring more people. I'm weak in some areas and strong in others, and in SEO I'm weak. My question is: within SEO, where should I be focusing most? I break my SEO responsibilities into a few areas: keyword research (a lot of competition), back-linking, and social media. I know there are more, but where should my main focus be, and how should I go about it? The website is http://www.floridaoutdoorsrv.com. I would like someone to give me an idea of where I am and what I should do next. SEOmoz has given me a lot of errors and it's a little overwhelming. Thanks in advance for the advice!
Intermediate & Advanced SEO | floridaoutdoorsrv
-
Need advice on 301 domain redirection
Hello friends, We have two sites, spiderman-example.com & avengers-example.com, which sell the same products listed under similar categories. We are about to shut down "avengers-example.com" because we want to concentrate on building up a single brand, spiderman-example.com. "Spiderman-example" has comparatively more visitors and conversions than "avengers-example" - roughly 90% more traffic and conversions. Avengers-example has a small fraction of loyal customers who still search for the brand name, and there are a handful of keywords it ranks for on its own. So is it advisable to redirect avengers-example to spiderman-example using a 301 redirect? Will this help us gain any link juice from avengers-example? If so, how can we most effectively redirect between the two domains with minimal loss of page authority and link juice to enhance "spiderman-example"? Off beat: the names "Avengers" and "Spiderman" were just used as examples; the actual site names have no relation to the ones mentioned above.
Intermediate & Advanced SEO | semvibe
-
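On the 301 question above: a domain-wide 301 is the standard way to consolidate two brands, and redirecting each URL to its equivalent on the surviving domain preserves more link equity than pointing everything at the homepage. A minimal sketch of the server-level redirect, assuming the old site is served by nginx (the domain names are the placeholders from the question):

```nginx
# Permanently redirect every URL on the old domain to the same
# path on the surviving brand, preserving path and query string.
server {
    listen 80;
    server_name avengers-example.com www.avengers-example.com;
    return 301 https://spiderman-example.com$request_uri;
}
```

If a page on the old domain has no equivalent, redirect it to the closest matching category rather than the homepage; bulk "everything to /" redirects are often treated as soft 404s.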
Robots.txt: Can you put a /* wildcard in the middle of a URL?
We have noticed that Google is indexing the language/country directory versions of directories we have disallowed in our robots.txt. For example: Disallow: /images/ is blocked just fine However, once you add our /en/uk/ directory in front of it, there are dozens of pages indexed. The question is: Can I put a wildcard in the middle of the string, ex. /en/*/images/, or do I need to list out every single country for every language in the robots file. Anyone know of any workarounds?
Intermediate & Advanced SEO | IHSwebsite
-
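On the wildcard question above: Google's robots.txt documentation does allow `*` in the middle of a path, where it matches any run of characters, so a single rule can cover every language/country prefix. A sketch (worth verifying in Search Console's robots.txt tester before relying on it, since wildcard support is a Google/Bing extension rather than part of the original protocol):

```
User-agent: *
# Blocks /images/ at the root...
Disallow: /images/
# ...and /images/ under any prefix, e.g. /en/uk/images/,
# because * matches any sequence of characters (including slashes).
Disallow: /*/images/
```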
Blocking Pages Via Robots, Can Images On Those Pages Be Included In Image Search
Hi! I have pages within my forum where visitors can upload photos. When they upload photos they provide a simple statement about the photo but no real information about the image - definitely not enough for the page to be deemed worthy of being indexed. The industry, however, is one that really leans on images, and having the images in Google Image search is important to us. The URL structure is like this: domain.com/community/photos/~username~/picture111111.aspx I wish to block the whole folder from Googlebot to prevent these low-quality pages from being added to Google's main SERP results. This would be something like this: User-agent: googlebot Disallow: /community/photos/ Can I disallow Googlebot specifically, rather than just using User-agent: *, which would then allow googlebot-image to pick up the photos? I plan on configuring a way to add meaningful alt attributes and image names to assist in visibility, but the actual act of blocking the pages and getting the images picked up... is this possible? Thanks! Leona
Intermediate & Advanced SEO | HD_Leona
-
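On the question above: in Google's robots.txt handling, each crawler obeys only the most specific user-agent group that matches it, and the image crawler identifies itself as Googlebot-Image. So a sketch along these lines (verify in Search Console before deploying) would block the HTML pages for web search while leaving the images fetchable:

```
# Block Google's web crawler from the photo pages...
User-agent: Googlebot
Disallow: /community/photos/

# ...while the image crawler matches this more specific group
# and ignores the Googlebot group above.
User-agent: Googlebot-Image
Allow: /community/photos/
```

One caveat: images are normally discovered via the pages that embed them, so if the pages themselves are blocked, an image sitemap listing the image URLs can help Googlebot-Image find them.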
Robots.txt 404 problem
I've just set up a WordPress site with a hosting company which only allows you to install your WordPress site in http://www.myurl.com/folder as opposed to the root folder. I now have the problem that the robots.txt file only works at http://www.myurl.com/folder/robots.txt. Of course Google is looking for it at http://www.myurl.com/robots.txt and getting a 404 error. How can I get around this? Is there a way to tell Google in Webmaster Tools to use a different path to locate it? I'm stumped!
Intermediate & Advanced SEO | SamCUK
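On the 404 question above: crawlers only ever request robots.txt from the domain root, and there is no Webmaster Tools setting to point Google at a different path. If the host runs Apache with mod_rewrite (an assumption - it's typical for shared WordPress hosting; "folder" stands in for the actual directory name), one workaround is an internal rewrite in a .htaccess file at the web root:

```apache
# .htaccess in the web root: serve /folder/robots.txt whenever
# a crawler requests /robots.txt (internal rewrite, not a redirect).
RewriteEngine On
RewriteRule ^robots\.txt$ /folder/robots.txt [L]
```

Simpler still, if you have FTP access to the root, just upload a copy of the robots.txt file there directly.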