Can you use more than one meta robots tag per page?
-
If you want to add both "noindex, follow" and "noodp", should you add two meta robots tags, or is there a way to combine both into one?
-
You can combine them all into one line.
The SEOmoz Beginner's Guide to SEO has some info on it: http://www.seomoz.org/beginners-guide-to-seo/search-engine-tools-and-services
For what it's worth, using a meta robots tag is actually better than simply blocking a page in robots.txt, largely because a robot will still see the page's URL if it is linked to from another source, and those links can pass PageRank to the page, but a robots.txt-blocked page will never pass any forward.
-Phil
-
You can use:
<meta name="robots" content="noindex, follow, noodp" />
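As an aside, the same combined directives can also be delivered as an HTTP response header, which is handy for non-HTML files such as PDFs that have no place for a meta tag. A sketch of the equivalent header (the X-Robots-Tag header is a documented feature; how you set it depends on your server):

```http
X-Robots-Tag: noindex, follow, noodp
```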
Related Questions
-
Can anyone please explain the real difference between backlinks, 301 links, and redirect links? Which one is better for ranking a website? I am looking for help for one of my websites (vacuum cleaners).
Intermediate & Advanced SEO | hshajjajsjsj3880
Importance (or lack of) Meta keywords tags and Tags in Drupal
I'm wondering whether I should put any effort into making meta keywords tags for my pages or normal tags (they're separate in Drupal), since apparently the first are not considered by most search engines, while I'm not sure about normal tags. Obviously search engines have to determine part of a page's value from its content, and thus consider keywords/tags to some extent. What's your opinion on that? Thank you.
Intermediate & Advanced SEO | Optimal_Strategies1
Can noindexed pages accrue page authority?
My company's site has a large set of pages (tens of thousands) that have very thin or no content. They typically target a single low-competition keyword (and typically rank very well), but the pages have a very high bounce rate and are definitely hurting our domain's overall rankings via Panda (quality ranking). I'm planning on recommending we noindex these pages temporarily, and reindex each page as resources allow us to fill in content. My question is whether an individual page will be able to accrue any page authority for its target term while noindexed. We DO want to rank for all those terms, just not until we have the content to back it up. However, we're in a pretty competitive space, up against domains that have been around a lot longer and have higher domain authorities. Like I said, these pages rank well right now, even with thin content. The worry is that if we noindex them while we slowly build out content, will our competitors get the edge on those terms (with their subpar but continually available content)? Do you think Google will give us any credit for having had the page all along, just not always indexed?
Intermediate & Advanced SEO | THandorf0
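Whichever way a decision like that goes, the first step is usually auditing which of those tens of thousands of pages actually carry the noindex directive. A minimal sketch using only Python's standard library (the function names are my own, not from any Moz tool):

```python
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collect the content of every <meta name="robots"> tag on a page."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            a = dict(attrs)
            if a.get("name", "").lower() == "robots":
                # "noindex, follow" -> ["noindex", "follow"]
                self.directives += [d.strip().lower()
                                    for d in (a.get("content") or "").split(",")]

def is_noindexed(html):
    """True if the page's meta robots directives include noindex."""
    parser = RobotsMetaParser()
    parser.feed(html)
    return "noindex" in parser.directives

page = '<html><head><meta name="robots" content="noindex, follow"></head></html>'
print(is_noindexed(page))  # True
```

Run against a crawl of the thin pages, this gives a quick inventory of which URLs are currently excluded.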
Over 30,000 pages but only 100 get traffic... can I kill the others?
I have a website with over 30,000 pages, but only around 100 are getting traffic from Google/being used by the company. How safe is it for me to kill the other pages? Usually I'd do rel=canonical or 301s to salvage as much link juice as I can from them, but at 30,000 pages we just don't have any place to 301 them that makes sense, and rel=canonical to irrelevant pages seems... wrong? So my hope was to just kill the pages, reuse their content when needed, and pretty much start fresh. Let me know your thoughts. Thanks.
Intermediate & Advanced SEO | jacob.young.cricut0
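On the "kill the pages" option above: when a page really has no sensible redirect target, returning a 410 Gone tells crawlers the removal is deliberate rather than an error. A minimal sketch, assuming an Apache server with mod_alias (the path is a made-up example):

```apache
# Serve 410 Gone for a retired URL (mod_alias)
Redirect gone /old-thin-page.html
```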
Duplicate title tags due to lightbox use
I am looking at a site and am pulling up duplicate title tags because of its lightbox use. The site has a page, http://www.website.com/page, and then a duplicate of that page, http://www.website.com/page?width=500&height=600, on a huge number of pages (the site uses Drupal). What would be the best/cleanest solution?
Intermediate & Advanced SEO | McTaggart0
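The usual cleanest fix for the lightbox case above is a rel=canonical on each variant pointing at the parameter-free URL. A small sketch of deriving that canonical target, using only Python's standard library (the function name is my own):

```python
from urllib.parse import urlsplit, urlunsplit

def canonical_url(url):
    """Strip the query string and fragment so lightbox variants like
    ?width=500&height=600 collapse back to the base page URL."""
    parts = urlsplit(url)
    return urlunsplit((parts.scheme, parts.netloc, parts.path, "", ""))

print(canonical_url("http://www.website.com/page?width=500&height=600"))
# http://www.website.com/page
```

In Drupal this would typically be done in the template layer, emitting `<link rel="canonical" href="...">` with the cleaned URL.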
Block in robots.txt instead of using canonical?
When I use a canonical tag for pages that are variations of the same page, it basically means that I don't want Google to index this page. But at the same time, spiders will go ahead and crawl the page. Isn't this a waste of my crawl budget? Wouldn't it be better to just disallow the page in robots.txt and let Google focus on crawling the pages that I do want indexed? In other words, why should I ever use rel=canonical as opposed to simply disallowing in robots.txt?
Intermediate & Advanced SEO | YairSpolter0
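For reference on the trade-off above: a rel=canonical page is still crawled, so its links can be followed and its signals consolidated onto the preferred URL, while a robots.txt-disallowed URL is never fetched at all. The canonical version looks like this (example.com is a placeholder):

```html
<!-- On each parameter/variant page, pointing at the preferred URL: -->
<link rel="canonical" href="https://www.example.com/page" />
```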
WordPress redesign: using posts as pages?
Starting a redesign for an attorney who is currently using WordPress with an old framework that is no longer supported, so I'm going to install a new WP and start from scratch. The site consists of about 30 static pages (practice areas, attorney profiles, etc.) and they write about 5 blog posts per month. I've always differentiated between posts and pages for WP sites I've done in the past, but this time around I thought it might be cleaner (fewer files, and easier for their webmaster to make routine edits) if I just brought over the static pages as posts. However, the recent webinar on the Yoast SEO plugin mentioned using the month/day in the permalink structure for posts to avoid duplicate content issues. That would go against how I was thinking of setting it up, because I would have just generated the URL off the page title and made a separate category for "pages". Just wondering if anyone's used posts as pages before. While this seems like it would make things easier for the webmaster, I'm not sure it maximizes potential for SEO. Thanks.
Intermediate & Advanced SEO | c2g0
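For context on the permalink point above, the structures being weighed are set under Settings → Permalinks in WordPress; a sketch of the two options using WordPress's standard structure tags:

```text
# Date-based, as the webinar suggested for posts:
/%year%/%monthnum%/%postname%/

# Title-only, as planned for the page-like posts:
/%postname%/
```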
Googlebot Can't Access My Sites After I Repair My Robots File
Hello Mozzers, A colleague and I have been collectively managing about 12 brands for the past several months, and we have recently received a number of messages in the sites' Webmaster Tools telling us that 'Googlebot was not able to access our site due to some errors with our robots.txt file'. My colleague and I, in turn, created new robots.txt files with the intention of preventing the spider from crawling our 'cgi-bin' directory, as follows: User-agent: * Disallow: /cgi-bin/ After creating the robots.txt and manually re-submitting it in Webmaster Tools (and receiving the green checkbox), I received the same message about Googlebot not being able to access the site, the only difference being that this time it was for a different site that I manage. I repeated the process, and everything aesthetically looked correct; however, I continued receiving these messages for each of the other sites I manage on a daily basis for roughly a 10-day period. Do any of you know why I may be receiving this error? Is it not possible for me to block Googlebot from crawling the 'cgi-bin'? Any and all advice/insight is very much welcome. I hope I'm being descriptive enough!
Intermediate & Advanced SEO | NiallSmith1
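The rules quoted in that question can be sanity-checked locally before resubmitting to Webmaster Tools. A quick sketch with Python's standard-library robots.txt parser (example.com is a placeholder domain); it confirms the rules block only /cgi-bin/, not the rest of the site:

```python
import urllib.robotparser

# The robots.txt rules described in the question
RULES = [
    "User-agent: *",
    "Disallow: /cgi-bin/",
]

rp = urllib.robotparser.RobotFileParser()
rp.parse(RULES)

# /cgi-bin/ is blocked for every crawler; other paths remain fetchable
print(rp.can_fetch("Googlebot", "http://www.example.com/cgi-bin/script.cgi"))  # False
print(rp.can_fetch("Googlebot", "http://www.example.com/about/"))              # True
```

If a local check like this passes but Webmaster Tools still reports errors, the usual suspects are the server intermittently failing to serve /robots.txt (5xx or timeout) rather than the file's contents.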