Block Baidu crawler?
-
Hello!
One of our websites receives a large amount of traffic from the Baidu crawler. We have no Chinese content and do no business with China, since our market is the UK.
Is it a good idea to block the Baidu crawler in robots.txt, or could it have any adverse effects on our site's SEO?
What do you suggest?
-
I'm also trying to get this done, though I'm not sure if it's doable on Volusion (don't use them).
Yandex actually crawls more than Baidu for me, and neither benefits me at all (which sucks when you pay for the bandwidth).
-
Thanks for that, I have just looked it up. I didn't realise this was such a common problem.
-
Hi
Further to Ally's answer, in my experience Baidu tends to ignore robots.txt, so just do it on the server side.
S
-
Thanks, Ally, for your answer. I will now block Baidu.
-
Hi Stefan,
You can block the Baidu crawler in robots.txt.
There should be no adverse effect on your site. As China is not an area you are targeting and the traffic has no long-term benefit to your business, blocking the crawler will simply mean your server has less load to deal with from the unnecessary traffic you have been receiving.
You can block the spiders in the following ways:
- Robots.txt (below is code for Baidu)
User-agent: Baiduspider
User-agent: Baiduspider-video
User-agent: Baiduspider-image
Disallow: /
- Blocking spiders via the Apache configuration file, httpd.conf
See the article below for more details on this method:
http://searchenginewatch.com/article/2067357/Bye-bye-Crawler-Blocking-the-Parasites
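For completeness, here is a minimal server-side sketch of that approach for Apache's httpd.conf (or .htaccess), assuming mod_rewrite is enabled. The user-agent match is a substring test, so it also catches Baiduspider-video and Baiduspider-image, but verify the exact strings against your own access logs:
# Sketch only: return 403 Forbidden to any request whose
# user-agent string contains "Baiduspider".
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} Baiduspider [NC]
RewriteRule .* - [F,L]
</IfModule>
Unlike robots.txt, which a crawler can choose to ignore, this refuses the requests outright, so the bandwidth saving is immediate.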
I hope this helps,
Ally
Related Questions
-
Blocked URL parameters can still be crawled and indexed by Google?
Hi guys, I have two questions, and one might be a dumb question, but here it goes. I just want to be sure that I understand: if I tell Webmaster Tools to ignore a URL parameter, will Google still index and rank my URL? Is it OK if I don't append the brand filter in the URL structure? Will I still rank for that brand? Thanks. PS: OK, three questions :)...
-
Why is the crawler saying I have 9 Duplicate Page Titles?
Hi, I received my weekly web crawl and it is saying this:
| 4 | Duplicate Page Content |
| 22 | Missing Meta Description Tag |
| 9 | Duplicate Page Title |
| 1 | Title Element Too Long (> 70 Characters) |
| 1 | Title Element Too Short |
| 1 | 301 (Permanent Redirect) |
I'm new to SEO and don't know how to fix this; I don't really see how I have duplicate page content or a duplicate page title. This is my website: afrohairsolutions.co.uk. Thank you in advance.
-
Ajax Crawling | Blocked URLs Spike
http://www.zando.co.za/women/shoes/ (for example)
Hello, I'm concerned that WMT is reporting a large spike in blocked URLs; it is now reporting more blocked URLs than good URLs. Our product recommendations are generated via an Ajax call, and these autogenerated, unique URLs are rendered in the /recommendations/ folder, which sits in the root of our site: http://www.zando.co.za/recommendations/. I can't see how I can prevent Google from calling the Ajax; I can only assume that's what's happening. This is what the code typically looks like:
-
WordPress Blog Blocked by Meta Robots
Upon receiving my first crawl report from my new SEOmoz Pro account (yaay!), I've found that the WordPress blog plugged into my site hasn't been getting crawled because it is blocked by meta robots. I'm not a developer and have very little tech expertise, but a search dug up that the issue stems from the WordPress setting Settings > Privacy > "Ask search engines not to index this site" being selected. On checking the blog, "Allow search engines to index this site" was selected, so I'm unsure what else to check. My level of expertise means I'm not confident going into the back end of the site, and I don't have a tech guy on site to speak to. Has anyone else had this problem? Is it common, and will I need to consult a developer to get this fixed? Many thanks in advance for your help!
-
SEOmoz crawler giving false positives?
The SEOmoz crawler indicated a few times that my site has a duplicate home page error (http://mysite.com and www.mysite.com). I eliminated the couple of remaining internal links that pointed to http://mysite on a couple of pages (all other internal links point to http://www.mysite.com). I ran the crawl again and it said no errors this time, so I naturally thought the duplicate page error was fixed. However, this morning I got the regularly scheduled crawl report from SEOmoz saying again that I have those duplicate error pages. No changes were made to any of my site's pages between the crawls. That makes me wonder whether the crawler gives false positives at times, or was wrong a couple of days ago when it said I don't have any errors (no duplicate page error). Now I don't know what to think.
-
OK to block /js/ folder using robots.txt?
I know Matt Cutts suggests we allow bots to crawl CSS and JavaScript folders (http://www.youtube.com/watch?v=PNEipHjsEPU). But what if you have lots and lots of JS and you don't want to waste precious crawl resources? Also, as we update and improve the JavaScript on our site, we iterate the version number (?v=1.1... 1.2... 1.3... etc.), and the legacy versions show up in Google Webmaster Tools as 404s. For example:
http://www.discoverafrica.com/js/global_functions.js?v=1.1
http://www.discoverafrica.com/js/jquery.cookie.js?v=1.1
http://www.discoverafrica.com/js/global.js?v=1.2
http://www.discoverafrica.com/js/jquery.validate.min.js?v=1.1
http://www.discoverafrica.com/js/json2.js?v=1.1
Wouldn't it just be easier to prevent Googlebot from crawling the js folder altogether? Isn't that what robots.txt was made for? Just to be clear, we are NOT doing any sneaky redirects or other dodgy JavaScript hacks; we're just trying to power our content and UX elegantly with JavaScript. What do you guys say: obey Matt, or run the JavaScript gauntlet?
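If it helps frame the debate, the rule in question is a one-line robots.txt entry; a minimal sketch, assuming the /js/ folder from the examples above:
User-agent: Googlebot
Disallow: /js/
-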
Microformats to block HTML text portions of pages
I have a client that wants to use microformatting to keep a portion of their page (the disclaimer) from being read by the search engines. They want to do this because it will help with their keyword density on the rest of the page and block the "bad keywords" that come from their legally required disclaimer. We have suggested alternative methods to resolve this problem, but they do not want to implement those; they just want a POV from us explaining how this microformatting process will work. And that's where the problem is: I've never heard of this use case and can't seem to find anyone who has. I'm posting the question to the Moz community to see if anyone knows how microformats can keep copy from being crawled by the bots. Please include any links to sites that you know are using microformatting in this way. Have you implemented it and seen results? Do you know of a website that is using it now? We're looking for use cases, please!
-
Can search engines penalize my site if I block IPs from some countries?
I have spotted that some countries in South America generate lots of traffic on my site, and I don't want to sell my service there. Can I be penalized for blocking IPs from certain countries? Thanks!
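For context on the mechanics (separate from the penalty question), a minimal sketch of an IP block in Apache 2.4, where the CIDR range is purely illustrative and would be replaced with the actual allocations you want to refuse:
# Hypothetical sketch: 203.0.113.0/24 is a documentation range,
# not a real country allocation.
<RequireAll>
Require all granted
Require not ip 203.0.113.0/24
</RequireAll>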