Block Baidu crawler?
-
Hello!
One of our websites receives a large amount of traffic from the Baidu crawler. We do not have any Chinese content, nor do we do any business with China, since our market is the UK.
Is it a good idea to block the Baidu crawler in robots.txt, or could it have any adverse effects on our site's SEO?
What do you suggest?
-
I'm also trying to get this done, not sure if it's doable on Volusion (don't use them).
Yandex actually crawls more than Baidu for me, and neither benefits me at all (which stings when you pay for the bandwidth).
-
Thanks for that, I have just looked that up - I didn't realise this was such a common problem.
-
Hi
Further to Ally's answer, in my experience Baidu tends to ignore robots.txt, so just do it on the server side.
S
-
Thanks, Ally, for your answer - I will now block Baidu.
-
Hi Stefan,
You can block the Baidu crawler in robots.txt.
There should be no adverse effect on your site, as this is not an area you are targeting and it has no long-term benefit to your business. Blocking the crawler will mean your server has less load to deal with from the unnecessary traffic you have been receiving.
You can block the spiders in the following ways:
- Robots.txt (below is code for Baidu)
User-agent: Baiduspider
User-agent: Baiduspider-video
User-agent: Baiduspider-image
Disallow: /
- Blocking spiders via the Apache configuration file (httpd.conf)
See the below article for more details on this method
http://searchenginewatch.com/article/2067357/Bye-bye-Crawler-Blocking-the-Parasites
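As a rough illustration of that server-side approach, a minimal httpd.conf sketch (assuming Apache 2.4 with mod_setenvif enabled; the /var/www/html path is only a placeholder for your own document root) could look like this:
# Flag any request whose User-Agent contains "Baiduspider", then deny it
BrowserMatchNoCase "Baiduspider" bad_bot
<Directory "/var/www/html">
    <RequireAll>
        Require all granted
        Require not env bad_bot
    </RequireAll>
</Directory>
On older Apache 2.2 installs the usual equivalent is Order allow,deny with Deny from env=bad_bot inside the same Directory block. Unlike robots.txt, this returns a 403 to the crawler, so it works even if the bot ignores robots directives.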
I hope this helps,
Ally
Related Questions
-
"Url blocked by robots.txt." on my Video Sitemap
I'm getting a warning about "Url blocked by robots.txt." on my video sitemap - but just for YouTube videos? Has anyone else encountered this issue, and how did you fix it if so? Thanks, J
Technical SEO | Critical_Mass
-
Ajax Crawling | Blocked URLs Spike
http://www.zando.co.za/women/shoes/ (for example) Hello, I'm concerned that WMT is reporting a large spike in blocked URLs - now reporting more blocked URLs than good URLs. Our product recommendations get generated via an Ajax call, and these autogenerated, unique URLs are rendered in the /recommendations/ folder, which sits in the root of our site: http://www.zando.co.za/recommendations/ I can't see how I can prevent Google from calling the Ajax - I can only assume that's what's happening. This is what the code typically looks like:
Technical SEO | RocketZando
-
Blocking https from being crawled
I have an ecommerce site where https is being crawled for some pages, and I'm wondering if the solution below will fix the issue. www.example.com will be my domain. In the nav there is a login page, www.example.com/login, which redirects to https://www.example.com/login. If I just disallowed /login in the robots file, wouldn't it not follow the redirect and index that stuff? The redirect part is what I am questioning.
Technical SEO | Sean_Dawes
-
Matching C Block
Hi Guys, We have 2 sites that are in the same niche and competing for the same keywords. The sites are on separate domains - one is UK and one is .com. They have their own IPs, however both have the same C block... We have noticed that when the rankings for one site improve, the other drops... Could the C block be causing this?
Technical SEO | EwanFisher
-
How does your crawler treat ajax links?
Hello! It looks like the SEOmoz crawler (and Google) follows Ajax links. Is this normal behavior? We have implemented the canonical element and that seems to resolve most of the duplicate content issues. Anything else we can do? Example: Krom
Technical SEO | AJPro
-
How to recover after blocking all the search engine spiders?
I have the following problem - one of my clients (a Danish home improvement company) decided to block all international traffic (leaving only Scandinavian traffic), because they were getting a lot of spammers using their mail form to send e-mails. As you can guess, this led to blocking Google as well, since the servers of Google Denmark are located in the US. This led to a drop in their rankings. So my question is - what shall I do now, wait or contact Google? Any help will be appreciated, because to be honest I had never seen such a thing in action until now 😄 Best Regards
Technical SEO | GroupM