Search engines blocked by robots: crawl error reported by Moz & GWT
-
Hello everyone,
For my site I am getting Error Code 605: Page Banned by robots.txt, X-Robots-Tag HTTP Header, or Meta Robots Tag. Google Webmaster Tools is also unable to fetch my site. My site is tajsigma.com.
Can any expert help, please?
Thanks
-
When was your last crawl date in Google Webmaster Tools/Search Console? It may be that your site was crawled while there was a temporary problem with the robots.txt and hasn't been re-crawled since.
-
Yes, exactly.
That is what I am worried about. Can you please help identify the problem with my site?
Thanks
-
That's very strange. The robots.txt looks fine, but here's what I see when I search for your site on Google.
-
Headers look fine, and as you correctly said, your robots.txt and meta robots are also OK.
I have also noticed that a site: search in Google is returning pages for your site, so again that shows it is being crawled and indexed.
To all intents and purposes it looks OK to me. Someone else may be able to shed more light on the issue if they have experienced this error to this degree.
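Since the 605 error can be triggered by any of three mechanisms (robots.txt, the X-Robots-Tag HTTP header, or a meta robots tag), it can help to check the last two programmatically rather than by eye. Here is a minimal sketch using only Python's standard library; the function name and the sample inputs are illustrations, not part of any Moz or Google tooling:

```python
from html.parser import HTMLParser

class MetaRobotsParser(HTMLParser):
    """Collects the content of any <meta name="robots"> tags in a page."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            a = dict(attrs)
            if a.get("name", "").lower() == "robots":
                self.directives.append(a.get("content", "").lower())

def is_blocked(headers, html):
    """True if an X-Robots-Tag header or a meta robots tag says noindex/none."""
    signals = [v.lower() for k, v in headers.items()
               if k.lower() == "x-robots-tag"]
    parser = MetaRobotsParser()
    parser.feed(html)
    signals += parser.directives
    return any("noindex" in s or "none" in s for s in signals)
```

You would feed it the response headers and body of the page in question (e.g. fetched with urllib); if it returns False for both mechanisms and the robots.txt is permissive, the 605 report is likely stale.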
-
www.tajsigma.com is the domain getting the 605 robots error code.
-
Just to check: is that the live one, or just a test in GSC? Can you send a link to your site, maybe in a PM?
-
Yes, it is also OK there; see the attached screenshot.
-
Have you tried testing your robots.txt file in Google Search Console?
If you are allowing everything, I would suggest simply removing your robots.txt altogether, so crawlers default to fetching everything.
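That default is easy to verify: Python's standard-library robots.txt parser treats an empty Disallow, and equally an empty (or absent) file, as "nothing is blocked". A small sketch, with a placeholder URL:

```python
import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()

# An allow-all robots.txt: an empty Disallow value means no restrictions.
rp.parse(["User-agent: *", "Disallow:"])
print(rp.can_fetch("Googlebot", "http://www.example.com/any-page"))  # True

# No robots.txt rules at all behave the same way: nothing is blocked.
rp.parse([])
print(rp.can_fetch("Googlebot", "http://www.example.com/any-page"))  # True
```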
-
Thanks, Tim Holmes, for your quick reply.
But my robots.txt file is:

User-agent: *
Allow: /

I have also added the meta tag to all pages. Even then, the pages are not getting fetched or crawled by GWT.
Thanks
-
Hello Falguni,
I believe the error is telling you pretty much everything you need to know: your robots.txt file or a robots meta tag appears to be blocking your site from being crawled.
Have you checked the robots.txt file in your root? Type http://www.yourdomain.com/robots.txt into a browser.
To ensure your site is being crawled and robots have complete access, the following should be in place (an empty Disallow value allows everything):

User-agent: *
Disallow:

As opposed to the following, which excludes all robots from the entire server:

User-agent: *
Disallow: /

If it is a meta tag causing the issue, you will need to remove or correct it so the page defaults to being indexed and followed, as opposed to combinations such as noindex or nofollow, which can result in some areas not being crawled or indexed.
_Hope that helps
Tim_
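The difference between those two files is easy to demonstrate with Python's standard-library robots.txt parser. This is only an illustrative sketch with a placeholder URL, not anything specific to the site in question:

```python
import urllib.robotparser

def allowed(robots_txt, url, agent="*"):
    """Parse a robots.txt string and report whether `agent` may fetch `url`."""
    rp = urllib.robotparser.RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return rp.can_fetch(agent, url)

open_rules = "User-agent: *\nDisallow:"      # empty Disallow: nothing blocked
closed_rules = "User-agent: *\nDisallow: /"  # Disallow: / blocks the whole server

print(allowed(open_rules, "http://www.example.com/page"))    # True
print(allowed(closed_rules, "http://www.example.com/page"))  # False
```

One character separates a fully open site from a fully blocked one, which is why it is worth checking the live file character by character.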