Search engine blocked by robots: crawl error reported by Moz & GWT
-
Hello everyone,
For my site I am getting Error Code 605: Page Banned by robots.txt, X-Robots-Tag HTTP Header, or Meta Robots Tag. Google Webmaster Tools is also not able to fetch my site. My site is tajsigma.com.
Can any expert help, please?
Thanks
-
When was your last crawl date in Google Webmaster Tools/Search Console? It may be that your site was crawled while there was some kind of problem with the robots.txt and hasn't been re-crawled since.
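In the meantime, you can rule the file itself out locally: Python's standard-library robots.txt parser will tell you whether a given file would block Googlebot. A minimal sketch, assuming the allow-all rules the site owner quotes later in this thread:

```python
from urllib import robotparser

# The robots.txt content the site owner reports (allow everything).
ROBOTS_TXT = """\
User-agent: *
Allow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# If this prints True, the robots.txt itself is not what blocks Googlebot,
# and the 605 error must come from the meta tag or the X-Robots-Tag header.
print(rp.can_fetch("Googlebot", "http://www.tajsigma.com/"))  # True
```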
-
Yes, exactly.
That is what I am worried about too. Can you please help identify the problem with my site?
Thanks
-
That's very strange. The robots.txt looks fine, but here's what I see when I search for your site on Google.
-
The headers look fine, and as you correctly said, your robots.txt and meta robots are also OK.
I have also noted that a site:www.tajsigma.com search in Google is returning pages for your site, so again that shows it is being crawled and indexed.
To all intents and purposes it looks OK to me. Someone else may be able to shed more light on the issue if they have experienced this error to this degree.
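For anyone checking the header side of a 605 error: a response is blocked at that level only if an X-Robots-Tag header carries a noindex (or none) directive. A small sketch of such a check; `blocked_by_x_robots` is a hypothetical helper name and the header dicts are illustrative:

```python
def blocked_by_x_robots(headers):
    """Return True if an X-Robots-Tag header carries a noindex/none directive.

    `headers` is a plain dict of response header names to values, e.g. the
    headers of an HTTP response to the page being diagnosed.
    """
    value = ""
    for name, raw in headers.items():
        if name.lower() == "x-robots-tag":
            value = raw.lower()
    directives = {d.strip() for d in value.split(",")}
    return bool(directives & {"noindex", "none"})

# A header set like the one described above ("headers look fine"):
print(blocked_by_x_robots({"Content-Type": "text/html"}))          # False
# A header set that WOULD trigger the 605 error:
print(blocked_by_x_robots({"X-Robots-Tag": "noindex, nofollow"}))  # True
```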
-
www.tajsigma.com is the domain in question for the robots 605 error code.
-
Just to check: is that the live one, or just a test in GSC? Can you send a link to your site, maybe in a PM?
-
Yes, it is also okay there; see the attached screenshot.
-
Have you tried testing your robots.txt file in Google Search Console?
If you are allowing all, I would maybe suggest simply removing your robots.txt altogether so it defaults to just crawling everything.
-
Thanks, Tim Holmes, for your quick reply.
But my robots.txt file is:

User-agent: *
Allow: /

I have also added the meta tag on all pages, yet the page is still not getting fetched or crawled by GWT.
Thanks
-
Hello Falguni,
I believe the error is telling you pretty much everything you need to know: your robots.txt file or robots meta tag appears to be blocking your site from being crawled.
Have you checked the robots.txt file in your root? Type in http://www.yourdomain.com/robots.txt
To ensure your site is being crawled and robots have complete access, the following should be in place:

User-agent: *
Disallow:

By contrast, the following excludes all robots from the entire server:

User-agent: *
Disallow: /

If it is a meta tag causing the issue, you will need to correct it or have it removed so crawling defaults to the open configuration above, as opposed to combinations such as noindex or nofollow, which could result in some areas not being indexed or crawled.
Hope that helps
Tim
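The difference between the two robots.txt variants above (an empty `Disallow:` value versus `Disallow: /`) is easy to demonstrate with Python's standard-library parser; `googlebot_allowed` is a helper name made up for illustration:

```python
from urllib import robotparser

def googlebot_allowed(robots_lines):
    """Parse robots.txt lines and report whether Googlebot may crawl a page."""
    rp = robotparser.RobotFileParser()
    rp.parse(robots_lines)
    return rp.can_fetch("Googlebot", "http://www.example.com/any-page")

# Empty Disallow value: nothing is excluded, everything may be crawled.
print(googlebot_allowed(["User-agent: *", "Disallow:"]))    # True
# Disallow: / excludes every path on the server.
print(googlebot_allowed(["User-agent: *", "Disallow: /"]))  # False
```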
Related Questions
-
Question on AMP
I'd like to utilize AMP for faster loading for one of my clients. However, it is essential that this client have chat. My developer is having trouble incorporating chat with AMP, and he claims that it isn't possible to integrate the two. Can anyone advise me as to whether this is accurate? If it is true that AMP and chat aren't compatible, are there any solutions to this issue? I'd appreciate any leads on this. Thanks!
Intermediate & Advanced SEO | Joseph-Green-SEO
-
What should I do after a failed request for validation (error with noindex, nofollow) in new Google Search Console?
Hi guys, We have the following situation: after an error message in the new Google Search Console for a large number of pages with a noindex, nofollow tag, a validation was requested before the problem was fixed (an incredibly stupid decision taken before asking the SEO team for advice). Google started the validation, crawled 9 URLs, and changed the status to "Failed". All other URLs are still in "pending" status. The problem has been fixed for more than 10 days, but apparently Google doesn't crawl the pages and none of the URLs is back in the index. We tried pinging several pages and HTML sitemaps, but there is no result. Do you think we should request re-validation or wait more time? Is there something more we could do to speed up the process?
Intermediate & Advanced SEO | ParisChildress
-
Best Strategy for FAQ & Canonical?
I have an FAQ database set up on my site with about 30 questions in 6 categories, so 5 questions per category, which is a pretty good page size for one category. I'm trying to determine the best strategy for publishing them from both a user and an SEO standpoint.

From a user standpoint, I want to have one page per category. Dumping them all into a page with 30 questions is not user-friendly, and some categories are very unrelated to others. I should note that Google did already index a page that has all the questions on it, but I was planning on changing that page to just have 6 links to the category pages, so I don't have to bother with 301 redirects or removing pages in the site's Search Console. There's also an option to link each question, from the full FAQ or from the category list, to a page with just that question and answer.

So my thinking at this point is, as I said, to change the page that has all 30 questions into a list of the categories, link to category pages holding the questions for each category, and disable the individual question pages. Or would it be beneficial from an SEO standpoint to have Google index the individual question pages, link them back to the category page, and put a canonical tag on the category pages? In other words, the question becomes: index the category pages or the individual question pages?

The other issue is that the answers to some of the questions are lengthy, multiple paragraphs, and the FAQ has a hide/unhide feature on the answers so you can easily see all the questions first and then expand the ones you are interested in. However, I thought I heard Google discounts (doesn't ignore) content that is hidden by default on page load. I guess this would be a reason for indexing the individual question pages. But it seems to me you can't put a canonical tag on the category pages pointing to an individual question page.
And if you put the canonical tag on the individual question page linking it to the category page, then the individual page won't necessarily get indexed, will it?
Intermediate & Advanced SEO | MrSem
-
Crawled page count in Search console
Hi guys, I'm working on a project (premium-hookahs.nl) where I've stumbled upon a situation I can't address. Attached is a screenshot of the crawled pages in Search Console.

History: due to technical difficulties this webshop didn't always noindex filter pages, resulting in thousands of duplicated pages. In reality this webshop has fewer than 1000 individual pages. At this point we took the following steps to resolve this: noindex the filter pages; exclude those filter pages in Search Console and robots.txt; canonical the filter pages to the relevant category pages. This however didn't result in Google crawling fewer pages. Although the implementation wasn't always sound (technical problems during updates), I'm sure this setup has been the same for the last two weeks. Personally I expected a drop in crawled pages, but they are still sky high. I can't imagine Google visits this site 40 times a day.

To complicate the situation: we're running an experiment to gain positions on around 250 long-term searches. A few filters will be indexed (size, color, number of hoses, and flavors) and three of them can be combined. This results in around 250 extra pages. Meta titles, descriptions, h1s and texts are unique as well.

Questions: Excluding in robots.txt should result in Google not crawling those pages, right? Is this number of crawled pages normal for a website with around 1000 unique pages? What am I missing?
Intermediate & Advanced SEO | Bob_van_Biezen
-
Puzzling drop in search referrals.
Hello all, A few weeks ago I posted in the Q&A believing I had received a Google penalty due to a sudden and considerable drop in referrals: http://www.seomoz.org/q/google-penalty-8. So I dug right to the bottom of my site and did a complete review of all my links. After clearing all the potentially problematic links, I wrote a very descriptive reconsideration request to Google. 10 days or so later I received a 'no manual spam action found' response, which I guess is a good thing but now begs the question: what has gone wrong?

Over the past 4 or 5 months I've been doing some heavy work on the SEO of my site, www.madegood.org. This has all been white-hat stuff (as far as I'm aware), and I have been using my SEOmoz Pro account to monitor progress. I've done lots of reading on the subject, so I think I'm up to scratch on most general good advice in terms of link building and site structure. I've always tried to create very high-quality content and have been building some great links from very authoritative sites, including The Guardian newspaper and Sheffield University. My Pro dashboard is telling me that my link analysis history is improving, despite my keyword performance declining.

Is there anyone who can do a deep review of my site? I'm happy to share my Analytics/Webmaster Tools info with you if that is helpful. I'm totally lost here and am becoming disheartened with all the hard work I've been putting into the SEO of my site. Thanks so much in advance; any help is gratefully received. Of course I can provide more info should you need it. Will
Intermediate & Advanced SEO | madegood
-
Is our robots.txt file correct?
Could you please review our robots.txt file and let me know if it is correct: www.faithology.com/robots.txt. Thank you!
Intermediate & Advanced SEO | BMPIRE
-
Will blocking urls in robots.txt void out any backlink benefits? - I'll explain...
Ok... So I add tracking parameters to some of my social media campaigns but block those parameters via robots.txt. This helps avoid duplicate content issues (Yes, I do also have correct canonical tags added)... but my question is -- Does this cause me to miss out on any backlink magic coming my way from these articles, posts or links? Example url: www.mysite.com/subject/?tracking-info-goes-here-1234 Canonical tag is: www.mysite.com/subject/ I'm blocking anything with "?tracking-info-goes-here" via robots.txt The url with the tracking info of course IS NOT indexed in Google but IT IS indexed without the tracking parameters. What are your thoughts? Should I nix the robots.txt stuff since I already have the canonical tag in place? Do you think I'm getting the backlink "juice" from all the links with the tracking parameter? What would you do? Why? Are you sure? 🙂
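As a side note, the prefix-style blocking described here can be sketched with Python's standard-library parser. Note that Python's parser does plain prefix matching rather than Google's `*` wildcard syntax, so the rule below is written as a literal prefix; the paths are the hypothetical ones from the question:

```python
from urllib import robotparser

# Block any URL under /subject/ whose query string starts with the
# tracking parameter, while leaving the clean canonical URL crawlable.
rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /subject/?tracking-info",
])

# The tracked variant is blocked...
print(rp.can_fetch("*", "http://www.mysite.com/subject/?tracking-info-goes-here-1234"))  # False
# ...while the clean canonical URL remains crawlable.
print(rp.can_fetch("*", "http://www.mysite.com/subject/"))  # True
```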
Intermediate & Advanced SEO | AubieJon
-
High search volume keywords
The problem is that our index page is no longer in the SERPs for high-volume keywords (Pfizer, Roche, Johnson & Johnson). We still keep these keywords in the title, but it brings few results. We made the page www.domain.com/pfizer and added Pfizer products there with unique descriptions. The product pages started to drive visitors, but not the www.domain.com/pfizer page itself. If we add a blog to the top of this page with unique posts about Pfizer company news, would it help? In that case the page would be unique, refreshed with new info, and have rotating Pfizer products. Maybe some other suggestions?
Intermediate & Advanced SEO | bele