Robots.txt error
-
I currently have this in my robots.txt file:
User-agent: *
Disallow: /authenticated/
Disallow: /css/
Disallow: /images/
Disallow: /js/
Disallow: /PayPal/
Disallow: /Reporting/
Disallow: /RegistrationComplete.aspx
WebMatrix 2.0
In Webmaster Tools, under Health Check > Blocked URLs,
I copy and paste the code above and click Test. Everything looks OK, but when I log out and log back in, I see the code below under Blocked URLs:
User-agent: *
Disallow: /
WebMatrix 2.0
Currently, Google doesn't index my domain and I don't understand why this is happening. Any ideas?
Thanks
Seda
-
Thanks Irving, it worked
-
Try spidering your site with this link checker tool.
Bots cannot accept cookies, and your site requires cookies to be enabled in order to be visited. The most likely issue is that Google cannot access the site because you are not allowing the visit unless the cookie is dropped.
Disable cookies in your browser, clear your cache, and see what happens when you try to visit your site. Are you blocked?
These discussions may help:
http://www.highrankings.com/forum/index.php/topic/3062-cookie-and-javascript/
http://stackoverflow.com/questions/5668681/seo-question-google-not-getting-past-cookies
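If you want to see roughly what a cookie-refusing crawler gets back, beyond disabling cookies in your browser, here is a minimal Python sketch. It is only an illustration: the URL is the site discussed in this thread, the user-agent string is a generic Googlebot-style value, and a real crawl involves much more than a single request.

import urllib.request
from urllib.error import HTTPError

# urllib keeps no cookie jar unless you add one, so this request behaves
# roughly like a visitor that never accepts or returns cookies.
url = "http://www.positivecollections.co.uk/"
req = urllib.request.Request(
    url,
    headers={"User-Agent": "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"},
)

try:
    with urllib.request.urlopen(req, timeout=10) as resp:
        print("Status:", resp.status)
        print(resp.read(1500).decode("utf-8", errors="replace"))
except HTTPError as err:
    # An error status here (or a redirect loop) suggests the site refuses
    # cookie-less visitors, which is what would block Googlebot as well.
    print("Blocked or errored:", err.code)

If the body you get back is a cookie notice or an error page instead of the real content, that matches the problem described above.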
-
Thanks Irving, I need a little more help; I'm not quite sure I understand. What is it that needs to be fixed here?
-
I couldn't rely on the SERPs, as the website is old and has been indexed for quite a while, so I didn't think the SERP results would change that quickly. I've been receiving the error since yesterday.
It's in the SERPs today, but will it be there tomorrow? I say that because when I change a page title, it doesn't change in the SERPs instantly; it takes a day or so before I see the change.
-
TECHNICAL ISSUE
It's your cookie policy that is blocking bots from spidering. You need to fix that at the server level. Easy fix!
http://www.positivecollections.co.uk/cookies-policy.aspx
Your robots.txt is fine.
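The site in this thread appears to run on ASP.NET, so the following is not a drop-in fix, just a sketch in Python/Flask of the server-level behaviour you want: always serve the page content, and let a missing cookie only control whether a consent banner is shown, rather than redirecting cookie-less visitors (and therefore crawlers) away from the content. The cookie name and banner markup are made up for the example.

from flask import Flask, request, render_template_string

app = Flask(__name__)

PAGE = """
<html><body>
  {% if show_banner %}<div class="cookie-banner">This site uses cookies...</div>{% endif %}
  <h1>Actual page content, readable by crawlers and cookie-less visitors</h1>
</body></html>
"""

@app.route("/")
def home():
    # "cookies_ok" is a hypothetical consent cookie. Its absence only toggles
    # the banner; it never blocks or redirects the request.
    show_banner = "cookies_ok" not in request.cookies
    return render_template_string(PAGE, show_banner=show_banner)

However you implement it in your own stack, the test is the same: a request that sends no cookies should still receive the real page.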
-
Okay. But that doesn't mean it isn't being indexed. Here's a fun test: Go to any page on your website and select a string of two or three sentences. Google it. Does the page come up in the SERPs?
(I did this with 3 pages on your site and it worked for all of them, so your site is being indexed.) Why do you need to Fetch?
-
When I click on Fetch as Google, I get a 'Denied by robots.txt' error.
-
That site is also being indexed. Again I ask: what makes you think it is not being indexed? (Because it is.)
-
When I click on Fetch as Google, I get a 'Denied by robots.txt' error.
@Jesse: That's the main website; we've got other URLs. The error appears on positivecollections.co.uk.
-
Thanks Irving,
www.positivecollections.co.uk is the URL.
I've tried removing everything from the robots file and checking again in Webmaster Tools, and the same thing happened. It's just blocking the main link.
-
Are you sure your site isn't being indexed?
Because I went to your profile, and if http://www.mtasolicitors.com/ is your site, then it is definitely being indexed. What makes you think it isn't?
-
Are you sure there is nothing else in your robots.txt? You can share the URL if you like.
You can delete this line; it's doing nothing, and you don't need to attempt to block bad bots:
WebMatrix 2.0
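For reference, if you do remove that stray line, the file from the original question would simply read:

User-agent: *
Disallow: /authenticated/
Disallow: /css/
Disallow: /images/
Disallow: /js/
Disallow: /PayPal/
Disallow: /Reporting/
Disallow: /RegistrationComplete.aspx

None of those directives blocks the homepage, which matches the advice above that the robots.txt itself is fine.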