Robots.txt error
-
I currently have this in my robots.txt file:
User-agent: *
Disallow: /authenticated/
Disallow: /css/
Disallow: /images/
Disallow: /js/
Disallow: /PayPal/
Disallow: /Reporting/
Disallow: /RegistrationComplete.aspx
WebMatrix 2.0
In Webmaster Tools > Health Check > Blocked URLs, I copy and paste the above code and click Test. Everything looks OK, but when I log out and log back in, I see the code below under Blocked URLs:
User-agent: *
Disallow: /
WebMatrix 2.0
Currently, Google doesn't index my domain and I don't understand why this is happening. Any ideas?
Thanks
Seda
-
Thanks Irving, it worked
-
Try to spider your site with this link checker tool.
Bots cannot accept cookies, and your site requires cookies to be enabled in order to be visited. Google cannot access the site because you are not allowing the visit without the cookie being set; that is most likely the issue.
Disable cookies in your browser, clear your cache, and see what happens when you try to visit your site. Are you blocked?
These discussions may help:
http://www.highrankings.com/forum/index.php/topic/3062-cookie-and-javascript/
http://stackoverflow.com/questions/5668681/seo-question-google-not-getting-past-cookies
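If you want to see roughly what a cookie-less crawler gets from your server without changing your browser settings, a minimal sketch along these lines may help. It assumes Python with the requests library installed; the user-agent string is Googlebot's published one, and the URL is just a placeholder for your own site:

```python
# Minimal sketch: fetch a page the way a cookie-less crawler would.
# Assumes the third-party `requests` library is installed (pip install requests).
import requests

URL = "http://www.example.com/"  # replace with your own site

resp = requests.get(
    URL,
    headers={
        # Googlebot's published user-agent string
        "User-Agent": "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
    },
    allow_redirects=False,  # don't follow redirects, so we see the raw server response
    timeout=10,
)

print(resp.status_code)                # 200 means the page is served outright
print(resp.headers.get("Location"))    # a redirect to a cookie-check page is a red flag
print(resp.headers.get("Set-Cookie"))  # shows whether a cookie is being forced on visitors
```

If the response is a redirect to a cookie gate (or an error) instead of a 200 with your page content, that matches the problem described above.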
-
Thanks Irving. I need a little more help; I'm not quite sure I understand it. What is it that needs to be fixed here?
-
I couldn't rely on the SERPs, as the website is old and has been indexed for quite a while, so I didn't think the SERP results would change that quickly. I've been receiving the error since yesterday.
It's in the SERPs today, but will it be there tomorrow? The reason I say that is that when I change a page title, it doesn't change in the SERPs instantly; it takes a day or so before I see the changes.
-
TECHNICAL ISSUE
It's your cookie policy that is blocking bots from spidering. You need to fix that at the server level. Easy fix!
http://www.positivecollections.co.uk/cookies-policy.aspx
Your robots.txt is fine.
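For illustration only: the usual server-level fix is to stop enforcing the cookie requirement for known crawlers. The site in this thread runs ASP.NET, so the sketch below just expresses the logic in Python/Flask, with a hypothetical route, cookie name, and bot list; a real fix would live in the site's own stack. Also note that user-agent matching is a simplification, since user agents can be spoofed:

```python
# Minimal sketch of the idea: let crawlers through without the cookie gate.
# Route name, cookie name, and bot list are all hypothetical examples.
from flask import Flask, redirect, request

app = Flask(__name__)

KNOWN_BOTS = ("googlebot", "bingbot", "slurp")  # lowercase substrings to match

def is_crawler() -> bool:
    ua = request.headers.get("User-Agent", "").lower()
    return any(bot in ua for bot in KNOWN_BOTS)

@app.route("/")
def home():
    # Ordinary visitors without the cookie get sent to the cookie check;
    # crawlers are exempt, so they can spider the page content.
    if not is_crawler() and "session_id" not in request.cookies:
        return redirect("/cookie-check")
    return "Page content"
```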
-
Okay. But that doesn't mean it isn't being indexed. Here's a fun test: go to any page on your website, select a string of two or three sentences, and Google it. Does the page come up in the SERPs?
(I did this with 3 pages on your site and it worked for all of them. Therefore, your site is being indexed.) Why do you need to Fetch?
-
When I click on Fetch As Google, I get a 'Denied by robots.txt' error.
-
That site is also being indexed. Again I ask: what makes you think it is not being indexed? (Because it is.)
-
When I click on Fetch As Google, I get a 'Denied by robots.txt' error.
@Jesse: That's the main website; we've got other URLs. The error appears on positivecollections.co.uk.
-
Thanks Irving,
www.positivecollections.co.uk is the URL.
I've tried removing everything from the robots file and checking again in Webmaster Tools; the same thing happened. It's still blocking just the main link.
-
Are you sure your site isn't being indexed?
Because I went to your profile, and if http://www.mtasolicitors.com/ is your site, then it is definitely being indexed. What makes you think it isn't?
-
Are you sure there is nothing else in your robots.txt? You can share the URL if you like.
You can delete this line; it's doing nothing, and you don't need to attempt to block bad bots:
WebMatrix 2.0
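If it helps to verify locally what a given robots.txt actually blocks, here is a minimal sketch using Python's built-in urllib.robotparser; the site and paths are just examples taken from this thread:

```python
# Minimal sketch: ask the standard-library parser which URLs Googlebot may fetch.
from urllib.robotparser import RobotFileParser

SITE = "http://www.positivecollections.co.uk"  # site discussed in this thread

rp = RobotFileParser()
rp.set_url(SITE + "/robots.txt")
rp.read()  # fetches and parses the live file

for path in ["/", "/authenticated/", "/css/", "/RegistrationComplete.aspx"]:
    print(path, "->", "allowed" if rp.can_fetch("Googlebot", SITE + path) else "blocked")
```

If the parser reports "/" as blocked, then something other than the rules you pasted is being served to bots (a stray Disallow: /, a different file, or stale caching in Webmaster Tools).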