Have a Robots.txt Issue
-
I have a robots.txt file error that is causing me loads of headaches and is making my website fall off the search engine grid. Moz and other tools are reporting that I've blocked all search engines from finding the site. Could it be as simple as this: I created a new website and forgot to re-create a robots.txt file for the new site, so crawlers were still finding the old one? I have just created a new one.
Google Search Console still shows severe health issues found in the property, and that robots.txt is blocking important pages. Does this take time to refresh? Is there something I'm missing that someone here in the Moz community could help me with?
-
Hi primemediaconsultants!
Did this get cleared up?
-
You don't always have to do that. If you go to domain.com/robots.txt, the old file may already have been removed. If that's the case, you should start to see an increase in the number of pages crawled in Google Search Console.
-
This seems very helpful, as I did remove it and fetch as Google, but I'm a complete novice. How do you clear the server cache?
-
What does your robots.txt file contain? (Or share the link.)
Try removing it, clearing the server cache, and fetching as Google again.
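If the file turns out to contain a blanket Disallow, that alone explains the Search Console warning. As a quick sketch (the rules below are illustrative, not your actual file), Python's standard-library robotparser shows the difference between a robots.txt that blocks everything and one that blocks nothing:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt that blocks every crawler from the whole site --
# the classic cause of "robots.txt is blocking important pages" warnings.
blocking_rules = """
User-agent: *
Disallow: /
""".strip().splitlines()

# An empty Disallow (or no rules at all) permits crawling.
open_rules = """
User-agent: *
Disallow:
""".strip().splitlines()

def allowed(rules, url, agent="Googlebot"):
    parser = RobotFileParser()
    parser.parse(rules)
    return parser.can_fetch(agent, url)

print(allowed(blocking_rules, "https://example.com/page"))  # False
print(allowed(open_rules, "https://example.com/page"))      # True
```

Changes to a live robots.txt can take days to be re-fetched and reflected in Search Console, which may be all the "refresh" delay you're seeing.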
Related Questions
-
Disallowed "Search" results with robots.txt and Sessions dropped
Hi
Intermediate & Advanced SEO | Frankie-BTDublin
I've started working on our website and I've found millions of "Search" URLs which I don't think should be getting crawled and indexed (e.g. .../search/?q=brown&prefn1=brand&prefv1=C.P. COMPANY|AERIN|NIKE|Vintage Playing Cards|BIALETTI|EMMA PAKE|QUILTS OF DENMARK|JOHN ATKINSON|STANCE|ISABEL MARANT ÉTOILE|AMIRI|CLOON KEEN|SAMSONITE|MCQ|DANSE LENTE|GAYNOR|EZCARAY|ARGOSY|BIANCA|CRAFTHOUSE|ETON). I tried to disallow them in the robots.txt file, but our Sessions dropped about 10% and our Average Position in Search Console dropped 4-5 positions over 1 week. It looks like over 50 million URLs were blocked; all of them look like the example above and aren't getting any traffic to the site. I've allowed them again, and we're starting to recover. We've been fixing problems with getting the site crawled properly (sitemaps weren't added correctly, products blocked from spiders on category pages, canonical pages blocked from crawlers in robots.txt), and I'm thinking Google was doing us a favour and using these pages to crawl the product pages, as it was the best/only way of accessing them. Should I be blocking these "Search" URLs, or is there a better way of going about it? I can't see any value from these pages except Google using them to crawl the site.
-
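A side note on the question above: if blocking those faceted search URLs does eventually make sense, the usual shape is a narrowly scoped rule like the fragment below. This is only illustrative (example.com and the Sitemap path are placeholders, not this site's real files), and blocking should wait until the product pages are reachable through the sitemap and internal links rather than through the search results:

```
User-agent: *
# Block faceted search results only; everything else stays crawlable.
Disallow: /search/

# Keep products discoverable without the search pages.
Sitemap: https://www.example.com/sitemap.xml
```
-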
Is the robots meta tag more reliable than robots.txt at preventing indexing by Google?
What's your experience of using the robots meta tag vs robots.txt as a stand-alone solution to prevent Google indexing? I am pretty sure the robots meta tag is more reliable. Going on my own experience, I have never had any problems with robots meta tags, but plenty with robots.txt as a stand-alone solution. Thanks in advance, Luke
Intermediate & Advanced SEO | McTaggart
-
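For what it's worth on the question above, the two mechanisms do different jobs: robots.txt blocks crawling, while a robots meta tag blocks indexing. The tag only works if the page is NOT disallowed in robots.txt, since Googlebot must fetch the page to see it. A minimal illustration:

```html
<!-- The page must remain crawlable (not disallowed in robots.txt),
     or Googlebot never fetches the page and never sees this tag. -->
<head>
  <meta name="robots" content="noindex, follow">
</head>
```

For non-HTML resources such as PDFs, the equivalent is the `X-Robots-Tag: noindex` HTTP response header.
-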
DeepCrawl Calls Incomplete Open Graph Tags and Missing Twitter Cards An Issue. How important is this?
Hi, Let me first say that I really like the tool DeepCrawl. So, not busting on them. More like I'm interested in the relative importance of two items they call "Issues": "Incomplete Open Graph Tags" and "No Valid Twitter Cards." They call this out on every page. To define it a bit further, I'm interested in the importance as it relates to organic search. I'm also interested in whether there's some basic functionality we may have missed in our Share42 implementation, though to me it looks like the social sharing buttons work. If it would help, I could private message you an example URL. Thanks! Best... Mike
Intermediate & Advanced SEO | 94501
-
Google update this weekend or page title issue?
Hi, I've seen a big ranking drop for many major terms, for a particular site, just on Google. This happened Fri 20th or Sat 21st just gone. I don't see any news on an algorithm update over the weekend. I had changed many of the site's major page title protocols 2 weeks ago, but a) I would have expected any negative effect before now and not all at once, b) the protocols were carefully crafted to avoid traffic drops for major terms, c) I'm seeing traffic drops for keywords that still start at the beginning of the page title, and d) I'm seeing drops for some pages which are still using the OLD page titles. I had even tested the protocol on a number of pages in advance to ensure it wouldn't cause problems. As a bit of background, the title protocols were changed to make them more user-friendly and less keyword-heavy. CTR from search improved, so I was hoping for better, not worse, rankings! Ideas gratefully appreciated. Andy
Intermediate & Advanced SEO | AndyMacLean
-
Robots.txt, does it need preceding directory structure?
Do you need the entire preceding path in robots.txt for a rule to match? E.g. I know that if I add Disallow: /fish to robots.txt it will block:
/fish
/fish.html
/fish/salmon.html
/fishheads
/fishheads/yummy.html
/fish.php?id=anything
But would it block the following? (taken from the Robots.txt Specifications)
en/fish
en/fish.html
en/fish/salmon.html
en/fishheads
en/fishheads/yummy.html
en/fish.php?id=anything
I'm hoping it actually won't match; that way, writing this particular robots.txt will be much easier! Basically, I want to block many URLs that contain BTS-, such as:
http://www.example.com/BTS-something
http://www.example.com/BTS-somethingelse
http://www.example.com/BTS-thingybob
But I have other pages that I do not want blocked, in subfolders that also contain BTS-, such as:
http://www.example.com/somesubfolder/BTS-thingy
http://www.example.com/anothersubfolder/BTS-otherthingy
Thanks for listening
Intermediate & Advanced SEO | Milian
-
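For what it's worth on the question above: robots.txt rules are matched as prefixes starting from the root of the path, so Disallow: /fish should not match /en/fish. A quick check with Python's standard-library robotparser (a sketch; note that major crawlers like Googlebot additionally support wildcards, e.g. Disallow: /*BTS-, which basic prefix matching does not cover):

```python
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.parse("""
User-agent: *
Disallow: /fish
""".strip().splitlines())

# Prefix matching starts at the root of the path...
print(rp.can_fetch("*", "https://www.example.com/fish/salmon.html"))     # False
print(rp.can_fetch("*", "https://www.example.com/fishheads/yummy.html")) # False
# ...so a different leading segment means no match.
print(rp.can_fetch("*", "https://www.example.com/en/fish/salmon.html"))  # True
```
-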
Multiple URLs exist for the same page, canonicalization issue?
All of the following URLs take me to the same page on my site: 1. www.mysite.com/category1/subcategory.aspx 2. www.mysite.com/subcategory.aspx 3. www.mysite.com/category1/category1/category1/subcategory.aspx All of those pages are canonicalized to #1, so is that okay? I was told the following by a company trying to make our sitemap: "the site's platform dynamically creates URLs that resolve as 200 and should be 404. This is a huge spider trap for any search engine and will make them wary of crawling the site." What would I need to do to fix this? Thanks!
Intermediate & Advanced SEO | pbhatt
-
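On the 200-vs-404 point in the question above: if /category1/category1/category1/subcategory.aspx returns 200, crawlers can generate endless variants. A hypothetical helper (the name and logic are mine, not part of any platform) that flags the repeated-segment pattern so the server could answer such requests with a 404 instead:

```python
from urllib.parse import urlparse

def is_spider_trap(url: str) -> bool:
    """Flag URLs whose path repeats the same segment consecutively --
    the pattern behind /category1/category1/category1/subcategory.aspx."""
    segments = [s for s in urlparse(url).path.split("/") if s]
    return any(a == b for a, b in zip(segments, segments[1:]))

print(is_spider_trap("https://www.mysite.com/category1/category1/category1/subcategory.aspx"))  # True
print(is_spider_trap("https://www.mysite.com/category1/subcategory.aspx"))                      # False
```

Canonical tags pointing at #1 mitigate the duplicate-content side, but they don't stop crawl budget being wasted on the infinite variants; returning 404 (or 301) does.
-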
Diagnosing duplicate content issues
We recently made some updates to our site, one of which involved launching a bunch of new pages. Shortly afterwards we saw a significant drop in organic traffic. Some of the new pages list similar content as previously existed on our site, but in different orders. So our question is, what's the best way to diagnose whether this was the cause of our ranking drop? My current thought is to block the new directories via robots.txt for a couple days and see if traffic improves. Is this a good approach? Any other suggestions?
Intermediate & Advanced SEO | jamesti
-
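Before blocking anything via robots.txt (which can take days to take effect and blocks crawling rather than indexing), it can help to measure how similar the new pages actually are to the old ones. A rough sketch using only the standard library; the strings here are stand-ins for text extracted from the live pages:

```python
from difflib import SequenceMatcher

def similarity(old_text: str, new_text: str) -> float:
    """Word-level similarity ratio between two extracted page texts (0.0-1.0)."""
    return SequenceMatcher(None, old_text.split(), new_text.split()).ratio()

# Same content listed in a different order still scores as heavily duplicated.
old_page = "blue widgets red widgets green widgets for sale in our store"
new_page = "green widgets blue widgets red widgets for sale in our store"
print(round(similarity(old_page, new_page), 2))
```

If the new pages score high against existing ones, a noindex meta tag on the new pages is usually a cleaner diagnostic than a robots.txt block, since blocked pages can stay in the index.
-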
Dynamically generated page issues
Hello All! Our site uses dynamically generated pages. I was about to begin the process of optimising our product category pages www.pitchcare.com/shop. I was going to use internal anchor text from some high-ranking pages within our site, but each of the product category pages already has 1745 links! Am I correct in saying that internal anchor text links work up to a certain point (maybe 10 or so links), so any new internal anchor text links will count for nothing? Thanks Todd
Intermediate & Advanced SEO | toddyC