Robots.txt was set to disallow for 14 days
-
We updated our website and accidentally overwrote our robots.txt file with a version that prevented crawling ("Disallow: /"). We realized the issue 14 days later, after our organic visits began to drop significantly, and quickly replaced the robots.txt file with the correct version so crawling could resume. Given the impact on our organic visits, we have a few questions, and any help would be greatly appreciated:
Will the site get back to its original status/ranking?
If so, how long would that take?
Is there anything we can do to speed up the process?
Thanks
-
Thank you for the response.
We have been watching over the past week, and there has been only a very small change in the number of indexed URLs in GSC and no change in the stats on the Moz dashboard.
Is that normal? How often does Moz update its stats?
-
This is commonly done intentionally when launching a site on a new domain. Once the disallow is removed, the general practice is to request reindexing of the root domain page (and possibly some key pages with paths not likely to be found through navigation) in GSC, and to submit (or re-submit) your sitemaps directly in GSC (even though they may/should also be referenced in your robots.txt file).
I'm not sure how long you can expect the search engines to take, since your situation is a bit unusual: the site was already indexed and then disallowed temporarily. Judging from launches of brand-new domains, getting re-indexed should be quick (perhaps a few days), but regaining previous ranking positions might be slower (I'm unsure of the timing on that).
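For reference, a corrected robots.txt that allows crawling and points crawlers at the sitemap could look something like the sketch below (the sitemap URL is a placeholder, not the actual file):

```
# The accidental version contained only "Disallow: /", which blocks everything.
# An empty Disallow value permits crawling of the whole site.
User-agent: *
Disallow:

# Listing the sitemap here helps crawlers rediscover pages quickly
# (placeholder URL -- substitute the real sitemap location)
Sitemap: https://www.example.com/sitemap.xml
```

Re-submitting that same sitemap in GSC, as mentioned above, gives Google an explicit prompt rather than waiting for the file to be re-fetched on its own schedule.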
Related Questions
-
Can Schema handle two sets of business hours?
I have a client who, due to COVID, will have two sets of business hours: morning hours for business customers and afternoon hours for general customers. Is it possible to designate this distinction in schema?
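Schema.org has no dedicated property for customer type, but the two windows can at least be expressed as separate OpeningHoursSpecification entries. A rough, hypothetical sketch (the business name, days, and times are made up, and the "name" labels are purely descriptive):

```
{
  "@context": "https://schema.org",
  "@type": "LocalBusiness",
  "name": "Example Business",
  "openingHoursSpecification": [
    {
      "@type": "OpeningHoursSpecification",
      "name": "Business customers",
      "dayOfWeek": ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"],
      "opens": "08:00",
      "closes": "12:00"
    },
    {
      "@type": "OpeningHoursSpecification",
      "name": "General customers",
      "dayOfWeek": ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"],
      "opens": "12:00",
      "closes": "17:00"
    }
  ]
}
```

Search engines may not surface those labels, so the customer-type distinction should still be spelled out in the visible page content.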
Intermediate & Advanced SEO | bherman0
-
Would spiders successfully crawl a page with two distinct sets of content?
Hello all, and thank you in advance for the help. I have a coffee company that sells both retail and wholesale products. These are typically the same products, just at different prices. We are planning on having a pop-up for users to help them self-identify upon their first visit, asking if they are retail or wholesale clients. So if someone clicks retail, the cookie will show them retail pricing throughout the site, and vice versa for those who identify themselves as wholesale. I can talk to our programmer to find out how he actually plans on doing this from a technical standpoint if it would be of assistance. My question is, how will a spider crawl this site? I am assuming (probably incorrectly) that whatever the "default" selection is (for example, right now people see retail pricing and then opt into wholesale) will be the information/pricing that gets indexed. So, long story short, how would a spider crawl a page that has two distinct sets of pricing information displayed based on user self-identification? Thanks again!
Intermediate & Advanced SEO | ClayPotCreative0
-
Best practices for robots.txt -- allow one page but not the others?
So, we have a page like domain.com/searchhere, but its results are being crawled (and shouldn't be); the results look like domain.com/searchhere?query1. If I block /searchhere?, will that also block crawlers from the single page /searchhere (because I still want that page to be indexed)? What is the recommended best practice for this?
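The usual pattern looks something like this (a sketch using the example path above):

```
User-agent: *
# Blocks parameterized results such as /searchhere?query1
Disallow: /searchhere?

# The bare page /searchhere contains no "?" and does not start with
# "/searchhere?", so it is not matched by the rule and stays crawlable.
```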
Intermediate & Advanced SEO | nicole.healthline0
-
I need help with setting the preferred domain: www or not?
Hi! I'm kind of new to the SEO game and struggling with this site I'm working on: http://www.moondoggieinc.com. I set the preferred domain to www in GWT, but I'm not seeing it reroute to that. I can't seem to get any of my internal pages to rank, and I was thinking it's possibly because of a duplicate content issue caused by this problem. Any help or guidance on the right way to set the preferred domain for this site, and on why I can't get my internal pages to rank? THANKS! KristyO
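The GWT preferred-domain setting only tells Google which version you prefer; it does not redirect visitors. Pairing it with a site-wide 301 to the www version usually takes care of the duplicate-content side. A sketch, assuming an Apache server with mod_rewrite enabled (not a verified drop-in for this site):

```
RewriteEngine On
# Send any request for the bare domain to the www version with a permanent (301) redirect
RewriteCond %{HTTP_HOST} ^moondoggieinc\.com$ [NC]
RewriteRule ^(.*)$ http://www.moondoggieinc.com/$1 [R=301,L]
```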
Intermediate & Advanced SEO | KristyO0
-
Setting up a Blog - Guest Authors
I am planning on setting up a blog in the next couple of months. We would like to have 10 different categories that professionals can write about and submit articles to. Any suggestions on which blogging software to put on the site? So far I have heard of WordPress and Joomla (not sure which is best). We want it to maybe have the bloggers' logos or pictures, date, etc. None of us have had any experience with this, so we needed some input. I have been reading everywhere that blogging is huge for SEO, so I just wanted to improve ours and maybe drive more traffic with a high-level blog for professionals. Any ideas or even books that I should go purchase to study up? When we do this, it would be great to get it set up properly from the get-go. 🙂 Boo
Intermediate & Advanced SEO | Boodreaux0
-
Block an entire subdomain with robots.txt?
Is it possible to block an entire subdomain with robots.txt? I write for a blog that has its root domain as well as a subdomain pointing to the exact same IP. Getting rid of the subdomain is not an option, so I'd like to explore other options to avoid duplicate content. Any ideas?
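Yes -- robots.txt is fetched per hostname, so a file returned only for the subdomain can block it without affecting the root domain. A minimal sketch (the subdomain name is a placeholder):

```
# Served only at the subdomain, e.g. blog.example.com/robots.txt
User-agent: *
Disallow: /
```

Since both hostnames resolve to the same site here, the server would need to be configured to return this file only when the request's Host header is the subdomain.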
Intermediate & Advanced SEO | kylesuss12
-
Category Pages - Canonical, Robots.txt, Changing Page Attributes
A site has category pages such as www.domain.com/category.html, www.domain.com/category-page2.html, etc. This is producing duplicate meta descriptions (page titles have page numbers in them, so they are not duplicates). Below are the options we've been thinking about:
a. Keep meta descriptions the same except for adding a page number (this would keep internal juice flowing to products that are listed on subsequent pages). All pages have unique product listings.
b. Use canonical tags on subsequent pages and point them back to the main category page (an example tag is sketched below).
c. Robots.txt on subsequent pages.
d. ?
Options b and c will orphan or french fry some of our product pages. Any help on this would be much appreciated. Thank you.
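For option b, the tag on each paginated page would look something along these lines (illustrative URLs taken from the question):

```
<!-- Placed in the <head> of www.domain.com/category-page2.html,
     pointing back to the main category page -->
<link rel="canonical" href="http://www.domain.com/category.html" />
```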
Intermediate & Advanced SEO | Troyville0
-
Block all search results (dynamic) in robots.txt?
I know that Google does not want to index "search result" pages for a lot of reasons (dup content, dynamic URLs, blah blah). I recently optimized the entire IA of my sites to have search-friendly URLs, which includes search result pages. So, my search result pages changed from /search?12345&productblue=true&id789 to /product/search/blue_widgets/womens/large. As a result, Google started indexing these pages thinking they were static (no opposition from me :)), but I started getting WMT messages saying they are finding a "high number of URLs being indexed" on these sites. Should I just block them altogether, or let it work itself out?
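If you do decide to block them, a rule scoped to the rewritten search path should be enough (a sketch based on the example URL above):

```
User-agent: *
# Block everything under the rewritten search-results path
Disallow: /product/search/
```

Note that robots.txt blocks crawling, not indexing, so pages already indexed won't necessarily drop out immediately.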
Intermediate & Advanced SEO | rhutchings0