Robots.txt Help
-
I need help to create robots.txt file.
Please let me know what to add to the file. Any real or working example?
-
Michael, from what I can tell, your website is built using WordPress. We typically recommend installing the Yoast SEO plugin and using that--it will help with your robots.txt file. If you need more information, take a look here: https://yoast.com/wordpress-robots-txt-example/
Generally, most of your site won't need to be disallowed in the robots.txt file unless you're using tags and categories on your site. Yoast typically disallows the proper directories for you.
One thing to be aware of: you don't want to disallow your .css or .js files. Many themes nowadays put those files in your wp-admin folder, which by default typically gets disallowed.
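If the admin area does get disallowed, one common WordPress pattern (a sketch only, not something specific to your site) is to block /wp-admin/ but explicitly allow admin-ajax.php, since front-end features often load assets through it:

```text
# Common WordPress pattern (sketch only): block the admin area but keep
# admin-ajax.php reachable, since themes and plugins call it from the front end.
User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php
```

Google honors the more specific Allow rule here, so requests routed through admin-ajax.php stay crawlable even though the rest of the folder is blocked.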
-
This is the site I used to really get a good understanding of how to create a robots.txt file: http://www.robotstxt.org/
-
A very basic robots.txt file would look something like the below. Note that Disallow takes a path relative to the site root, not a full URL:
User-agent: *
Disallow: /url-you-dont-want-indexed
Disallow: /another-url-you-dont-want-indexed
Sitemap: http://www.yourwebsite.com/sitemap.xml
Hope that helps.
-
In short: include your sitemap, and disallow the pages you don't want indexed: search result pages, login pages, and core admin files.
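To sanity-check rules like these before uploading, Python's standard urllib.robotparser can parse a draft robots.txt locally. The paths below are just the placeholder examples from above:

```python
import urllib.robotparser

# Draft rules mirroring the placeholder example above.
rules = """\
User-agent: *
Disallow: /url-you-dont-want-indexed
Disallow: /another-url-you-dont-want-indexed
Sitemap: http://www.yourwebsite.com/sitemap.xml
"""

parser = urllib.robotparser.RobotFileParser()
parser.parse(rules.splitlines())

# A blocked path should come back disallowed for any crawler...
print(parser.can_fetch("*", "http://www.yourwebsite.com/url-you-dont-want-indexed"))  # False
# ...while everything else stays crawlable.
print(parser.can_fetch("*", "http://www.yourwebsite.com/some-normal-page"))  # True
```

This is a quick way to catch typos in Disallow paths before a bad rule blocks something important in production.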
Related Questions
-
Need help de-indexing URL parameters on my website.
Hi, Need some help.
Intermediate & Advanced SEO | | ImranZafar
So this is my website: https://www.memeraki.com/
If you hover over any of the products, there's a quick view option that opens a popup window for that product.
That popup is triggered by this URL: https://www.memeraki.com/products/never-alone?view=quick
In the URL you can see the parameter "view=quick", which is in fact what triggers the pop-up. The problem is that Google, and even the Moz crawler, picks up this URL as a separate webpage, resulting in crawl issues like missing tags.
I've already used the webmaster tools to block the "view" parameter URLs on my website from indexing, but it's not fixing the issue.
Can someone please provide some insights as to how I can fix this?
-
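One common approach for parameter URLs like the quick-view one described above (a sketch, assuming the product template can be edited) is to serve a canonical tag on the ?view=quick variant pointing back at the clean product URL, so crawlers consolidate the two:

```html
<!-- Sketch: served in the <head> of /products/never-alone?view=quick,
     this tells crawlers the clean product URL is the one to index. -->
<link rel="canonical" href="https://www.memeraki.com/products/never-alone" />
```

Unlike a robots.txt block, a canonical still lets crawlers fetch the parameter URL and fold its signals into the main page.
-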
HELP! How do I get Google to value one page over another (older) page that is ranking?
So I have a tactical question and I need mozzers' help. I'll use widgets as an example:
1) My company used to sell widgets exclusively, and we built thousands of useful, branded, unique pages that sell widgets. We have thousands of pages ranking at widgets.com/brand-widgets-for-sale. (These pages have been live for almost 2 years.)
2) We've shifted our focus to renting widgets. We have about 100 pages focused on renting the same branded widgets. These pages have unique content and photos and can be found at widgets.com/brand-widgets-for-rent. (These pages have been live for about 2-3 months.)
The problem is that when someone searches just for the brand name, the "for sale" pages dramatically outrank the "for rent" pages. Instead, I want them to find the "for rent" page. I don't want to redirect traffic from the "for sale" pages because someone might still be interested in buying (although as a company, we are super focused on renting). Solutions? "nofollow" the "for sale" pages in the hope that Google will stop indexing "for sale" and start valuing "for rent" over it? Remove "for sale" from the sitemap? Help!!
Intermediate & Advanced SEO | | Vacatia_SEO
-
HELP! How do I stop scraper sites - is there any recourse?
Our site has lots of unique content and photos, and it is constantly being scraped and posted on other websites. Most of these are no-name sites that pop up and exist for AdWords revenue. Aside from the fact that we don't want our content being copied, this is an SEO nightmare because they often link back to us from pages that are stuffed with keywords and have very low domain authority (it's a form of negative SEO). My question is: does anyone have experience with fighting this phenomenon? What have you done that is effective? Does anyone have experience with a service such as http://www.dmca.com/ProtectionPro.aspx ? Does it work / is it worth it? Any input is appreciated!
Intermediate & Advanced SEO | | YairSpolter
-
Pages getting into the Google index despite being blocked by robots.txt?
Hi all, so yesterday we set up to remove URLs that got into the Google index that were not supposed to be there, due to faceted navigation. We searched for the URLs by using this in Google Search:
site:www.sekretza.com inurl:price=
site:www.sekretza.com inurl:artists=
This brings up a list of "duplicate" pages, and they have the usual: "A description for this result is not available because of this site's robots.txt – learn more." So we removed them all, and Google removed them all, every single one. This morning I did a check, and I found that more are creeping in. If I take one of the suspected dupes to the robots.txt tester, Google tells me it's blocked, and yet it's appearing in their index. I'm confused as to why a path that is blocked is able to get into the index. I'm thinking of lifting the robots.txt block so that Google can see that these pages also have a meta NOINDEX,FOLLOW tag on them, but surely that will waste my crawl budget on unnecessary pages? Any ideas? Thanks.
Intermediate & Advanced SEO | | bjs2010
-
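For what it's worth, the usual resolution for the situation in the question above is the one the poster suspects: lift the robots.txt block and serve a noindex signal instead, since Google can only obey a noindex on a page it is allowed to crawl. A hedged sketch for Apache (parameter names taken from the site: queries above; mod_headers assumed):

```apache
# Sketch only: serve "noindex, follow" on faceted URLs instead of blocking
# them in robots.txt, so Googlebot can crawl them and see the directive.
<If "%{QUERY_STRING} =~ /(price|artists)=/">
    Header set X-Robots-Tag "noindex, follow"
</If>
```

Once the pages have dropped out of the index, the robots.txt block can be reinstated to conserve crawl budget.
-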
Please help :) Troubles getting 3 types of content de-indexed
Hi there,
Intermediate & Advanced SEO | | Ltsmz
I know that it takes time, and I already submitted a URL removal request 3-4 months ago, but I would really appreciate some kind advice on this topic. Thank you in advance to everyone who contributes!
1) De-indexing archives. Google had indexed all my /tag/ and /authorname/ archives. I set them to noindex a few months ago, but they still appear in search engines. Is there anything I can do to speed up this de-indexing?
2) De-indexing the /plugins/ folder of a WordPress site. Google has also indexed my entire /plugins/ folder, so I added Disallow: /plugins/ to my robots.txt 3-4 months ago, but /plugins/ URLs still appear in search engines. What can I do to get the /plugins/ folder de-indexed? Is the disallow in robots.txt making it worse, because Google has already indexed the folder and now can't access it to see any changes? How do you solve this?
3) De-indexing a subdomain. I had created a subdomain containing adult content and deleted it completely from my cPanel 3 months ago, but it still appears in search engines. Is there anything else I can do to get it de-indexed?
Thank you in advance for your help!
-
Need help with huge spike in duplicate content and page title errors.
Hi Mozzers, I come asking for help. I have a client who has reported a staggering increase of over 18,000 errors! The errors include duplicate content and duplicate page titles. I think I've found the culprit: the News & Events calendar on the following page: http://www.newmanshs.wa.edu.au/news-events/events/07-2013 Essentially, each day of the week is an individual link, and events stretching over a few days get reported as duplicate content. Do you have any ideas how to fix this issue? Any help is much appreciated. Cheers
Intermediate & Advanced SEO | | bamcreative
-
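A common fix for calendar duplicates like the ones described above (a sketch, assuming the per-day views share an editable template) is a canonical tag on each daily URL pointing at the month view, so the near-identical variants consolidate into one page:

```html
<!-- Sketch: placed in the <head> of each per-day calendar URL, pointing
     crawlers at the month view as the canonical version. -->
<link rel="canonical" href="http://www.newmanshs.wa.edu.au/news-events/events/07-2013" />
```

This keeps the daily links usable for visitors while telling crawlers to report and rank only the month page.
-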
Does using robots.txt to block pages decrease search traffic?
I know you can use robots.txt to tell search engines not to spend their resources crawling certain pages. So, if you have a section of your website that is good content but is never updated, and you want the search engines to index new content faster, would it work to block the good, unchanged content with robots.txt? Would this content lose any search traffic if it were blocked by robots.txt? Does anyone have any available case studies?
Intermediate & Advanced SEO | | nicole.healthline
-
Subdomains - duplicate content - robots.txt
Our corporate site provides MLS data to users, with the end goal of generating leads. Each registered lead is assigned to an agent, essentially in round-robin fashion. However, we also give each agent a domain of their choosing that points to our corporate website. The domain can be whatever they want, but upon loading it is immediately redirected to a subdomain. For example, www.agentsmith.com would be redirected to agentsmith.corporatedomain.com. Finally, any leads generated from agentsmith.easystreetrealty-indy.com are always assigned to Agent Smith instead of the agent pool (by parsing the current host name).
In order to avoid being penalized for duplicate content, any page that is viewed on one of the agent subdomains always has a canonical link pointing to the corporate host name (www.corporatedomain.com). The only content difference between our corporate site and an agent subdomain is the phone number and contact email address, where applicable.
Two questions:
1) Can/should we use robots.txt or robots meta tags to tell crawlers to ignore these subdomains, but obviously not the corporate domain?
2) If the answer to question 1 is yes, would that be better for SEO than leaving it how it is?
Intermediate & Advanced SEO | | EasyStreet