Exclude root URL in robots.txt?
-
Hi,
I have the following setup:
www.example.com/nl
www.example.com/de
www.example.com/uk
etc.
www.example.com is 301'ed to www.example.com/nl. But now www.example.com is ranking instead of www.example.com/nl.
Should I block www.example.com in robots.txt so only the subfolders are being ranked?
Or will I lose my ranking by doing this?
-
Yes, when clicking the link in Google you get redirected.
I will wait some time. Thank you.
-
The site just launched? It sounds like I am right; you just need to give Google some time to drop the page from the index.
When you find the homepage in the index, and you click the link, do you get redirected? If so, Google will eventually drop it.
-
Thanks for answering Philip,
Yes, I really used a 301.
I used it in .htaccess.
And I set this up before launching the site, so I should be good from the beginning. The site was launched last Friday.
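For reference, a root-to-subfolder 301 in .htaccess usually looks something like this minimal mod_rewrite sketch (the exact rule depends on the server setup, so treat it as illustrative rather than the actual config):

RewriteEngine On
# Permanently (301) redirect only the bare root URL to the /nl/ folder
RewriteRule ^$ /nl/ [R=301,L]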
When searching for the brand name it shows up as example.com.
When searching for my main keywords it shows example.com/ned/landing-page.
-
If you put Disallow: / in your robots.txt file, you will tell bots not to crawl the homepage plus ALL interior pages. You'd be shooting yourself in the foot (or head, really).
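For illustration, that blanket rule would look like this, and it applies to every path on the host, not just the root URL:

User-agent: *
# Blocks crawling of the homepage AND everything beneath it (/nl, /de, /uk, ...)
Disallow: /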
Are you sure the redirect is set up properly? Is it definitely a 301 redirect, or maybe a 302 (temporary)? How long ago did you implement the redirect? If the 301 redirect is set up properly and you're still seeing the homepage in the index, you might just need to wait for it to drop out.
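One quick way to check which status code the redirect actually returns (assuming you can run curl from a terminal; swap in your real domain):

curl -I http://www.example.com/
# Look for "HTTP/1.1 301 Moved Permanently" plus a Location: header pointing at /nl
# A 302 here would help explain why Google keeps the root URL in the index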
Related Questions
-
Is a sitemap required in my robots.txt?
Hi, I know that linking your sitemap from your robots.txt file is a good practice. OK, but... may I just send my sitemap to Search Console and forget about adding it to my robots.txt? That's my situation: one multilang platform, which means two sets of pages, one for each lang, of course. But my CMS (Magento) only allows me to have one robots.txt file. So, again: may I have a robots.txt file with no sitemap AND not suffer any potential SEO loss? Thanks in advance, Juan Vicente Mañanas Abad
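For context, the robots.txt reference in question is a single line like the sketch below (the URL is a placeholder); submitting the sitemap directly in Search Console is widely treated as a valid alternative:

# Optional: point crawlers at the XML sitemap
Sitemap: https://www.example.com/sitemap.xml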
-
Blocked jQuery in robots.txt, any SEO impact?
I've heard that Google is now indexing links and other content available in JavaScript and jQuery. My Webmaster Tools is showing that some links are blocked in the robots.txt of jQuery. Sorry, I'm not a developer or designer. I want to know: is there any impact of this on my SEO? And also, how can I unblock it for the robots? Check this screenshot: http://i.imgur.com/3VDWikC.png
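If the blocked files live on your own domain, one common fix is an explicit Allow rule in your robots.txt, sketched below; if they live on a third-party host (e.g. a jQuery CDN), that host's robots.txt is outside your control:

User-agent: *
# Let crawlers fetch script and style files so pages can be rendered fully
Allow: /*.js$
Allow: /*.css$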
-
A few misc Webmaster Tools questions & robots.txt etc.
Hi, I have a few general misc questions re robots.txt & GWT:
1) In the robots.txt file, what do the below lines block? Internal search?
Disallow: /?
Disallow: /*?
2) Also, the site's feeds are blocked in robots.txt. Why would you want to block a site's feeds?
3) What's the best way to deal with the below:
- an old removed page that's returning a 500 response code
- a soft 404 for an old removed page that has no current replacement
- old removed pages returning a 404
The old pages didn't have any authority or inbound links, hence is it best/OK to simply create a URL removal request in GWT? Cheers, Dan
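For reference, here is roughly how those two patterns behave under wildcard-aware matching such as Googlebot's (a sketch, not a definitive spec reading):

# Blocks URLs whose path starts with "/?", e.g. example.com/?s=term (typically homepage internal search)
Disallow: /?
# Blocks any URL containing a "?" anywhere, i.e. all parameterized URLs
Disallow: /*?
-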
Would these be considered dynamic URLs?
Hi, I have a (brand) new client (outdoor recreation), and its site links to many different lodges. It's built in WordPress (Pagelines), and the partner pages link with URLs like http://www.clientsite/?partners=partner-name. Although they do have the "?" in there, each has only a single parameter. Google is indexing the URLs, and I do plan to increase the amount of on-page content for each. Yet I'm weighing the risk/reward of rewriting all of these URLs.
-
Changing .html to .asp in URLs
Hi Mozzers, I have a question. The webmaster of a client of mine needs to make changes to some files which will affect the URLs. Essentially everything is staying the same, but the end of the URL will change from .html to .asp. This is because the site will be dynamically loading content, perhaps from a database (i.e. latest news to come from their blog etc.). In order to do this we would need to change the filenames of the whole website (i.e. personnel.html would become personnel.asp). Changing URLs can harm indexation, but with only a small change to the end, would Google drop these pages? A 301 redirect is not possible from old URL to new. What impact would this have on rankings? Thanks, Gareth
-
How to change URL of RSS Feed?
Hi, There are some websites that keep on scraping my content. I have blocked them already from accessing my website using .htaccess, but they still get my content via the RSS feed. I have tried delaying the RSS feed, but I think this affected Google rankings. My question is: is there a way to change the URL of my RSS feed? From http://www.mysite.com/feed to http://www.mysite.com/feed2
-
Client accidentally blocked entire site with robots.txt for a week
Our client was having a design firm do some website development work for them. The work was done on a staging server that was blocked with a robots.txt to prevent duplicate content issues. Unfortunately, when the design firm made the changes live, they also moved over the robots.txt file, which blocked the good, live site from search for a full week. We saw the error (!) as soon as the latest crawl report came in. The error has been corrected, but... Does anyone have any experience with a snafu like this? Any idea how long it will take for the damage to be reversed and the site to get back in the good graces of the search engines? Are there any steps we should take in the meantime that would help to rectify the situation more quickly? Thanks for all of your help.
-
Trying to reduce pages crawled to within 10K limit via robots.txt
Our site has far too many pages for our 10K-page PRO account, most of which are not SEO worthy. In fact, only about 2,000 pages qualify for SEO value. Limitations of the store software only permit me to use robots.txt to sculpt the rogerbot site crawl. However, I am having trouble getting this to work. Our biggest problem is the 35K individual product pages and the related shopping cart links (at least another 35K); these aren't needed, as they duplicate the SEO-worthy content in the product category pages. The signature of a product page is that it is contained within a folder ending in -p. So I made the following addition to robots.txt:
User-agent: rogerbot
Disallow: /-p/
However, the latest crawl results show the 10K limit is still being exceeded. I went to Crawl Diagnostics and clicked on Export Latest Crawl to CSV. To my dismay, I saw the report was overflowing with product page links, e.g. www.aspenfasteners.com/3-Star-tm-Bulbing-Type-Blind-Rivets-Anodized-p/rv006-316x039354-coan.htm. The value for the column "Search Engine blocked by robots.txt" = FALSE; does this mean blocked for all search engines? Then it's correct. If it means "blocked for rogerbot", then it shouldn't even be in the report, as the report seems to only contain 10K pages. Any thoughts or hints on trying to attain my goal would REALLY be appreciated, I've been trying for weeks now. Honestly, virtual beers for everyone! Carlo
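One possible explanation, going only by the example URL above: the product folders end in -p (e.g. .../3-Star-tm-Bulbing-Type-Blind-Rivets-Anodized-p/) rather than being literally named -p, so the prefix rule Disallow: /-p/ never matches them. Assuming rogerbot honors * wildcards the way Googlebot does, a pattern like this sketch might be needed instead:

User-agent: rogerbot
# Match any folder whose name merely ends in -p, anywhere in the path
Disallow: /*-p/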