Robots.txt issue for international websites
-
In Google.co.uk, our US-based site (abcd.com) is showing:
A description for this result is not available because of this site's robots.txt – learn more
But the UK website (uk.abcd.com) is working properly. We would like the .com result to disappear completely, if possible. How can we fix it?
Thanks in advance.
-
Can you share any information about your robots.txt?
-
My main problem is with the homepage. Both sites host similar types of products and brands.
You may check the screenshot. Sorry, I had to blank out the text.
Thanks in advance.
-
Is it showing that for every page, or only some pages? If so, which types of pages? What are the contents of your robots.txt file for the US site?
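For reference: that snippet message appears when Google has indexed a URL but robots.txt blocks it from being crawled, so Google can read neither a description nor any noindex tag. Below is a minimal sketch of the usual fix, assuming the goal is to drop the abcd.com pages from the index entirely (hostnames are the placeholders used in the question):

    # robots.txt on abcd.com: stop blocking, so Googlebot can see the noindex below
    User-agent: *
    Disallow:

    <!-- in the <head> of each abcd.com page that should leave the index -->
    <meta name="robots" content="noindex">

A robots.txt block alone keeps the URL indexed without a description, which is exactly the symptom above. If abcd.com should still rank for US searchers, hreflang annotations between abcd.com and uk.abcd.com would be the gentler alternative to full removal.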
Related Questions
-
Can I define that one area of my website is regular news (no subscription) and the other part of the website is news that only subscribers can read?
Hi, I have a client that has a news website. He asked me if he can define one area of his website to be regular news that Google can show in Google News search results (no subscription), while the other part of the website is news that only subscribers can read? Thanks, Roy
Intermediate & Advanced SEO | kadut1
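A sketch of how this is commonly handled, assuming Google's paywalled-content structured data is the route taken: free articles carry ordinary NewsArticle markup, while subscriber-only articles flag the gated section (the .paywall selector below is a hypothetical class name):

    <!-- hypothetical markup for a subscriber-only article -->
    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "NewsArticle",
      "headline": "Example subscriber-only story",
      "isAccessibleForFree": "False",
      "hasPart": {
        "@type": "WebPageElement",
        "isAccessibleForFree": "False",
        "cssSelector": ".paywall"
      }
    }
    </script>

This lets Googlebot fetch the full text without the paywall being treated as cloaking, while the free-news section simply omits isAccessibleForFree.
-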
Robots.txt gone wild
Hi guys, a site we manage, http://hhhhappy.com, received an alert through Webmaster Tools yesterday that it can't be crawled. No changes were made to the site. I don't know a huge amount about robots.txt configuration except that, using Yoast, it is set by default not to crawl the wp-admin folder and nothing else. I checked this against all our other sites and the settings are the same. And yet, 12 hours after the issue, Happy is still not being crawled and meta data is not showing in search results. Any ideas what may have triggered this?
Intermediate & Advanced SEO | wearehappymedia0
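For comparison, a sketch of the WordPress default described above, next to the kind of line that would explain a sudden crawl alert (paths are WordPress defaults, not taken from hhhhappy.com):

    # typical Yoast/WordPress default: only the admin area is blocked
    User-agent: *
    Disallow: /wp-admin/

    # by contrast, a single stray line like this blocks the whole site
    # Disallow: /

Fetching http://hhhhappy.com/robots.txt directly and running it through the robots.txt tester in Webmaster Tools would show which rule Googlebot is applying; note that a server returning 5xx errors for robots.txt can also make Google stop crawling even though the file's contents never changed.
-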
Keyword Stuffing - Ecommerce websites
Hey Mozzers, I'm undertaking a content audit and it's going very well. We have written some better content for the first set of pages; it still needs some improvement, but we have a good base and starting point from which we can make an SEO log and work on it over time. For the content I used the following formula for how many times to include a keyword: Word Count / Length of Keyword (e.g. 600 words / 3-word keyword = 200), then 1-4% of this (2-8 times). This has worked well for me in the past and has been a good base guide. I have run the pages through the Moz optimiser and every single page hit an A for keyword page optimisation. However, many of the pages failed on keyword stuffing, which obviously has high priority. My dilemma is that Moz counts 15 as the cut-off for keyword stuffing; with the written text we have done really well with using the keyword a set number of times, but these pages are product category pages. The keyword in the most extreme cases is listed 7-9 times in the side nav menu and 7-9 times in the product category listings. Take for example ***: it is optimised for thermometers (I know it's a tough single-word keyword, and we have fairly modest aims with it; I'm using it here for example purposes). The word is used a good number of times within the article but is sent through the roof by the links to the subcategories. This page, for example, mentions the keyword 30 times. Can anybody suggest any ways to improve on this? Is how we display the categories in the nav bar and on the page excessive? As always, many thanks!
Intermediate & Advanced SEO | ATP0
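Worked through, the formula described in the question gives:

    600 words / 3-word keyword = 200
    1% of 200 = 2 uses; 4% of 200 = 8 uses
    target: 2-8 mentions of the keyword in the body copy

The stuffing count is page-wide, though: 7-9 nav mentions plus 7-9 listing mentions on top of the body copy is what pushes a page to around 30 against the 15-mention cut-off. Shortening the repeated anchor text in the subcategory links (a hypothetical example: "Digital" rather than "Digital Thermometers" inside the Thermometers category) reduces the total without touching the optimised copy.
-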
International Subdomain Headache
My client set up a separate domain for their international clients, then set up separate subdomains for each country where they're active (so, for example, the original site is xx.com and the global site is xxworldwide.com, with subdomains like mx.xxworldwide.com). They auto-translated a large amount of content and put the translations on those international sites. The idea was to draw in native speakers. Now, I don't think this is a great practice, obviously, and I'm worried that it could hurt their original site (the xx.com in the example above). My concern is that Google will see through the translated text, since it was handled with Google Translate, and penalize both sites. I don't think the canonical tag applies here, since Google recommends a no-follow for auto-translated text, but I've also never dealt with this type of situation before. Anyways, if you made it through all of that, congratulations. My question is whether xx.com is getting any negative effects other than a potential loss of link juice -- and whether there's any legitimate way to present auto-translated text with a few minor changes without incurring a penalty.
Intermediate & Advanced SEO | Ask44435230
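A sketch of the usual containment approach, assuming the machine-translated pages should stay up for users but out of the index until they are human-reviewed (hostnames follow the question's placeholders; the page paths are hypothetical):

    <!-- on each auto-translated page, e.g. on mx.xxworldwide.com -->
    <meta name="robots" content="noindex, follow">

    <!-- once a page is properly localised, drop the noindex and pair the versions -->
    <link rel="alternate" hreflang="es-mx" href="http://mx.xxworldwide.com/pagina.html" />
    <link rel="alternate" hreflang="en-us" href="http://www.xx.com/page.html" />

The conventional view is that content kept out of the index is not evaluated by quality algorithms, so noindexing the raw machine translations addresses the main risk to xx.com described above.
-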
301 issues
Hi, I have this site: www.berenjifamilylaw.com. We did a 301 from the old site: www.bestfamilylawattorney.com to the one above. It's been several weeks now and Google has indexed the new site, but still pulls the old one on search terms like: Los Angeles divorce lawyer. I'm curious, does anyone have experience with this? How long does it take for Google to remove the old site and start serving the new one as a search result? Any ideas or tips would be appreciated. Thanks.
Intermediate & Advanced SEO | mrodriguez14400
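For reference, a minimal sketch of the domain-level 301, assuming an Apache server (an .htaccess file served from the old domain); confirming that the old URLs actually return 301 rather than 302 or 200 is the first check worth making:

    # .htaccess on bestfamilylawattorney.com
    RewriteEngine On
    RewriteCond %{HTTP_HOST} ^(www\.)?bestfamilylawattorney\.com$ [NC]
    RewriteRule ^(.*)$ http://www.berenjifamilylaw.com/$1 [R=301,L]

With a correct 301 in place, Google commonly keeps showing the old domain for several weeks to a few months while it recrawls; filing a Change of Address for the old site in Webmaster Tools typically shortens that window.
-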
Using Meta Header vs Robots.txt
Hey Mozzers, I am working on a site that has search-friendly parameters for its faceted navigation; however, this makes it difficult to identify the parameters in a robots.txt file. I know that using the robots.txt file is highly recommended and powerful, but I am not sure how to do this when facets use common words such as sizes. For example, a filtered URL may look like www.website.com/category/brand/small.html. Brand and size are both facets. Brand is a great filter, and size is very relevant for shoppers, but many products include "small" in the URL, so it is tough to isolate that filter in the robots.txt (I hope that makes sense). I am able to identify problematic pages and edit the meta tags in the head, so I can add a robots noindex tag on any page that is causing these duplicate issues. My question is, is this a good idea? I want bots to crawl the facets, but indexing all of the facets causes duplicate issues. Thoughts?
Intermediate & Advanced SEO | evan890
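A sketch of the two options being weighed. Isolating the size facet in robots.txt needs Google's wildcard and end-anchor syntax, while the per-page meta tag sidesteps pattern matching entirely (URL patterns follow the example in the question):

    # robots.txt: * spans path segments and $ anchors the match to the end of
    # the URL, so product URLs that merely contain "small" are not caught
    User-agent: *
    Disallow: /*/small.html$

    <!-- per-page alternative, placed on each size-facet page -->
    <meta name="robots" content="noindex, follow">

One caveat: the two approaches shouldn't be combined on the same URLs, because a robots.txt block stops Googlebot from ever fetching the page and seeing the meta tag. noindex, follow also matches the stated goal of letting bots crawl the facets while keeping them out of the index.
-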
Blocking poor quality content areas with robots.txt
I found an interesting discussion on Search Engine Roundtable where Barry Schwartz and others were discussing using robots.txt to block low-quality content areas affected by Panda. http://www.seroundtable.com/google-farmer-advice-13090.html The article is a bit dated. I was wondering what current opinions are on this. We have some dynamically generated content pages which we tried to improve after Panda. Resources have been limited and, alas, they are still there. Until we can officially remove them, I thought it may be a good idea to just block the entire directory. I would also remove them from my sitemaps and resubmit. There are links coming in, but I could redirect the important ones (I was going to do that anyway). Thoughts?
Intermediate & Advanced SEO | Eric_edvisors0
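A minimal sketch of the interim block described above (/dynamic/ is a hypothetical stand-in for the affected directory):

    # robots.txt: block the thin, dynamically generated section until it can be removed
    User-agent: *
    Disallow: /dynamic/

Robots.txt stops crawling rather than indexing, so URLs with inbound links can linger in results as bare listings; the logic in the linked discussion is that once the thin pages can no longer be fetched, their content stops being evaluated. Dropping them from the sitemaps and redirecting the important inbound links, as planned, is consistent with that.
-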
Does using robots.txt to block pages decrease search traffic?
I know you can use robots.txt to tell search engines not to spend their resources crawling certain pages. So, if you have a section of your website that is good content but is never updated, and you want the search engines to index new content faster, would it work to block the good, unchanged content with robots.txt? Would this content lose any search traffic if it were blocked by robots.txt? Does anyone have any available case studies?
Intermediate & Advanced SEO | nicole.healthline0
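A sketch of the setup being asked about, with /archive/ as a hypothetical path for the good-but-static section:

    User-agent: *
    Disallow: /archive/
    Sitemap: http://www.example.com/sitemap.xml

The trade-off is real: previously indexed pages that get blocked generally lose rankings and traffic over time, because Google can no longer read their content. Crawl scheduling is largely automatic, so listing fresh URLs in the sitemap is usually a safer lever for faster indexing than blocking good content.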