Robots.txt issue for international websites
-
In Google.co.uk, our US-based site (abcd.com) is showing:
A description for this result is not available because of this site's robots.txt – learn more
But the UK website (uk.abcd.com) is working properly. We would like the .com result to disappear entirely from Google.co.uk, if possible. How can we fix it?
Thanks in advance.
-
Can you share any information about your robots.txt?
-
My main problem is with the homepage. Both sites host similar types of products and brands.
You may check the screenshot. Sorry, I had to blank out the text.
Thanks in advance.
-
Is it showing that for every page, or only some pages? If so, which types of pages? What are the contents of your robots.txt file for the US site?
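The snippet message in the question is the classic symptom of a crawl block. The actual robots.txt at abcd.com isn't shown here, but the behaviour can be sketched with Python's standard urllib.robotparser and a hypothetical blanket Disallow:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt for the US site -- the real file's contents are unknown.
hypothetical_robots_txt = [
    "User-agent: *",
    "Disallow: /",   # blocks crawling of every URL on the host
]

parser = RobotFileParser()
parser.parse(hypothetical_robots_txt)

# Googlebot may not fetch the homepage, so Google cannot build a snippet,
# which yields "A description for this result is not available ...".
print(parser.can_fetch("Googlebot", "https://abcd.com/"))  # False
```

Keep in mind that robots.txt blocks crawling, not indexing: a blocked URL can still appear in results via external links. To remove the .com result entirely, the page has to be crawlable and serve a noindex directive (meta tag or X-Robots-Tag header), or be removed through Search Console.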
-
Related Questions
-
Shopify Website Page Indexing issue
Hi, I am working on an eCommerce website on Shopify.
Intermediate & Advanced SEO | Bhisshaun
When I tried indexing my newly created service pages, they did not get indexed on Google.
I also tried manually indexing each page and submitted a sitemap, but the issue still isn't resolved. Thanks.
-
"noindex, follow" or "robots.txt" for thin content pages
Does anyone have any testing evidence on which is better to use for pages with thin content that are nevertheless important to keep on a website? I am referring to content shared across multiple websites (e-commerce, real estate, etc.). Imagine a website with 300 high-quality pages indexed and 5,000 thin product-type pages that would not generate relevant search traffic. The question is: does the interlinking value achieved by "noindex, follow" outweigh the negative of Google having to crawl all those "noindex" pages? With robots.txt, Google's crawling focuses on just the important pages that are indexed, which may give rankings a boost. Any experiments with insight into this would be great. I do get the story about "make the pages unique", "get customer reviews and comments", etc., but the above is the important question here.
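For reference, the "noindex, follow" option is a per-page directive rather than a robots.txt rule; a minimal sketch (the placement described below is a generic example, not the asker's site):

```html
<!-- In the <head> of each thin product page. The page must stay crawlable:
     if robots.txt blocks the URL, search engines never see this tag, so the
     two approaches are mutually exclusive on any given URL. -->
<meta name="robots" content="noindex, follow">
```

The robots.txt alternative from the question would instead be a Disallow rule covering the thin pages' paths, which saves crawl resources but leaves the URLs eligible to appear in results without snippets.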
Intermediate & Advanced SEO | khi5
-
Using a folder blocked by robots.txt before uploading to an indexed folder - is that OK?
I have a folder "testing" within my domain which is blocked in robots.txt. My web developers use the "testing" folder to stage new content before uploading it to an indexed folder. So content is uploaded to the "testing" folder first (which is blocked by robots.txt) and later uploaded to an indexed folder, while a copy is kept permanently in the "testing" folder. In fact, my entire website's content is mirrored within "testing", with the same URL structure as the indexed pages except that it starts with the "testing/" folder. Question: even though the "testing" folder will not be indexed by search engines, is there a chance they notice that the content appears first in the "testing" folder, so the indexed folder is not guaranteed to get credit for the content, despite the "testing" folder being blocked by robots.txt? Would it be better to password-protect the "testing" folder? Thx
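Password protection removes the ambiguity entirely: search engines cannot fetch the staging content at all, whether or not robots.txt mentions it. A minimal sketch using Apache Basic Auth (the realm name and file paths are placeholders):

```apache
# .htaccess inside the /testing/ folder -- names and paths are placeholders
AuthType Basic
AuthName "Staging area"
AuthUserFile /home/user/.htpasswd
Require valid-user
```

The credentials file referenced by AuthUserFile is created with Apache's htpasswd utility, e.g. htpasswd -c /home/user/.htpasswd devuser.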
Intermediate & Advanced SEO | khi5
-
Robots.txt Syntax
I have been having a hard time finding any decent information on robots.txt syntax written in the last few years, and I just want to verify some things as a review for myself. I have many occasions where I need to block particular directories, parameters, and parameter values in URLs. I want to make sure that I am doing this in the most efficient way possible and thought you guys could help. So let's say I want to block a particular directory called "this"; this would be an example URL:

www.domain.com/folder1/folder2/this/file.html
Intermediate & Advanced SEO | DRSearchEngOpt
or
www.domain.com/folder1/this/folder2/file.html

In order to block any URL that contains this folder anywhere in the path, I would use:

User-agent: *
Disallow: /this/

Now let's say I have a parameter "that" I want to block, and when it shows up in the URL sometimes it is the first parameter and sometimes it isn't. Would it look like this?

User-agent: *
Disallow: ?that=
Disallow: &that=

What about if there is only one value I want to block for "that", and the value is "NotThisGuy"?

User-agent: *
Disallow: ?that=NotThisGuy
Disallow: &that=NotThisGuy

My big questions here are: what is the most efficient way to block a particular parameter, and to block a particular parameter value? Is there a more efficient way to deal with ? and & when the parameter and value are either first or later in the URL? Secondly, is there a list somewhere of all the syntax and meanings that can be used in a robots.txt file? Thanks!
-
First Website
Hi Everyone, I have just published my first website and was wondering if anybody would like to help me with some hints and tips. This is my first time branching into SEO and I could really do with some help. Any feedback would be greatly appreciated. The site address is www.theremovalistsguide.com.au, which targets the furniture removal industry in Australia. Thanks for your help.
Intermediate & Advanced SEO | RobSchofield
-
Internal Site Structure Question (URL Formation and Internal Link Design)
Hi, I have an e-commerce website with an articles section. There is an articles.aspx file, reachable from the top menu, that holds links to all of the articles as follows: xxx.com/articles/article1.aspx
Intermediate & Advanced SEO | BeytzNet
xxx.com/articles/article2.aspx I want to add several new articles under a new sections, for example a complete set of articles under the title of "buying guide" and the question is what would be the best way? I was thinking of adding a "computers-buying-guides.aspx" accessible from the top menu / footer and from it linking to: xxx.com/computer-buying-ghudes/what-to-check-prior-to-buying-a-laptop.aspx
xxx.com/computer-buying-ghudes/weight-vs-performance.aspx
etc. Any thoughts / recommendations? Thanks0 -
Production and Priority Issue for SEO and Website Usability
I am a NOVICE. My website is about 4 months old. My developer/programmer only has 4-6 hours of work a week, so it is going to take 4 months to finish two weeks of work. So I have to prioritize the things that are best for SEO (our architecture is PHP, Apache and Zend). If you are interested, I would be curious how you would prioritize some or all of these - or at least as many as you can until you get bored.

1. Optimizing cart/conversion - 7 hrs (extremely low conversion rates)
Intermediate & Advanced SEO | Boodreaux
2. Optimizing speed for usability - 10+ hrs (very slow initial load time of 10-14 seconds)
3. Filling in all Titles and Metadata - 2 hrs
4. Contact persistence with cookie...enter data only once. - 2 hrs
5. Social panels for sharing content - 3 hrs
6. Custom notifications for those who opt in. for updates - 5 hrs
7. Shorten 12 key URLs and optimize them with keywords - 3 hrs (I rank this very high)
8. Install Wordpress Blog - 5-10 hrs
9. RSS Feed - 5 hrs ( Run a feed real time on side of page)
10. Create Content Management System for me - 20 hrs (So I can make changes)
11. Keywords for H-1 Tags - 1 hr
12. Alt tags for images - 1 hr
13. Use of bold /italics - 2 hrs
14. Canonical tag in head - 3 hrs

Any expert advice will be greatly appreciated. Boodreaux

PS: After studying SEO for 1 month, I think the priorities should be #7, #3, #2, #1, #5 (on landing pages), #11, #12, #6, #4, #13, #14, #8, #9, #10.
-
Does using robots.txt to block pages decrease search traffic?
I know you can use robots.txt to tell search engines not to spend their resources crawling certain pages. So, if you have a section of your website that is good content but is never updated, and you want the search engines to index new content faster, would it work to block the good, unchanged content with robots.txt? Would this content lose any search traffic if it were blocked by robots.txt? Does anyone have any available case studies?
Intermediate & Advanced SEO | nicole.healthline