How to use robots.txt to block areas on a page?
-
Hi,
Across the category/product pages on our site there is an archives/shipping info section, and the text is always the same. Would this be treated as duplicate content and be harmful for SEO?
How can I alter robots.txt to tell Google not to crawl that particular text?
Thanks for any advice!
-
Thanks for the info above. I think I'll find out if I can cut the text and try using a popup link instead.
-
Hi Laura
I have not used lazy loading except with images, but I did some reading around and it might be a solution. There is a large section in Google Webmasters that talks about how to make AJAX readable by a crawler/bot, so it is clearly not readable by default (Google Webmaster on AJAX crawling).
The other option is to provide a summary on the product page for shipping info and link to a larger shipping info page (as suggested earlier) and get it to open on a new page/tab. At least this keeps the product page open too.
(Note: good UX practice recommends you tell users that a link will open a new page - this could be as simple as using the anchor text: "More Detailed Shipping Information (opens new page)".)
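For example, a summary-plus-link pattern along these lines (the URL and copy are just placeholders) keeps the product page open while pointing to the full shipping page:

```html
<!-- Brief shipping summary on the product page (placeholder copy) -->
<p>Most orders ship within 2-3 business days. Free shipping over $50.</p>

<!-- target="_blank" opens a new tab; rel="noopener" is good practice with it -->
<a href="/shipping-info" target="_blank" rel="noopener">
  More Detailed Shipping Information (opens new page)
</a>
```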
cheers
Neil
-
Here is a tip that I use for my clients and would recommend. Most CMS/ecommerce platforms allow you to put a category description on the page. But when the category paginates, they reuse the same category description with just different products on each page (some use a query string on the URL, others use a shebang, others use other things).
What I recommend to my clients to avoid any thin-content issues is to point the canonical URL of all the paginated pages back to the first category page. At the same time I add a noindex, follow tag to the header of the paginated pages. This is counter to what a lot of people do, I think, but I do it because of thin content - you also don't want your page 3 results cannibalizing your main category landing page results. Since no CMS that I know of lets you specify a different category description for each pagination of a category, it seems like the only real choice. It also means you do not really need to add rel=next and rel=prev to the paginated pages.
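As a sketch, the head of a paginated page (page 2 onward) under this approach would look something like this - the URLs here are hypothetical:

```html
<!-- On a paginated category page such as /category/widgets?page=2 (hypothetical) -->
<head>
  <!-- Point every paginated page back to the first category page -->
  <link rel="canonical" href="https://www.example.com/category/widgets" />
  <!-- Keep this page out of the index, but let bots follow links to the products -->
  <meta name="robots" content="noindex, follow" />
</head>
```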
-
Thanks, the info above is quite detailed.
We are not a shipping company; the text is just there to inform visitors. The shipping info is quite long because we want to provide as much as we can, to avoid customers leaving the current page to search for it.
-
Hi Laura
I am not sure that you can use robots.txt to prevent a search engine bot from crawling part of a page - robots.txt works at the URL level, so it is normally used to exclude a whole page or directory.
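To illustrate the URL-level nature of robots.txt, here is a small sketch using Python's standard-library robots.txt parser - the /shipping-info/ path is just a made-up example:

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt: block a whole /shipping-info/ section
rules = [
    "User-agent: *",
    "Disallow: /shipping-info/",
]

rp = RobotFileParser()
rp.parse(rules)

# The rule applies per URL: whole pages are blocked or allowed,
# never individual blocks of text within a page.
print(rp.can_fetch("*", "https://example.com/shipping-info/"))   # False
print(rp.can_fetch("*", "https://example.com/products/widget"))  # True
```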
The effect of the duplicate content on your search engine optimisation depends in part on how extensive the duplication is. In many cases it seems that Google won't penalise the duplicate content (it understands that some content will of necessity be duplicated) - see this video by Matt Cutts from Google.
Duplicate Content is Small (Short Paragraph)
From your question it sounds like you are talking about part of a page, and a relatively small part - I assume you are not a shipping company, so the shipping info would be a small part of the page.
In that case it may not affect your search engine optimisation at all (assuming you are not trying to rank for the shipping info), as long as the content on the rest of the page is unique or different from other pages on the site.
Duplicate Content is Large (but not a page)
If the shipping info is substantial (say a couple of paragraphs or half the content on the page) then Google suggests you create a separate page with the substantial info on it and use a brief summary on other pages with a link to the separate page:
- Minimize boilerplate repetition: For instance, instead of including lengthy copyright text on the bottom of every page, include a very brief summary and then link to a page with more details. In addition, you can use the Parameter Handling tool to specify how you would like Google to treat URL parameters.
(from Google Webmaster: Duplicate Content)
Duplicated Pages
Much of the discussion about duplicated content is really about whole pages of duplicated content. The risk with these pages is that search engines may not know which one to rank (or, more to the point, may rank the one you don't want to rank). This is where you might use a rel=canonical tag or a 301 redirect to direct, or at least hint to, the search engine which page to use.
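For illustration (all URLs hypothetical): the canonical hint is a tag in the duplicate page's head, while a 301 is a hard redirect done at the server, e.g. in an Apache .htaccess:

```html
<!-- In the head of the duplicate page: hint at the preferred URL -->
<link rel="canonical" href="https://www.example.com/preferred-page" />
```

```apache
# .htaccess (Apache): permanently redirect the duplicate to the preferred URL
Redirect 301 /duplicate-page https://www.example.com/preferred-page
```

The canonical is a hint that search engines usually honour; the 301 removes the choice entirely, so use it only when visitors no longer need the duplicate URL.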
Moz has a good article on Duplicate Content.
All the best
Neil
-
Hiya,
First off the main answer is here - http://moz.com/learn/seo/robotstxt
An alternative solution might be the canonical tag, meaning you keep all the link juice rather than letting it fall off the radar. I wouldn't be overly worried about duplicate content; it's not a big bad wolf that will annihilate your website.
The best idea if you're worried about duplicate content is the canonical tag: it has the benefit of keeping link juice, whereas robots.txt tends to mean you lose some. One thing to remember, though, is that the canonical tag means the pages will not be indexed (same as a robots tag in the end), so if they are currently ranking (or getting page views) that is something to keep in mind.
hope that helps.
Good luck.
-
Google is smart enough to recognize what it is; it won't get you penalized for duplicate content.