To block with robots.txt or canonicalize?
-
I'm working with an apartment company that has a large number of communities across the US. I'm running into duplicate content issues: each community has pages such as "amenities" or "community-programs" that are nearly identical (if not exactly identical) across all communities.
I'm wondering if there are any thoughts on the best way to tackle this. The two scenarios I've come up with so far are:
Is it better for me to select the community page with the most authority and put a canonical tag on all the other community pages, pointing at that authoritative page?
or
Should I just remove the directory altogether via robots.txt to help keep the site lean and keep low-quality content from impacting the site from a Panda perspective?
Is there an alternative I'm missing?
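For illustration, the two options might look like this in markup/config form (the domain and paths here are hypothetical):

```html
<!-- Option 1: on every duplicate community page, a canonical tag
     pointing at the single most authoritative version -->
<link rel="canonical" href="https://www.example.com/communities/flagship/amenities" />
```

```
# Option 2: robots.txt at the site root, blocking the directory outright
User-agent: *
Disallow: /community-programs/
```

One difference worth noting: a robots.txt disallow only prevents crawling, while a canonical consolidates the duplicate pages' signals onto the chosen URL.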
-
I think the canonical idea is better than blocking the pages altogether. Depending on how the site is laid out, you could also try making each page more specific to the location it covers: add header tags with the location information, and include that information in the page title and meta description as well. If it's not too time-consuming, I'd try to make those pages more unique, especially since you might be getting searches based on a location. Location-specific pages may help in that regard.
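A rough sketch of what this answer suggests for the head of each community page; the community name and city here are made up for illustration:

```html
<!-- Hypothetical "amenities" page: the location appears in the
     title, meta description, and main heading -->
<title>Amenities at Oakwood Commons Apartments | Austin, TX</title>
<meta name="description" content="Pool, fitness center, and dog park at Oakwood Commons Apartments in Austin, TX." />
<h1>Amenities at Oakwood Commons, Austin</h1>
```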
Related Questions
-
Robots.txt blocking Moz
Moz is reporting that the robots.txt file is blocking it from crawling one of our websites, but as far as we can see this file is exactly the same as the robots.txt files on the other websites that Moz is crawling without problems. We have never come up against this before, even with this site. Our stats show Rogerbot attempting to crawl our site, but it receives a 404 error. Can anyone enlighten us as to the problem, please? http://www.wychwoodflooring.com -Christina
-
C-Block domains OSE
Hi all, a quick question regarding C-block domains. OSE tells me we have 70 C-block domains with a total of 130 root domains. Is it telling us that 70 of the root domains are C-blocks? That seems near impossible for us. Are C-blocks listed as root domains, or just as links?
-
Blocked by Meta Robots.
Hi, I get this warning in my reporting: "Blocked by Meta Robots - This page is being kept out of the search engine indexes by meta-robots." What does that mean, and how do I solve it if I'm using WordPress as my website engine? Also, regarding rel=canonical: on which page should I put the tag, the original page or the copy page? Thanks for all of your answers; it will mean a lot.
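For reference, the warning refers to the standard robots meta tag; a page carrying this tag in its `<head>` is kept out of search engine indexes while its links are still followed:

```html
<meta name="robots" content="noindex,follow">
```

In WordPress this tag is usually controlled by the "Discourage search engines from indexing this site" reading setting or by an SEO plugin, rather than edited by hand.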
-
Does SEOmoz recognize duplicated URLs blocked by robots.txt?
Hi there: just a newbie question... I found some duplicated URLs in the "SEOmoz Crawl Diagnostics reports" that should not be there; they are intended to be blocked by the site's robots.txt file. Here is an example URL (Joomla + VirtueMart structure): http://www.domain.com/component/users/?view=registration and here is the blocking content in the robots.txt file: User-agent: * Disallow: /components/ The question is: will this kind of duplicated-URL error be removed from the error list automatically in the future? Should I keep track of which errors should not really be in the error list? What is the best way to handle this kind of error? Thanks and best regards, Franky
-
Does Rogerbot respect the robots.txt file for wildcards?
Hi All, Our robots.txt file has wildcards in it, which Googlebot recognizes. Can anyone tell me whether or not Rogerbot recognizes wildcards in the robots.txt file? We've done a Rogerbot site crawl since updating the robots.txt file and the pages that are set to disallow using the wildcards are still showing. BTW, Googlebot is not crawling these pages according to Webmaster Tools. Thanks in advance, Robert
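For reference, the wildcard syntax that Googlebot documents is `*` for any sequence of characters and `$` to anchor the end of a URL; a sketch with hypothetical paths:

```
User-agent: *
Disallow: /*?sessionid=   # any URL containing a sessionid parameter
Disallow: /*.pdf$         # any URL ending in .pdf
```

These wildcards are extensions to the original robots.txt standard, so whether a given crawler honors them varies by crawler.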
-
Linking C-Blocks - SEOmoz says it's a good thing?
In the competitive analysis, one competitor has more linking C-blocks, and SEOmoz puts a tick by it, almost as if that's a better thing. Surely links from sites with the same administrative relationship are not going to help you as much from a linking point of view?
-
Does the SEOmoz weekly crawl, when it highlights a missing meta description tag, take into account a meta robots noindex,follow tag on the pages it flags?
The weekly crawl report is telling me that there are pages with missing meta description tags, yet I've implemented meta robots tags to 'noindex,follow' those pages, which is visible in their page source. As far as Google is concerned, surely this won't be a problem, since Google is being instructed NOT to consider these specific pages for indexing. I am assuming that the weekly SEOmoz crawl simply throws the missing-meta-description findings into its report without observing that the URLs in question contain the meta robots 'noindex,follow' tag? I'd appreciate it if you could clarify whether this is the case. It would help me understand that (at least in terms of my efforts towards Google) your crawl doesn't observe the meta robots tag instruction, hence the report flagging the discrepancy.
-
Local search block seems to mess up ranking calculations!
Time and time again, SEOmoz comes out with a new rankings report and tells me that I've gone up or down in the rankings by 9, and I get really excited. Then I go look at the search results, and once again we are in the exact same position! It seems like sometimes SEOmoz decides to count the local search block, and other times it decides not to. Is there any way to fix this? Making it always count the local search block would be preferred.