To block with robots.txt or canonicalize?
-
I'm working with an apartment community company that has a large number of communities across the US. I'm running into duplicate content issues: each community has pages such as "amenities" or "community-programs" that are nearly identical (if not exactly identical) across all communities.
I'm wondering if there are any thoughts on the best way to tackle this. The two scenarios I've come up with so far are:
Is it better for me to select the community page with the most authority and put a canonical on all other community pages pointing to that authoritative page?
or
Should I just remove the directory altogether via robots.txt, to keep the site lean and keep low-quality content from hurting the site from a Panda perspective?
Is there an alternative I'm missing?
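For illustration, the two options would look roughly like this (the URLs are made up):

Option 1, a canonical tag in the head of each duplicate community page, pointing at the chosen authoritative page:
<link rel="canonical" href="https://example.com/community-a/amenities" />

Option 2, a robots.txt block, assuming the duplicated pages share a predictable path (Google supports the * wildcard in Disallow patterns):
User-agent: *
Disallow: /*/amenities

One tradeoff worth noting: a canonical consolidates signals from the duplicates onto one URL, while a robots.txt block only stops crawling and passes nothing along.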
-
I think the canonical idea is better than blocking the pages altogether. Depending on how the site is laid out, you could also try to make the pages more specific to the location being discussed: add header tags with the location information, and work that info into the page title and meta description as well. If it's not too time-consuming, I'd try to make those pages more unique, especially since you might be getting searches based on a location; location-specific pages may help in that regard.
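As a concrete sketch of that suggestion (the community name and city are invented for illustration), each community's amenities page might get a localized title, meta description, and header:

<title>Amenities at Maple Grove Apartments | Austin, TX</title>
<meta name="description" content="Explore the pool, fitness center, and resident programs at Maple Grove Apartments in Austin, TX." />
<h1>Amenities at Maple Grove Apartments in Austin</h1>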
Related Questions
-
Same linking c-blocks trend as competitor
I noticed in our competitive link report that our number of linking C-blocks has risen and fallen in exactly the same pattern as one of our competitors'. Is there a reason why this would happen?
-
Should I block .ashx files from being indexed?
I got a crawl issue saying that 82% of the site's pages have missing title tags. All of these pages are .ashx files (4,400 of them). Would it be better to remove all of these files from Google?
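If the goal is just to keep crawlers off those handlers, one hedged option is a wildcard robots.txt rule (Google supports * and $ in Disallow patterns):

User-agent: *
Disallow: /*.ashx$

Worth noting, though: blocking crawling alone doesn't remove URLs that are already indexed; a noindex signal, such as an X-Robots-Tag: noindex response header on those files, is what keeps them out of the index.
-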
Blocked by Meta Robots
Hi, I get this warning in my reporting: "Blocked by Meta Robots - This page is being kept out of the search engine indexes by meta-robots." What does that mean, and how do I fix it if I'm using WordPress as my website engine? Also, for rel=canonical, on which page should I put the tag: the original page or the copy? Thanks for all of your answers; it means a lot.
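For reference, the warning refers to a tag like the first line below, and the rel=canonical tag goes in the head of the copy page, pointing at the original (the URL here is invented):

<meta name="robots" content="noindex, nofollow" />
<link rel="canonical" href="https://example.com/original-page/" />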
-
I want to create a report of only the duplicate content pages as a CSV file, so I can write a script to canonicalize them.
I'd get something like: http://example.com/page1, http://example.com/page2, http://example.com/page3, http://example.com/page4. Right now I have to open each page under "Issue: Duplicate Page Content", and this takes a lot of time. The same goes for duplicate page titles.
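As a sketch of the kind of script that could consume such a report (the file name and the two-column layout, duplicate URL then canonical target, are assumptions), something like this would print the link tag to place in each duplicate page's head:

import csv

# Assumed layout of duplicates.csv: duplicate_url, canonical_url
with open("duplicates.csv", newline="") as f:
    for duplicate_url, canonical_url in csv.reader(f):
        # Emit the canonical tag to paste into the duplicate page's <head>
        print(f'{duplicate_url}: <link rel="canonical" href="{canonical_url}" />')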
-
Why does the SEOMoz crawler ignore robots.txt?
The SEOMoz crawler ignores robots.txt. It also "indexes" pages marked as noindex, which means it fills up the reports with things that don't matter. Is there any way to stop it from doing that?
-
Robots review
Anything in this that would have caused Rogerbot to stop indexing my site? It only saw 34 of 5,000+ pages on the last pass; it had no problem seeing the whole site before.
User-agent: Rogerbot
Disallow: /default.aspx?*
// Keep from crawling the CMS URLs (default.aspx?Tabid=234); the real home page is home.aspx
Disallow: /ctl/
// Keep from indexing the admin controls
Disallow: ArticleAdmin
// Keep from indexing the article admin page
Disallow: articleadmin
// Same in lower case
Disallow: /images/
// Keep from indexing CMS images
Disallow: captcha
// Keep from indexing the captcha image, which appears to crawlers as a page
General rules, lacking wildcards:
User-agent: *
Disallow: /default.aspx
Disallow: /images/
Disallow: /DesktopModules/DnnForge - NewsArticles/Controls/ImageChallenge.captcha.aspx
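One hedged observation: robots.txt has no // comment syntax (comments start with #), and Disallow values conventionally begin with /, so a strict parser could misread several of those lines. A cleaned-up sketch of the same rules:

User-agent: Rogerbot
# Keep from crawling the CMS URLs (default.aspx?Tabid=234); the real home page is home.aspx
Disallow: /default.aspx?*
# Keep from crawling the admin controls
Disallow: /ctl/
# Keep from crawling the article admin pages (both cases)
Disallow: /ArticleAdmin
Disallow: /articleadmin
# Keep from crawling CMS images
Disallow: /images/
# Keep from crawling the captcha handler
Disallow: /captcha

User-agent: *
Disallow: /default.aspx
Disallow: /images/
Disallow: /DesktopModules/DnnForge - NewsArticles/Controls/ImageChallenge.captcha.aspx
-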
How to get rid of the message "Search Engine blocked by robots.txt"
During the Crawl Diagnostics of my website, I got the message "Search Engine blocked by robots.txt" under Most Common Errors & Warnings. Please let me know how I can get the SEOmoz PRO crawler to completely crawl my website. Awaiting your reply at the earliest. Regards, Prashakth Kamath
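For reference, a minimal robots.txt stanza that explicitly allows Moz's crawler (its user-agent is rogerbot) to fetch everything looks like this; an empty Disallow value permits all paths:

User-agent: rogerbot
Disallow: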
-
Link Blocks
Sorry, perhaps a noob question. In relation to Site Explorer (I've also searched and been unable to find any information): could anyone advise what "Linking C-Blocks" are? They're found under the "Compare Link Metrics" tab. Thanks in advance. Lee