To block with robots.txt or canonicalize?
-
I'm working with an apartment community company that has a large number of communities across the US. I'm running into duplicate content issues: each community has pages such as "amenities" or "community-programs" that are nearly identical (if not exactly identical) across all communities.
I'm wondering if anyone has thoughts on the best way to tackle this. The two scenarios I've come up with so far are:
Is it better for me to select the community page with the most authority and put a canonical tag on all other community pages pointing to that authoritative page?
or
Should I just remove the directory altogether via robots.txt, to keep the site lean and keep low-quality content from impacting the site from a Panda perspective?
Is there an alternative I'm missing?
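The two options above can be sketched as follows (the domain and directory paths here are hypothetical placeholders, not the asker's actual site structure):

```text
# Option 1: rel=canonical in each duplicate page's <head>,
# pointing at the chosen authoritative community page
<link rel="canonical" href="https://example.com/communities/flagship/amenities" />

# Option 2: robots.txt, blocking the duplicated directories outright
# (the * wildcard is Google's extended syntax, not in the original spec)
User-agent: *
Disallow: /communities/*/amenities
Disallow: /communities/*/community-programs
```

Note that robots.txt only blocks crawling; already-indexed URLs can linger in the index, whereas rel=canonical consolidates signals onto the target page.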
-
I think the canonical idea is better than blocking the pages altogether. Depending on how the site is laid out, you could also try making the pages more specific to the location being discussed: for example, adding header tags with the location information, and adding that info to the page title and meta description. If it is not too time-consuming, I'd try to make those pages more unique, especially since you might be getting searches based on a location. Location-specific pages may help in that regard.
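As a sketch of the answer's suggestion, a community page's head might carry the location in the title and meta description (the community name, city, and URL below are hypothetical examples, not from the asker's site):

```html
<head>
  <!-- Location in the title and description makes each page unique
       and targets location-based searches -->
  <title>Amenities at Maple Court Apartments | Austin, TX</title>
  <meta name="description"
        content="Pool, fitness center, and dog park at Maple Court Apartments in Austin, TX." />
  <link rel="canonical" href="https://example.com/communities/maple-court/amenities" />
</head>
```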
Related Questions
-
Robots.txt file issues on Shopify server
We have repeated issues with one of our ecommerce sites not being crawled. We receive the following message: Our crawler was not able to access the robots.txt file on your site. This often occurs because of a server error from the robots.txt. Although this may have been caused by a temporary outage, we recommend making sure your robots.txt file is accessible and that your network and server are working correctly. Typically errors like this should be investigated and fixed by the site webmaster. Read our troubleshooting guide. Are you aware of an issue with robots.txt on the Shopify servers? It is happening at least twice a month so it is quite an issue.
Moz Pro | A_Q
Meta Robots query
Hi guys, I was ranking really well on my home page for certain keywords, but rankings have all dropped pretty dramatically over the last 3-4 weeks. I think the issue is the configuration of the Yoast SEO WordPress plugin. In March (when my rankings were strong) my crawl test showed the top data in the attached image, and in May (now that the rankings have dropped severely) it shows the bottom data. I don't fully understand canonical and meta robots tags, so I am hoping someone can shed some light on the following points: 1. Will the change result in my loss of rankings? 2. How can I put it back to how it was in March? PS. I haven't had any Google penalties. Thanks, Joshua
Moz Pro | RocketStats
What is Linking C-Blocks
Currently I am using the Moz Pro tool; under Moz Analytics >> Moz Competitive Link Metrics >> History there is a graph called "Linking C-Blocks". Please help me understand Linking C-Blocks: what they are, how to build them, and how to define them.
Moz Pro | shankar333
Should I block .ashx files from being indexed ?
I got a crawl issue saying that 82% of the site's pages have missing title tags. All of these pages are .ashx files (4,400 pages). Would it be better to remove all of these files from Google?
Moz Pro | thlonius
Moz campaign works around my robots.txt settings
My robots.txt file looks like this: User-agent: * Disallow: /*? Disallow: /search So it should block all dynamic URLs from being crawled. If I check this URL in Google: site:http://www.webdesign.org/search/page-1.html?author=47 Google tells me: "A description for this result is not available because of this site's robots.txt - learn more." So far so good. Now, I ran a Moz SEO campaign and got a bunch of duplicate page content errors. One of the links is this one: http://www.webdesign.org/search/page-1.html?author=47 (the same one I tested in Google, which told me the page is blocked by robots.txt, which is what I want). So it makes me think that Moz campaigns check files regardless of what robots.txt says? It's my understanding that User-agent: * should forbid Rogerbot from crawling as well. Am I missing something?
Moz Pro | VinceWicks
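One way to sanity-check how a spec-following parser treats these rules is Python's standard-library `urllib.robotparser`. Note it implements the original robots.txt convention, so it honors the plain `Disallow: /search` prefix rule but does not understand Google-style `/*?` wildcards; only the prefix rule is exercised here:

```python
import urllib.robotparser

# Build a parser from the prefix rule in the question's robots.txt.
# (urllib.robotparser treats "Disallow: /*?" literally rather than as
# Google's extended wildcard syntax, so it is left out of this sketch.)
rp = urllib.robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /search",
])

# Prefix matching blocks everything under /search, including dynamic URLs
print(rp.can_fetch("*", "http://www.webdesign.org/search/page-1.html?author=47"))  # False
print(rp.can_fetch("*", "http://www.webdesign.org/index.html"))  # True
```

Whether a given crawler respects those rules is up to the crawler; the parser only tells you what a compliant one should do.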
I want to create a report of only the duplicate content pages as a CSV file so I can create a script to canonicalize them.
I want to create a report of only the duplicate content pages as a CSV file so I can create a script to canonicalize them. That way I'd get something like: http://example.com/page1, http://example.com/page2, http://example.com/page3, http://example.com/page4. Right now I have to open each one under "Issue: Duplicate Page Content", and this takes a lot of time. The same goes for duplicate page titles.
Moz Pro | nvs.nim
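A minimal sketch of the kind of script the asker describes, assuming a CSV export with one row per duplicate URL plus a group identifier. The column names, the file name, and the first-URL-wins choice of canonical target are all assumptions for illustration, not Moz's actual export format:

```python
import csv
from collections import defaultdict

def canonical_tags(rows):
    """Given (url, duplicate_group) pairs, pick one URL per group as the
    canonical target and emit a rel=canonical tag for every member."""
    groups = defaultdict(list)
    for url, group in rows:
        groups[group].append(url)
    tags = {}
    for urls in groups.values():
        target = urls[0]  # naive choice; in practice, pick the highest-authority URL
        for url in urls:
            tags[url] = f'<link rel="canonical" href="{target}" />'
    return tags

# Reading from a hypothetical export, e.g. duplicates.csv with columns url,group:
# with open("duplicates.csv", newline="") as f:
#     rows = [(r["url"], r["group"]) for r in csv.DictReader(f)]
#     print(canonical_tags(rows))
```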
Sites Blocking Open Site Explorer? Penguin related.
Last week I was looking at a competitor's site that has a link scheme going on, and I could actually check the links for each anchor text. This week they don't work at all. Do you think they're blocking Rogerbot on their domains? Or is there a problem with Open Site Explorer? http://www.opensiteexplorer.org/anchors?site=www.decks.ca If you're interested in the background: all the links point to instant-home-biz . com, which then redirects to decks . ca - it's a tricky technique. Pretty much all of the links are from sketchy sites like: airpr23.xelr8it.biz/ airpr23.anzaland.net/ airpr23.vacation-4-free.com/ airpr23.blogfreeradio.net/ airpr23.blogomatik.com/ http://www.morcandirect.com/mortgages/resources2.php which I thought Penguin was supposed to catch…
Moz Pro | BeTheBoss
Linking C Blocks - SEOMoz says it's a good thing?
In the competitive analysis, one competitor has more Linking C-Blocks, and SEOmoz puts a tick by it, almost as if that's a better thing. Surely a site with the same administrative relationship is not going to help you as much from a linking point of view.
Moz Pro | sanchez1960