Should I remove all meta descriptions to avoid duplicates as a short-term fix?
-
I'm currently trying to implement Matt Cutts's advice from a recent YouTube video, in which he said that it is better to have no meta descriptions at all than to have duplicates.
I know that there are better alternatives but, if forced to make a choice, would it be better to remove all of the duplicate meta descriptions from the site rather than leave the duplicates in place (perhaps keeping a lone meta description on the home page)? This would be a short-term fix prior to making changes to our CMS so that we can add unique meta descriptions to the most important pages.
I’ve seen various blogs across the internet which recommend removing all the tags in these circumstances, but I’m interested in what people on Moz think of this.
The site currently has a meta description which is duplicated across every page on the site.
-
Yes. If you can write unique meta descriptions quickly, start doing that immediately; if the site is not big and you can finish in 1-2 weeks, there is no need to delete the existing meta descriptions. If you see that it will take more than two weeks, it is better to delete the duplicated meta descriptions. But the best solution is to write unique meta descriptions right away for the home page and the other important pages that rank right now.
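If your CMS templates can be edited at all, the conditional output itself is only a few lines. A minimal PHP sketch, assuming a hypothetical helper and description field (not anyone's actual CMS code), of the "only print the tag when a unique description exists" idea:

```php
<?php
// Minimal sketch: print a meta description only when a page-specific one
// has been written, instead of falling back to the site-wide default that
// creates the duplicates. Helper name and field are assumptions.
function printMetaDescription(?string $pageDescription): void
{
    $description = trim($pageDescription ?? '');
    if ($description === '') {
        return; // no tag at all; Google will generate its own snippet
    }
    printf(
        "<meta name=\"description\" content=\"%s\">\n",
        htmlspecialchars($description, ENT_QUOTES)
    );
}

// Usage in the <head> template, e.g.: printMetaDescription($page->meta_description);
```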
-
Thanks, Marc, for answering what is in many ways an unfair question.
I definitely agree that the long-term objective should be different and relevant meta descriptions, as you say. It's also good to know that each of the approaches I suggested is ultimately bad practice, even if one of them is less bad than the other.
-
This is a tough choice between two bad "mistakes". If you leave it blank or empty, you don't use the potential you have; if you have duplicates, it's much the same, plus the fear of what could happen because Matt Cutts mentioned this topic.
You can't expect a really serious recommendation, because both options are a "no-go" to me at the moment. If you leave it blank, Google will decide what it displays in its SERP snippets. If your pages have good, relevant text this won't end in disaster, but what if there is nothing Google can pick from? You know what I mean? (Many shops have this problem.) On the other hand, this announcement might be a hint that the same thing is happening that already happened with meta keywords: they have no relevance anymore. The only recommendation I would give you right now: try it out!
Take one page, or a few more, and delete the meta description. Leave it blank and wait a little; sooner or later you will be able to see the effect in the SERPs. Based on that result you can make your decision. Don't do this with your start/home/entry page; use URLs that are linked deeper in the site.
BUT your long-term goal should be: create different and relevant meta descriptions.
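If you want to run that kind of test without touching every template, here is a rough PHP sketch of the idea; the test URLs and the placeholder description are made up for illustration:

```php
<?php
// Rough sketch of the "try it out" experiment: suppress the duplicated
// description on a few deeper test URLs only, leave the rest untouched,
// and compare the snippets Google generates. Paths and the default
// description below are placeholders.
$testUrls = [
    '/category/widgets/',
    '/category/widgets/blue-widget/',
];

$currentPath = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);

if (!in_array($currentPath, $testUrls, true)) {
    // Non-test pages keep the existing (duplicated) description for now.
    echo '<meta name="description" content="Placeholder site-wide description">' . "\n";
}
// Test pages get no description tag at all, so Google picks its own snippet.
```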
-
Related Questions
-
Penalty for duplicate content on the same website?
Is it possible to get a penalty for duplicate content on the same website? I have an old custom-built site with a large number of filters that are pre-generated for speed. Basically the only difference between the filtered pages is the meta title and H1 tag, with a few text differences here and there. Obviously I could nofollow all the filter links, but it would take an enormous amount of work. The site is performing well in search. I'm trying to decide whether there is a risk of a penalty; if not, I'm loath to do anything in case it causes other issues.
Intermediate & Advanced SEO | seoman100
-
Internal Duplicate Content Question...
We are looking for an internal duplicate content checker that is capable of crawling a site that has over 300,000 pages. We have looked over Moz's duplicate content tool and it seems like it is somewhat limited in how deep it crawls. Are there any suggestions on the best "internal" duplicate content checker that crawls deep in a site?
Intermediate & Advanced SEO | tdawson091
-
Duplicate page WordPress title
Hi Moz fans, I have a few questions that are confusing me right now. My website uses WordPress for its blog, but now I have duplicate page titles coming from the WordPress blog, and it is very weird, because the duplicated title looks like "Lamborghini ASK Hanuman Club" and it is shared by pages like articles/13-วิธีง่ายๆ-เป็น-คนรวย-ได้/lamborghini-2/ and /articles/13-วิธีง่ายๆ-เป็น-คนรวย-ได้/lamborghini/. I don't know why WordPress creates a new page that only has one picture, so I think it might be the effect of the WordPress PHP code or a wrong setting. Can anyone suggest what we should do about these issues? I have many duplicate page titles from this case.
Intermediate & Advanced SEO | ASKHANUMANTHAILAND
-
Avoiding Duplicate Content with Used Car Listings Database: Robots.txt vs Noindex vs Hash URLs (Help!)
Hi guys, we have developed a plugin that allows us to display used vehicle listings from a centralized, third-party database. The functionality works similarly to autotrader.com or cargurus.com, and there are two primary components:
1. Vehicle Listings Pages: the page where the user can use various filters to narrow the vehicle listings and find the vehicle they want.
2. Vehicle Details Pages: the page where the user actually views the details about said vehicle. It is served up via Ajax, in a dialog box on the Vehicle Listings Pages. Example functionality: http://screencast.com/t/kArKm4tBo
The Vehicle Listings pages (#1) we do want indexed and to rank. These pages have additional content besides the vehicle listings themselves, and those results are randomized or sliced/diced in different and unique ways. They're also updated twice per day.
We do not want to index #2, the Vehicle Details pages, as these pages appear and disappear all of the time based on dealer inventory, and don't have much value in the SERPs. Additionally, other sites such as autotrader.com, Yahoo Autos, and others draw from this same database, so we're worried about duplicate content. For instance, entering a snippet of dealer-provided content for one specific listing that Google indexed yielded 8,200+ results: Example Google query.
We did not originally think that Google would even be able to index these pages, as they are served up via Ajax. However, it seems we were wrong, as Google has already begun indexing them. Not only is duplicate content an issue, but these pages are not meant for visitors to navigate to directly! If a user were to navigate to the URL directly from the SERPs, they would see a page that isn't styled right. Now we have to determine the right solution to keep these pages out of the index: robots.txt, noindex meta tags, or hash (#) internal links.
Robots.txt Advantages:
- Super easy to implement.
- Conserves crawl budget for large sites.
- Ensures the crawler doesn't get stuck. After all, if our website only has 500 pages that we really want indexed and ranked, and vehicle details pages constitute another 1,000,000,000 pages, it doesn't seem to make sense to make Googlebot crawl all of those pages.
Robots.txt Disadvantages:
- Doesn't prevent pages from being indexed, as we've seen, probably because there are internal links to these pages. We could nofollow these internal links, thereby minimizing indexation, but this would leave 10-25 nofollowed internal links on each Vehicle Listings page (will Google think we're PageRank sculpting?).
Noindex Advantages:
- Does prevent vehicle details pages from being indexed.
- Allows ALL pages to be crawled (advantage?).
Noindex Disadvantages:
- Difficult to implement (vehicle details pages are served using Ajax, so they have no tag). The solution would have to involve the X-Robots-Tag HTTP header and Apache, sending a noindex directive based on querystring variables, similar to this stackoverflow solution. This means the plugin functionality is no longer self-contained, and some hosts may not allow these types of Apache rewrites (as I understand it).
- Forces (or rather allows) Googlebot to crawl hundreds of thousands of noindex pages. I say "force" because of the crawl budget required. The crawler could get stuck/lost in so many pages, and may not like crawling a site with 1,000,000,000 pages, 99.9% of which are noindexed.
- Cannot be used in conjunction with robots.txt. After all, the crawler never reads the noindex meta tag if it is blocked by robots.txt.
Hash (#) URL Advantages:
- By using hash (#) URLs for the links from Vehicle Listings pages to Vehicle Details pages (such as "Contact Seller" buttons), coupled with JavaScript, the crawler won't be able to follow/crawl these links.
- Best of both worlds: crawl budget isn't overtaxed by thousands of noindex pages, and the internal links used to index robots.txt-disallowed pages are gone.
- Accomplishes the same thing as "nofollowing" these links, but without looking like PageRank sculpting (?).
- Does not require complex Apache stuff.
Hash (#) URL Disadvantages:
- Is Google suspicious of sites with (some) internal links structured like this, since it can't crawl/follow them?
Initially, we implemented robots.txt, the "sledgehammer solution." We figured that we'd have a happier crawler this way, as it wouldn't have to crawl zillions of partially duplicate vehicle details pages, and we wanted it to be like these pages didn't even exist. However, Google seems to be indexing many of these pages anyway, probably based on internal links pointing to them. We could nofollow the links pointing to these pages, but we don't want it to look like we're PageRank sculpting or something like that.
If we implement noindex on these pages (and doing so is a difficult task in itself), then we will be certain these pages aren't indexed. However, to do so we will have to remove the robots.txt disallow in order to let the crawler read the noindex tag on these pages. Intuitively, it doesn't make sense to me to make Googlebot crawl zillions of vehicle details pages, all of which are noindexed, and it could easily get stuck/lost/etc. It seems like a waste of resources, and in some shadowy way bad for SEO.
My developers are pushing for the third solution: using the hash URLs. This works on all hosts and keeps all functionality in the plugin self-contained (unlike noindex), and it conserves crawl budget while keeping the vehicle details pages out of the index (unlike robots.txt). But I don't want Google to slap us 6-12 months from now because it doesn't like links like these (). Any thoughts or advice you guys have would be hugely appreciated, as I've been going in circles, circles, circles on this for a couple of days now. Also, I can provide a test site URL if you'd like to see the functionality in action.
Intermediate & Advanced SEO | browndoginteractive
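For what it's worth, the querystring-based noindex the question mentions can also be sent from PHP rather than through Apache rewrites, which would keep the plugin self-contained. A hedged sketch, assuming a hypothetical "vehicle_id" parameter identifies the details view:

```php
<?php
// Send the header before any output. If the request is for a Vehicle
// Details view (identified here by an assumed "vehicle_id" parameter),
// keep it out of the index but let its links be followed.
if (isset($_GET['vehicle_id'])) {
    header('X-Robots-Tag: noindex, follow');
}
// Vehicle Listings requests (no vehicle_id) stay indexable as normal.
```
-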
Duplicate on-page content - Product descriptions - Should I meta noindex?
Hi, our e-commerce store has a lot of duplicated product descriptions. Some of them are default manufacturer descriptions, and some are duplicated because only the colour of the product varies, so it is essentially the same product in a different colour. It is going to take a lot of man-hours to get unique content in place. Would a meta noindex on the dupe pages be OK for the moment, which I can then lift once we have unique content in place? I can't 301 or canonicalize these pages, as they are actually individual products in their own right, just with dupe descriptions. Thanks, Ben
Intermediate & Advanced SEO | bjs20101
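For illustration, a minimal PHP sketch of the interim noindex idea described in the question; the "has unique copy" flag is an assumption, not part of the asker's store:

```php
<?php
// Minimal sketch: emit a robots noindex tag only for products flagged as
// still using a duplicated manufacturer description. The flag is assumed.
function printRobotsMeta(bool $hasUniqueDescription): void
{
    if (!$hasUniqueDescription) {
        // Keep the page crawlable but out of the index for now.
        echo "<meta name=\"robots\" content=\"noindex, follow\">\n";
    }
}

// Usage in the product template, e.g.: printRobotsMeta($product->hasUniqueCopy);
```
-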
REL canonicals not fixing duplicate issue
I have a ton of querystrings in one of the apps on my site, as well as pagination, both of which caused a lot of duplicate errors on my site. I added rel canonicals via a PHP condition, so the canonical tag is output every time a specific string (which only exists on these pages) occurs. The rel canonical notification shows up in my campaign now, but all of the duplicate errors are still there. Did I do it right and just need to ignore the duplicate errors, or is there further action to be taken? Thanks!
Intermediate & Advanced SEO | Ocularis0
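For reference, a minimal sketch of the kind of PHP condition the question describes; the host name is a placeholder:

```php
<?php
// Minimal sketch: when a request carries a querystring (filters,
// pagination, etc.), point rel="canonical" at the bare path so the
// variants consolidate. example.com stands in for the real host.
if (!empty($_SERVER['QUERY_STRING'])) {
    $path = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);
    printf(
        "<link rel=\"canonical\" href=\"%s\">\n",
        htmlspecialchars('https://www.example.com' . $path, ENT_QUOTES)
    );
}
// Note that duplicate warnings in crawl reports typically clear only after
// the pages are recrawled, so existing errors can lag behind the fix.
```
-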
Duplicate content clarity required
Hi, I have access to a massive resource of journals, and we have been given the all-clear to use the abstracts on our site and link back to the journals. These will be really useful links for our visitors, e.g. http://www.springerlink.com/content/59210832213382K2. Simply put, if we copy the abstract and then link back to the journal source, will this be treated as duplicate content and damage the site, or is the link to the source enough for search engines to realise that we aren't trying anything untoward? Would it help if we added an introduction, so in effect we are sort of following the content curation model? We are thinking of linking back internally to a relevant page using a keyword too. Will this approach give any benefit to our site at all, or will the content be ignored because it is duplicate, rendering the internal links useless? Thanks, Jason
Intermediate & Advanced SEO | jayderby0
-
Any experience regarding what % is considered duplicate?
Some sites (including one or two I work with) have a legitimate reason to have duplicate content, such as product descriptions. One way to deal with duplicate content is to add other unique content to the page. It would be helpful to have guidelines regarding what percentage of the content on a page should be unique. For example, if you have a page with 1,000 words of duplicate content, how many words of unique content should you add for the page to be considered OK? I realize that (a) Google will never reveal this and (b) it probably varies a fair bit based on the particular website. However... does anyone have any experience in this area? (Example: you added 300 words of unique content to all 250 pages on your site, each of which previously had 100 words of duplicate content, and that worked to improve your rankings.) Any input would be appreciated! Note: just to be clear, I am NOT talking about "spinning" duplicate content to make it "unique". I am talking about adding unique content to a page that has legitimate duplicate content.
Intermediate & Advanced SEO | AdamThompson0