Moz Q&A is closed.
After more than 13 years, and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content - many posts will still be viewable - we have locked both new posts and new replies. More details here.
Is it okay to copy and paste on-page content into the meta description tag?
-
I have heard conflicting answers to this. I always figured that it was okay to selectively copy and paste on-page content into the meta description tag... especially if the on-page content is well written. How can it be duplicate content if it's pulling from the exact same page?
Does anybody have any feedback from a credible source about this?
Thanks.
-
If you feel that you are explaining the page as well as you can in the meta description, then go with it. I think that this is one of the most vital tags on the website. It brings people into your website.
-
Hey Vanguard Communications!
I don't see why doing this would hurt your site's rankings or be deemed duplicate content in any way.
Since SEO is an "experimental" process (which is why you've heard such conflicting answers on this), my best advice would be to give it a try and see how it plays out. Or, as EGOL stated, add a few extra words if the page content is too short, or tweak the wording slightly so that it differs from the on-page copy. Best of luck to you!
-
Think about it like this:
Your meta description is a condensed version of what your page is about, including keywords.
Your opening statement usually also describes what your page is about, including keywords. Sometimes you can modify them a bit to add in additional keywords, or to make them more focused on a given topic. To answer your original question: yes, it is fine. It is NOT considered duplicate content.
When Matt Cutts talks about duplication here, he means don't have multiple pages with the same meta description - not that you should avoid having on-page content that matches the meta description.
""The way I think of it is you can either have a unique meta tag description, or you can choose to have no meta tag description, but I wouldn't have duplicate meta tag descriptions," Cutts said."
-
I also do this. The meta description is supposed to have a nice sentence or so that is relevant to the page and makes people click. If the content on your page can't do that, you have a bigger problem than meta descriptions.
-
I do this on lots of pages. LOTS.
Many of my pages have an <h1> title at the top of the page and a short description beneath it. That short description is also used as my meta description. Sometimes I add a few extra words if it is short. I don't think that this hurts me a bit in the search engines.
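In template terms, the pattern is roughly this - a sketch only, with made-up variable names and sample text rather than anything from a real CMS:
<?php
// One short description feeds both the visible intro and the meta description;
// a few extra words are appended only to the meta version when it runs short.
$pageTitle        = 'Handmade Soy Candles';
$shortDescription = 'Hand-poured soy candles made in small batches from natural wax and cotton wicks.';
$metaDescription  = $shortDescription . ' Free shipping on orders over $50.'; // optional extra words
?>
<title><?= htmlspecialchars($pageTitle) ?></title>
<meta name="description" content="<?= htmlspecialchars($metaDescription) ?>">
<!-- later, in the body -->
<h1><?= htmlspecialchars($pageTitle) ?></h1>
<p><?= htmlspecialchars($shortDescription) ?></p>
Because both strings come from the same field, the snippet searchers see always matches what is on the page.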
Related Questions
-
Alternate page with proper canonical tag - Status: Excluded in Google Webmaster Tools
In Google Webmaster Tools, I have a coverage issue. I am getting this error message: "Alternate page with proper canonical tag - Status: Excluded". It gives the below blog post page as an example. Any idea how to resolve? At one time, I was using HandL UTM Grabber, but the plugin is deactivated on my website. https://www.savacations.com/turrialba-costa-ricas-garden-city/?utm_source=deleted&utm_medium=deleted&utm_term=deleted&utm_content=deleted&utm_campaign=deleted&gclid=deleted5.
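For reference, the mechanism behind that status is usually just a canonical on the post pointing at its clean permalink. A minimal sketch, assuming a WordPress template (the HandL UTM Grabber mention suggests WordPress; get_permalink() and esc_url() are standard WordPress functions):
<?php /* In the theme's <head>: every parameterized variant of the post
         (utm_*, gclid, etc.) declares the clean permalink as its canonical. */ ?>
<link rel="canonical" href="<?php echo esc_url( get_permalink() ); ?>" />
If that tag is in place, the utm_ variants being reported as "Excluded: Alternate page with proper canonical tag" is generally the intended outcome - Google indexes the clean URL instead - rather than an error that needs fixing.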
Intermediate & Advanced SEO | | Alancito0 -
Noindex detected in robots meta tag - GSC issue, help please
Hi Everyone, We just did a site migration (URL structure change, site redesign, CMS change). During the migration, the dev team messed up badly on a few things, including SEO. The old site had pages canonicalized and self-canonicalized, while the new site doesn't have anything (CMS dev error), so we are working retroactively to add a canonicalization mechanism. The legacy site had URLs ending with a trailing slash "/", and these got redirected to a set of URLs without "/". New site actions: all robots are allowed, and a new sitemap has been submitted to Google Search Console. So here is my problem (it's been a long 24hr night for me 🙂):
1. When I look at the GSC homepage URL, it says that the old page is self-canonicalized and currently in the index (the old page with a trailing slash at the end of the URL).
2. When I try to perform a live URL test, I get the message "No: 'noindex' detected in 'robots' meta tag", so indexation can't be done. I have no idea where the noindex is coming from.
3. Robots.txt in Search Console is still showing the old file (no noindex there). I tried to submit the new file but the old one still comes up. When I click on "See live robots.txt" I get the current robots.
4. I see that the old page is still canonicalized, and attempting to index the redirected old page might be confusing Google.
Hope someone can help to get the new page indexed! I really need it 🙂 Please ping me if you need more clarification. Thank you!
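For what it's worth, a minimal sketch of the retroactive canonicalization piece, assuming plain PHP templates and example.com as a placeholder domain (your CMS may prefer to do this through its own SEO settings):
<?php
// Sketch only: self-referencing canonical for the new no-trailing-slash URL scheme,
// plus an explicit robots meta so a stray noindex is easy to spot in the source.
$path      = strtok($_SERVER['REQUEST_URI'], '?');       // drop any query string
$path      = ($path === '/') ? '/' : rtrim($path, '/');  // new URLs have no trailing slash
$canonical = 'https://www.example.com' . $path;          // placeholder domain
?>
<link rel="canonical" href="<?= htmlspecialchars($canonical) ?>" />
<meta name="robots" content="index, follow">
As for point 2, the live-test noindex often turns out to be a leftover meta robots tag or an X-Robots-Tag HTTP response header from a staging configuration, so checking the raw HTML and the response headers of the new URL is a good first step.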
Intermediate & Advanced SEO | | bgvsiteadmin1 -
Are ALL CAPS construed as spamming if they are used in a meta description tag call to action?
I know this seems like an old school question. As a long-time SEO I would never use ALL CAPS in a title tag (unless a brand name is capitalized). However, I recently came across a Moz video about creating better calls to action in the meta description tags. Some of the examples had CTAs that were using all caps (e.g. CALL NOW! or LOWEST QUOTES!). I realize there is a debate about the user experience implications. However, I'm more concerned about search engines penalizing websites that are using ALL CAPS CTAs in their meta description tags. Any feedback/advice would be appreciated. Thanks
Intermediate & Advanced SEO | | RosemaryB0 -
Meta NoIndex tag and Robots Disallow
Hi all, I hope you can spend some time to answer my first of a few questions 🙂 We are running a Magento site - the layered/faceted navigation nightmare has created thousands of duplicate URLs! Anyway, during my process to tackle the issue, I disallowed in robots.txt anything in the query string that was not a p (allowed this for pagination). After checking some pages in Google, I did a site:www.mydomain.com/specificpage.html and a few duplicates came up along with the original, with:
"There is no information about this page because it is blocked by robots.txt"
So I had added in Meta Noindex, Follow on all these duplicates as well, but I guess it wasn't being read because of robots.txt. So coming to my question: did robots.txt block access to these pages? If so, were these already in the index, and after disallowing them with robots.txt, Googlebot could not read the Meta Noindex? Does Meta Noindex, Follow on pages actually help Googlebot decide to remove these pages from the index? I thought robots.txt would stop and prevent indexation? But I've read this:
"Noindex is a funny thing, it actually doesn't mean 'You can't index this', it means 'You can't show this in search results'. Robots.txt disallow means 'You can't index this' but it doesn't mean 'You can't show it in the search results'."
I'm a bit confused about how to use these in both preventing duplicate content in the first place and then helping to address dupe content once it's already in the index. Thanks! B
Intermediate & Advanced SEO | bjs2010
-
Artist Bios on Multiple Pages: Duplicate Content or not?
I am currently working on an eComm site for a company that sells art prints. On each print's page, there is a bio about the artist followed by a couple of paragraphs about the print. My concern is that some artists have hundreds of prints on this site, and the bio is reprinted on every page, which makes sense from a usability standpoint, but I am concerned that it will trigger a duplicate content penalty from Google. Some people are trying to convince me that Google won't penalize for this content, since the intent is not to game the SERPs. However, I'm not confident that this isn't being penalized already, or that it won't be in the near future. Because it is just a section of text that is duplicated, but the rest of the text on each page is original, I can't use the rel=canonical tag. I've thought about putting each artist bio into a graphic, but that is a huge undertaking, and not the most elegant solution. Could I put the bio on a separate page with only the artist's info and then place that data on each print page using an <iframe>, and then put a noindex,nofollow in the robots.txt file? Is there a better solution? Is this effort even necessary? Thoughts?
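If it helps, a rough sketch of the iframe idea - the /artist-bio/ URL pattern and the $artistSlug variable are made up for illustration, and note that a noindex would normally live in a meta robots tag (or X-Robots-Tag header) on the bio page rather than in robots.txt:
<?php // On each print page: pull the shared bio in from its own URL. Sketch only. ?>
<iframe src="/artist-bio/<?= htmlspecialchars($artistSlug) ?>/"
        title="About the artist" width="100%" height="240" loading="lazy"></iframe>

<?php // In the <head> of the /artist-bio/<slug>/ page itself: keep the standalone
      // bio page out of the index while its links can still be followed. ?>
<meta name="robots" content="noindex, follow">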
Intermediate & Advanced SEO | | sbaylor0 -
Could you use a robots.txt file to disallow a duplicate content page from being crawled?
A website has duplicate content pages to make it easier for users to find the information from a couple of spots in the site navigation. The site owner would like to keep it this way without hurting SEO. I've thought of using the robots.txt file to disallow search engines from crawling one of the pages. Would you think this is a workable/acceptable solution?
Intermediate & Advanced SEO | | gregelwell0 -
Rel=canonical tag on original page?
Afternoon All,
Intermediate & Advanced SEO | | Jellyfish-Agency
We are using Concrete5 as our CMS system, we are due to change but for the moment we have to play with what we have got. Part of the C5 system allows us to attribute our main page into other categories, via a page alaiser add-on. But what it also does is create several url paths and duplicate pages depending on how many times we take the original page and reference it in other categories. We have tried C5 canonical/SEO add-on's but they all seem to fall short. We have tried to address this issue in the most efficient way possible by using the rel=canonical tag. The only issue is the limitations of our cms system. We add the canonical tag to the original page header and this will automatically place this tag on all the duplicate pages and in turn fix the problem of duplicate content. The only problem is the canonical tag is on the original page as well, but it is referencing itself, effectively creating a tagging circle. Does anyone foresee a problem with the canonical tag being on the original page but in turn referencing itself? What we have done is try to simplify our duplicate content issues. We have over 2500 duplicate page issues because of this aliasing add-on and want to automate the canonical tag addition, rather than go to each individual page and manually add this tag, so the original reference page can remain the original. We have implemented this tag on one page at the moment with 9 duplicate pages/url's and are monitoring, but was curious if people had experienced this before or had any thoughts?0 -
How to auto-generate a unique meta description?
The site I am working on is a code nightmare, for starters. I'm editing a file called layout that controls the head section of each page. The programmer from a while back got unique titles by putting this piece of code in: <title><?= $this->metaTag ?></title> In all the different controllers and stuff I can see where the title is the name of the product plus "review" or something to that effect. How do I do this for the meta description? Right now the meta description is static in the layout file, and so every page has an identical one. I was hoping there was a way to make the meta description automatically use the first 140 characters on the page or something. Something like this:
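Purely for illustration, a rough sketch of that idea - $pageContent is a hypothetical stand-in for however the layout can reach the page's main copy (your framework may expose it differently, or not at all), and the cutoff is set near the common ~155-character mark, though the 140 mentioned above works the same way:
<?php
// Sketch only: flatten the page copy to plain text and trim it at a word boundary.
function buildMetaDescription(string $html, int $limit = 155): string
{
    $text = trim(preg_replace('/\s+/', ' ', strip_tags($html)));
    if (mb_strlen($text) <= $limit) {
        return $text;
    }
    $cut  = mb_substr($text, 0, $limit);
    $last = mb_strrpos($cut, ' ');
    return ($last !== false ? mb_substr($cut, 0, $last) : $cut) . '...';
}
?>
<meta name="description" content="<?= htmlspecialchars(buildMetaDescription($pageContent)) ?>">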
Intermediate & Advanced SEO | | DanDeceuster0