Duplicate description problem in WordPress
-
Webmaster Tools is flagging duplicate descriptions for the page http://www.musicliveuk.com/live-acts. It is a single page in the WordPress page editor, and the web designer set it up so that I can add new live acts from a separate editor in the left menu, which feeds into the page 'live-acts' (under template it says 'live-acts-feed'). The problem is that as I add more acts it creates new URLs, e.g. http://www.musicliveuk.com/live-acts/page/2, http://www.musicliveuk.com/live-acts/page/3, etc.
I use the All in One SEO Pack, and Webmaster Tools tells me that pages 2, 3, 4, etc. all have the same description. How can I overcome this? I can't write new descriptions for each page, as the All in One SEO Pack only allows me to enter one for the page 'live-acts'.
-
Thanks!!
-
Hi Samuel, as you know, duplicate content in WordPress can easily be identified through various plugins. Here is a guide on how to solve and resolve duplicate content.
I hope you find the solution there.
-
I would highly recommend the Yoast SEO plugin, and would recommend the use of breadcrumbs within the template. In the header.php of your theme, I would suggest adding a robots meta tag with "noindex, follow" on category pages. I wrote an article detailing how to set this up, and so far it's working well for me: http://www.laceytechsolutions.co.uk/blog/wordpress-development/avoiding-duplicate-content-in-wordpress
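For illustration, a minimal sketch of that header.php addition, assuming a standard WordPress theme. `is_category()` and `is_paged()` are WordPress core conditional tags; adjust the conditions to whichever archive or paginated views you want excluded:

```php
<?php
// Inside the <head> of header.php: ask search engines not to index
// category archives or paginated pages, while still following links.
if ( is_category() || is_paged() ) : ?>
    <meta name="robots" content="noindex, follow" />
<?php endif; ?>
```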
On another note, I have used a plugin called GigPress for an event website I put together, and it works wonders; I would highly recommend it. I hope some of this information is useful to you.
-
Naghirniac is right. I would also consider switching to the SEO by Yoast plugin (I have even used it in conjunction with SEOPressor with great success) and writing all your titles and descriptions yourself.
The Yoast plugin has a great interface that lets you see the title, URL, and description exactly as they would look in Google search. Excellent WP plugin.
-
Hi Samuel,
In this case one good option is to use the rel canonical link element. Do you know it? You will need to add the following inside the <head> section of the page:
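(The tag itself appears to have been stripped when this post was rendered; it would look roughly like this, with example.com/index.html as a placeholder for your preferred URL:)

```html
<link rel="canonical" href="http://example.com/index.html" />
```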
where example.com/index.html is the page that you want Google to consider canonical. With this, Google will no longer treat your other pages as duplicate content.
For more detailed info, check out this post: http://www.seomoz.org/blog/robot-access-indexation-restriction-techniques-avoiding-conflicts
Related Questions
-
Duplicate Content
Let's say a blog is publishing original content. Now let's say a second blog steals that original content via bot and publishes it as its own. Now further assume the original blog doesn't notice this for several years. How much damage could this do to blog A (the original) in Google results? Any opinions?
Intermediate & Advanced SEO | CYNOT
Does Google ignore duplicate meta descriptions?
Hi there SEO mozzers, I am dealing with a website that has duplicate meta descriptions (we know this is bad). As a punishment, Google totally ignores the meta descriptions and picks content from the website to display in the SERP instead. I have already read https://moz.com/blog/why-wont-google-use-my-meta-description, but I was wondering if there is more information/knowledge out there. Any tips are appreciated!
Intermediate & Advanced SEO | Europarl_SEO_Team
Internal Duplicate Content Question...
We are looking for an internal duplicate content checker that is capable of crawling a site that has over 300,000 pages. We have looked over Moz's duplicate content tool and it seems like it is somewhat limited in how deep it crawls. Are there any suggestions on the best "internal" duplicate content checker that crawls deep in a site?
Intermediate & Advanced SEO | tdawson09
Duplicate Content with URL Parameters
Moz is picking up a large quantity of duplicate content, consisting mainly of URL parameters like ,pricehigh and ,pricelow etc. (for page sorting). Google has indexed a large number of the pages (not sure how many), and I'm not sure how many of them are ranking for search terms we need. I have added the parameters in Google Webmaster Tools and set them to 'let Google decide'; however, Google still sees them as duplicate content. Is it a problem that we need to address, or could trying to fix it do more harm than good? Has anyone had any experience with this? Thanks
Intermediate & Advanced SEO | seoman10
E-commerce duplicate URLS
Hi, I just realized that my e-commerce product pages do not differ in anything except the SKUs, the price, and the product name. Beyond that, each product page has the same sidebar and the same piece of content underneath, and this is why I am getting too many duplicate URL warnings through Moz Analytics. I do not have any other content to add for each product because of the nature of the products; only the price, product name, and SKUs differ, and the rest is the same for every product. How can I fix this? Thanks
Intermediate & Advanced SEO | MindlessWizard
Pagination causing duplicate content problems
Hi, the pagination on our website www.offonhols.com is causing duplicate content problems. Is the best solution to add rel="prev" / rel="next" to the hrefs? At the moment the pagination links at the bottom of the page are just:
http://offonhols.com/default.aspx?dp=1
http://offonhols.com/default.aspx?dp=2
http://offonhols.com/default.aspx?dp=3
etc.
Intermediate & Advanced SEO | offonhols
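For what it's worth, the rel="prev"/"next" annotations the question describes would sit in the <head> of each paginated page; a sketch for the middle page of the series above (URLs taken from the question, untested against this site):

```html
<!-- Hypothetical <head> of http://offonhols.com/default.aspx?dp=2 -->
<link rel="prev" href="http://offonhols.com/default.aspx?dp=1" />
<link rel="next" href="http://offonhols.com/default.aspx?dp=3" />
```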
About using robots.txt to resolve duplicate content
I have a problem with duplicate content and titles. I have tried many ways to resolve it, but because of the site's code I am still stuck, so I have decided to use robots.txt to block the duplicate content. The first question: how do I write a robots.txt rule to block all URLs like these:
http://vietnamfoodtour.com/foodcourses/Cooking-School/
http://vietnamfoodtour.com/foodcourses/Cooking-Class/
.......
User-agent: *
Disallow: /foodcourses
(Is that right?)
And the parameter URLs:
http://vietnamfoodtour.com/?mod=vietnamfood&page=2
http://vietnamfoodtour.com/?mod=vietnamfood&page=3
http://vietnamfoodtour.com/?mod=vietnamfood&page=4
User-agent: *
Disallow: /?mod=vietnamfood
(Is that right? I have a folder containing the module; could I use Disallow: /module/* ?)
The second question is: which takes priority, robots.txt or the robots meta tag? For example, if I use robots.txt to block a URL, but in that URL my meta robots is "index, follow"?
Intermediate & Advanced SEO | magician
Duplicate content clarity required
Hi, I have access to a massive resource of journals, and we have been given the all-clear to use the abstracts on our site and link back to the journals. These will be really useful links for our visitors, e.g. http://www.springerlink.com/content/59210832213382K2. Simply put: if we copy the abstract and then link back to the journal source, will this be treated as duplicate content and damage the site, or is the link to the source enough for search engines to realise that we aren't trying anything untoward? Would it help if we added an introduction, so in effect we are following the curated-content model? We are also thinking of linking back internally to a relevant page using a keyword. Will this approach give any benefit to our site at all, or will the content be ignored as duplicate, rendering the internal links useless? Thanks, Jason
Intermediate & Advanced SEO | jayderby