10,000 New Pages of New Content - Should I Block in Robots.txt?
-
I'm almost ready to launch a redesign of a client's website. The new site has over 10,000 new product pages, each with a unique product description, though many share similar text with other products across the site.
An example of the page similarities would be the following two products:
-
Brown leather 2 seat sofa
-
Brown leather 4 seat corner sofa
Obviously, the products are different, but the pages feature very similar terms and phrases.
I'm worried that the Panda update will mean these pages are sandboxed and/or penalised.
Would you block the new pages? Add them gradually? What would you recommend in this situation?
-
-
Consider reversing your thinking from "what will my losses to Panda be?" to "what can I do to make this site kick ass?"
Reach for opportunity, extend yourself.
If this were my site, I would get a writer on those product descriptions to make them unquestionably unique, beef them up, add salesmanship, and optimize them for search. This will give you substantive unique content that converts better, pulls more long-tail traffic, and moves you out of competition with other sites that do the minimum.
Sure, it will cost money but in the long run it could bring back a huge return.
My only caution is that if you make this investment in writing, you need to do it on a site that can pull reasonable traffic. If you do this on a site that has no links, it will not do you much good. It is part of a marketing plan, not a single item on a "to do" list.
Related Questions
-
Can anyone tell me why this page has content wider than screen?
I am getting that error on my product pages. This link is in the errors: http://www.wolfautomation.com/drive-accessory-safety-sto-module-i500 but when I look at it on mobile it is fine.
Intermediate & Advanced SEO | Tylerj0
Robots.txt - Do I block Bots from crawling the non-www version if I use www.site.com ?
My site is set up at http://www.site.com and I redirect from non-www to www in the .htaccess file. My question is: what should my robots.txt file look like for the non-www site? Do you block robots from crawling it like this, or do you leave it blank? User-agent: * Disallow: / Sitemap: http://www.morganlindsayphotography.com/sitemap.xml Sitemap: http://www.morganlindsayphotography.com/video-sitemap.xml
Intermediate & Advanced SEO | morg454540
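For reference, the usual setup here is a sketch like the one below: because the .htaccess rule 301-redirects every non-www request (including /robots.txt) to the www host, crawlers only ever see one robots.txt, so it should describe the www site and must not be a blanket `Disallow: /` (the sitemap URLs are taken from the question; everything else is an assumption):

```
# Served at http://www.site.com/robots.txt — the non-www host
# 301-redirects here, so this is the only file crawlers will read.
User-agent: *
Disallow:

Sitemap: http://www.morganlindsayphotography.com/sitemap.xml
Sitemap: http://www.morganlindsayphotography.com/video-sitemap.xml
```

Blocking the non-www host would be counterproductive anyway: bots need to be able to crawl the old URLs to discover the 301s.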
New URL structure caused virtually all rankings to drop 5 to 10 positions in the latest report. Is this normal?
Hi All, We changed the URL structure on our website, both to reduce the depth of our category URLs (fewer '/' layers) and to replace the underscores we originally had with hyphens. We did this during a new site design and relaunched a week ago. We did the 301 redirects from old to new, new sitemaps etc., and the latest Moz ranking report shows most rankings dropping 5 to 10 positions, i.e. from 3rd to 10th etc. Is this something to be expected that should then recover, or should it be ringing alarm bells? I would not have expected such a negative shift in all my rankings. Any thoughts would be greatly appreciated. Thanks, Pete.
Intermediate & Advanced SEO | PeteC120
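A redirect map for this kind of restructure can be generated rather than hand-written, which makes it easy to verify that every old URL has a 301 target. A minimal sketch, assuming the only change is underscores to hyphens in the path (the `old_to_new` helper and the example domain are hypothetical; real category-flattening rules would need their own mapping table):

```python
from urllib.parse import urlsplit, urlunsplit

def old_to_new(url: str) -> str:
    """Map an old underscore-style URL to the new hyphenated structure.

    Only the path is rewritten; the query string and fragment are
    passed through untouched.
    """
    parts = urlsplit(url)
    new_path = parts.path.replace("_", "-")
    return urlunsplit((parts.scheme, parts.netloc, new_path,
                       parts.query, parts.fragment))

# e.g. feed the output into a RewriteMap or a list of Redirect 301 lines
print(old_to_new("http://example.com/garden_furniture/brown_sofa"))
```

Some temporary ranking volatility after a relaunch with correct 301s is common; what matters is that the map covers every old URL so no equity is lost to 404s.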
New Web Page Not Indexed
Quick question with probably a straightforward answer... We created a new page on our site 4 days ago; it was in fact a mini-site page, though I don't think that makes a difference... To date, the page is not indexed, and when I use 'Fetch as Google' in WT I get a 'Not Found' fetch status... I have also used the 'Submit URL' in WT, which seemed to work ok... We have even resorted to 'pinging' using Pinglar and Ping-O-Matic, though we have done this cautiously! I know social media is probably the answer, but we have been trying to hold back on that tactic as the page relates to a product that hasn't quite launched yet and we do not want to cause any issues with the vendor! That said, I think we might have to look at sharing the page socially unless anyone has any other ideas? Many thanks Andy
Intermediate & Advanced SEO | TomKing0
Interlinking from unique content page to limited content page
I have a page (page 1) with a lot of unique content which may rank for "Example for sale". On this page I interlink to a page (page 2) with very limited unique content, but a page I believe is better for the user, with anchor "See all Example for sale". In other words, the 1st page is more like a guide with items for sale mixed in, whereas the 2nd page is purely a "for sale" page with almost no unique content, but very engaging for users. Questions: Is it risky that I interlink with "Example for sale" to a page with limited unique content, as I risk not being able to rank either of these 2 pages? Would it make sense to "noindex, follow" page 2, as it has limited unique content and is actually a page that exists across the web on other websites in different formats (it is real estate MLS listings), while still keeping the "Example for sale" link to page 2 without risking page 1's ranking for the "Example for sale" keyword phrase? I am basically trying to work out the best solution to rank for "Keyword for sale", and the dilemma is that page 2 is best for users but is not a very unique page, while page 1 is very unique and OK for users but mixes writing, pictures and properties for sale.
Intermediate & Advanced SEO | khi50
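For reference, the "noindex, follow" option described for page 2 is a one-line tag. Treat this as a sketch: Google has said a long-noindexed page may eventually be treated as "noindex, nofollow", so it is not a guaranteed way to keep link equity flowing.

```html
<!-- On page 2, the thin "for sale" listing page: kept out of the
     index, while links on it can still be crawled and followed -->
<meta name="robots" content="noindex, follow">
```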
301 redirect or Link back from old to new pages
Hi all, We run a ticket agent and have multiple events that occur year after year, for example a festival. The festival has a main page, with a different page for each year's event, like the below:
Main page:
http://www.gigantic.com/leefest-tickets
Event pages:
http://www.gigantic.com/leefest-2010-tickets/hawksbrook-lane-beckenham/2009-08-15-13-00-gce/11246a
http://www.gigantic.com/leefest-2010-tickets/highhams-hill-farm-warlingham/2010-08-14-13-00-gce/19044a
http://www.gigantic.com/leefest-2011-tickets/highhams-hill-farm-warlingham/2011-08-13-13-00-gce/26204a
http://www.gigantic.com/leefest-2012-tickets/highhams-hill-farm-warlingham/2012-06-29-12-00-gce/32168a
http://www.gigantic.com/leefest-2013/highhams-hill-farm/2013-07-12-12-00
My question is: is it better to leave the old event pages active and link them back to the main page, or 301 redirect these pages once they're out of date (i.e. leave each one there until there is a new event page to replace it for this year)? If the best answer is to leave the pages there, should I use a canonical tag back to the main page, and what would be the best way to link back? There is a breadcrumb there now, but it doesn't seem too obvious for users to click. The keyword we're aiming for in this example is 'Leefest Tickets', which ranks well now; the main page and the 2012 page are listed. Thanks in advance for your help.
Intermediate & Advanced SEO | gigantictickets
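If the old event pages stay live, the canonical option mentioned in the question would be a tag like the sketch below on each out-of-date page (say the 2012 one). Note the caveat: Google may ignore a cross-page canonical when the two pages are not near-duplicates, so a 301 is the stronger signal once a page is truly dead.

```html
<!-- On an expired event page, pointing search engines
     at the evergreen main page -->
<link rel="canonical" href="http://www.gigantic.com/leefest-tickets">
```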
What content should I block in WordPress with robots.txt?
I need to know if anyone has tips on creating a good robots.txt. I have read a lot of info, but I am just not clear on what I should allow and not allow on wordpress. For example there are pages and posts, then attachments, wp-admin, wp-content and so on. Does anyone have a good robots.txt guideline?
Intermediate & Advanced SEO | ENSO0
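As a starting point, a commonly used minimal WordPress robots.txt looks like the sketch below (the sitemap URL is a placeholder). Blocking /wp-content/ is generally a bad idea, since it hides theme CSS and JS that crawlers need to render pages; posts, pages and attachments normally should not be blocked at all.

```
User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php

Sitemap: http://www.example.com/sitemap.xml
```

Anything beyond this (search result pages, tag archives, etc.) is usually better handled with noindex tags than robots.txt, since a blocked URL can still be indexed from links.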
Duplicate page Content
There have been over 300 pages on our client's site with duplicate page content. Before we embark on a programming solution with canonical tags, our developers are requesting the list of originating sites/links/sources for these odd URLs. How can we find a list of the originating URLs? If you can provide a list of originating sources, that would be helpful. For example, the following pages are showing (as a sample) as duplicate content: www.crittenton.com/Video/View.aspx?id=87&VideoID=11 www.crittenton.com/Video/View.aspx?id=87&VideoID=12 www.crittenton.com/Video/View.aspx?id=87&VideoID=15 www.crittenton.com/Video/View.aspx?id=87&VideoID=2 "How did you get all those duplicate urls? I have tried to google the "contact us", "news", "video" pages. I didn't get all those duplicate pages. The page id=87 on most of the duplicate pages is not supposed to be there. I was wondering how the visitors got to all those duplicate pages. Please advise." Note, the CMS does not create this type of hybrid URL. We are as curious as you as to where/why/how these are being created. Thanks.
Intermediate & Advanced SEO | dlemieux0
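Whatever the source of the stray parameter turns out to be, the canonical URL for each variant can be computed programmatically, which is useful both for generating the canonical tags and for grouping a crawl export by true page. A rough sketch in Python; the `canonical` helper and the choice of `VideoID` as the only meaningful parameter are assumptions based on the URLs in the question:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def canonical(url: str, keep_params=("VideoID",)) -> str:
    """Rebuild a URL keeping only whitelisted query parameters.

    Assumes VideoID identifies distinct content and that stray
    parameters like id=87 can be dropped without changing the page.
    """
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k in keep_params]
    return urlunsplit((parts.scheme, parts.netloc, parts.path,
                       urlencode(kept), parts.fragment))

print(canonical("http://www.crittenton.com/Video/View.aspx?id=87&VideoID=11"))
```

Running all 300 crawled URLs through this and grouping by the result shows exactly which variants collapse to the same page, and each group's canonical form is what the rel=canonical tag should point at.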