Do I need to disallow the dynamic pages in robots.txt?
-
Do I need to disallow the dynamic pages that show when people use our site's search box? Some of these pages are ranking well in SERPs. Thanks!
-
The pages that produce soft 404 errors don't show any products at all, because visitors are searching for products that aren't available.
-
Yes, done that.
-
Just having a quick look at what Google says about them:
Here’s a list of steps to correct soft 404s to help both Google and your users:
- Check whether you have soft 404s listed in Webmaster Tools
- For the soft 404s, determine whether the URL:
  - Contains the correct content and properly returns a 200 response (not actually a soft 404)
  - Should 301 redirect to a more accurate URL
  - Doesn’t exist and should return a 404 or 410 response
- Confirm that you’ve configured the proper HTTP response by using Fetch as Googlebot in Webmaster Tools
- If you now return 404s, you may want to customize your 404 page to aid your users. Our custom 404 widget can help.
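The third point in that list is where soft 404s usually come from: the no-results page keeps returning a 200. As a minimal stdlib-only sketch (the catalogue and the endpoint are hypothetical, not your actual stack), the fix is to send a real 404 status whenever the search matches nothing:

```python
from urllib.parse import parse_qs

# Hypothetical in-memory catalogue; stands in for a real product lookup.
CATALOGUE = {"widget": "Widget, $10", "gadget": "Gadget, $15"}

def search_app(environ, start_response):
    """Minimal WSGI search endpoint that avoids soft 404s.

    If the query matches no products, send a real 404 status
    instead of a 200 with an empty results page.
    """
    query = parse_qs(environ.get("QUERY_STRING", "")).get("q", [""])[0].lower()
    hits = [desc for name, desc in CATALOGUE.items() if query and query in name]
    if hits:
        start_response("200 OK", [("Content-Type", "text/plain")])
        return ["\n".join(hits).encode()]
    # No matching products: return a genuine 404 with a helpful body.
    start_response("404 Not Found", [("Content-Type", "text/plain")])
    return [b"No matching products found. Try browsing our categories."]
```

The body of the 404 page can still be as helpful as you like (suggestions, category links); what matters to Google is the status line.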
Have you followed these steps?
Andy
-
These soft 404s return a 200 status code. We've already improved the pages shown when someone searches for a product that isn't on our list, but Google Webmaster Tools still flags these dynamic pages as soft 404s.
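A quick way to confirm the status code a page actually sends is to ask for just the status with curl (the URL below is a placeholder; substitute one of your own no-results search URLs):

```shell
# Print only the HTTP status code for a given URL (placeholder domain).
curl -s -o /dev/null -w '%{http_code}\n' 'https://www.example.com/search?q=discontinued-item'
```

If this prints 200 for a page with no results, that page is a soft 404 candidate.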
-
Well, I would try to fix why they are returning soft 404s, as it would be a shame to block all results. Is this something you can do? Or is there a reason why just blocking is preferred?
-
Yeah, some of them produce soft 404s since there's no content at all, but some dynamic pages that rank well do show content.
Thanks,
JC
-
OK, so when you search, you get back dynamic pages that are producing soft 404s, but you still see those pages in the SERPs?
Just want to make sure I have this right.
-
I agree with Andy. Many of our search result pages rank well (and actually convert quite well). I don't think you need to disallow them unless the content doesn't exist, and even then you may want to keep them up because you may offer complementary products, etc.
-
The reason we want to block those pages is that they produce soft 404 errors. What should we do? Thanks, Andy.
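If you do decide to block the internal search results, the rule itself is short. The path below is an assumption; substitute whatever your search URLs actually start with. Note that Disallow only stops crawling, so URLs that are already indexed won't necessarily drop out right away:

```
User-agent: *
Disallow: /search
```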
-
If they are ranking well, what is the reason for wanting to block them?
Andy