Is it bad practice to create pages that 404?
-
We have member pages on our site that start out empty until the member does some activity. Currently, since all of these pages are effectively soft 404s, we return a 404 for them, and all internal links to them are JS links (i.e., not links as far as bots are concerned). As soon as a page has content, we switch it to a 200 and turn the links into regular hrefs.
After doing some research, I started thinking that this is not the best way to handle this situation. A better idea would be to noindex/follow the pages (before they have content) and let the links to these pages be real links.
I'd love to hear input and feedback from fellow Mozzers. What are your thoughts?
-
Yair,
See the infographic on this page regarding rel nofollow tags in links, and when you may want to consider using them. Specifically, see the part about User Generated Content:
http://searchengineland.com/infographic-nofollow-tag-172157
However, Google can decide to crawl whatever they want to crawl, whether it is a nofollowed link, links on a page with a nofollow meta tag, or javascript links. If you really want to keep Google out of those portions of the site you should use the robots.txt disallow statement, as I mentioned in your other thread, or use the X-Robots-Tag as described here.
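As a sketch of those two blocking options (the /members/ path and profile.php filename are invented for illustration, not taken from the thread):

```text
# robots.txt — keep crawlers out of a hypothetical /members/ section entirely
User-agent: *
Disallow: /members/

# Apache .htaccess — X-Robots-Tag header, equivalent to a noindex,follow meta tag
<Files "profile.php">
  Header set X-Robots-Tag "noindex, follow"
</Files>
```

Note that robots.txt blocks crawling (the URL can still appear in the index if linked externally), while X-Robots-Tag allows crawling but blocks indexing.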
-
Thanks Everett,
As far as I know, nofollow doesn't conserve crawl budget. The bots will still crawl the link; they just won't pass any PageRank.
-
I'm sure Jane meant that it would block indexation of the page.
In my opinion you should probably noindex,follow (robots meta tag) the pages and make the internal links normal links, possibly with a rel=nofollow link attribute, until the user fills out the profile.
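A minimal sketch of that toggle (the function and parameter names are my own, not from any real codebase):

```python
def robots_meta_tag(profile_has_content):
    """Return the robots meta tag for a member profile page.

    Empty profiles are kept out of the index but their outgoing links
    are still followed; populated profiles are indexed normally.
    """
    directive = "index, follow" if profile_has_content else "noindex, follow"
    return '<meta name="robots" content="%s">' % directive
```

The page always returns a 200; only the meta tag changes when the member adds content.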
I will go look into your other question as well.
PS: The drawback of this solution is that bots will still be spending crawl budget crawling those URLs if you are linking to them internally. [Edited]
-
Thanks for the clear and concise answer, Jane. You hit the nail right on the head! I appreciate your input.
One question, though. You say that noindex will block bot access to these pages. I'm pretty sure the bots will still crawl the pages (if they find them), just they won't be indexed and presumably they won't be "counted against us" like 404 pages. Is that what you meant?
If you have a minute, maybe you can help me out with this question next: http://moz.com/community/q/internal-nofollows
(Side note: Er_Maqul was referring to the original version of the question (before I edited it) where I had mistakenly written that we nofollow the links.)
-
Hi there,
A 404 certainly isn't the best way to handle a new URL / page before it is populated with content. It is good that Google isn't finding these pages yet (as you state in a later comment), but keep in mind that it could - the pages aren't linked to, but there is never any particular guarantee about what Google will and won't find. It's highly unlikely if you don't link to them, but still - it's not worth taking the risk. As you also say, there's no stopping anyone else from linking to the pages and / or Google going on an exploratory mission of its own.
As a note, internal 404s / 410s ("Gone") are perfectly okay if they're appropriate for the situation, i.e. a page has been removed. Not every removed resource has to be 301ed elsewhere. That isn't the case here, though.
To my mind, blocking bot access to these pages while they are empty is a better option, and noindex / follow would achieve this. I believe Er_Maqul has misunderstood what you were saying here - there is no "nofollow" in this situation.
-
Got it, I see.
Well, let's see here. I will state I am no expert in this realm. This is much more of a job for the likes of EGOL or RobertFischer, EGOL in particular with his intimate knowledge of NOINDEX:
http://www.mattcutts.com/blog/google-noindex-behavior/
(Scroll down and look at the first post on Matt Cutts' blog there.)
That being said, I still have a few thoughts.
I think you certainly could continue to do what you are doing. I also think that Er_Maqul brought up a point that I touched on as well. Obviously the best scenario is to avoid a 404 and to have original content. Unfortunately, if you don't have any content to write (or the person doesn't, in this case), that sets you up for thin content and a lot of duplicate content.
It seems to me there is no way to avoid having the extra pages without some sort of script or coding that houses the profiles somewhere other than on a separate page (or on a flux capacitor somewhere). So, going off that, you could create a generic "no profile" page that gets published and use the rel=canonical tag.
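A rough sketch of that canonical approach (the URL and helper name here are hypothetical):

```python
def canonical_link(default_profile_url):
    """Build a rel=canonical link element so empty profile pages all
    point at one generic "no profile yet" page instead of 404ing."""
    return '<link rel="canonical" href="%s">' % default_profile_url
```

Each empty profile would include this tag in its head, consolidating the thin pages onto the single generic URL; the tag is removed once the profile has real content.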
I take back my prior statement about "ANY" content. Thin, pointless content, is thin and pointless, and won't benefit you at all. I hope that wasn't interpreted that way.
Again, I think this one is somewhat out of my scope of help, and it might even be worth calling in an SEO professional who specializes in forums for a second opinion. It's like having surgery: you've got to go to another doctor to verify your diagnosis.
Sorry I couldn't give a better answer!
-
Wow - thanks for the thorough response, HashtagHustler!
Let me explain a little better...
We get hundreds of signups a day. Each new member has a profile page, which is empty until they do something. Sometimes they never do. So we don't link to the empty pages and they return a 404. As soon as the page has some content, we do link to it and it returns a 200.
Google is not reporting 404s for these pages because they are not linked to. In the past, when we did link to them, Google reported them as soft 404s.
The current system is working fine.
My question is simply whether it makes more sense to let Google find these pages (link to them) but noindex them while they have no content (and are considered soft 404s by Google), or whether we should continue doing what we do today (which makes me a little uncomfortable, since we are creating thousands of pages that return 404 and could theoretically be linked to by other sites).
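To make the comparison concrete, the two strategies can be sketched as decision logic (all names and fields here are my own shorthand, not anything from our actual system):

```python
def profile_response(has_content, strategy):
    """Sketch of the two strategies discussed for empty member profiles.

    'current'  - return 404 until the member adds content; internal
                 links stay JS-only so bots don't see them.
    'proposed' - always return 200, noindex empty profiles, and use
                 real internal links throughout.
    """
    if strategy == "current":
        if has_content:
            return {"status": 200, "robots": "index, follow", "real_links": True}
        return {"status": 404, "robots": None, "real_links": False}
    if strategy == "proposed":
        robots = "index, follow" if has_content else "noindex, follow"
        return {"status": 200, "robots": robots, "real_links": True}
    raise ValueError("unknown strategy: %s" % strategy)
```

The proposed variant never serves a 404 for a real member, at the cost of bots spending crawl budget on noindexed URLs.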
-
Yes... 404s can hurt your SEO campaign, because even if they don't directly cost you ranking points (I'm not sure about that), at the very least the spider will crawl your site more slowly. Because of this, you should eliminate as many potential 404 errors as you can.
Also, an empty page can still contain something. You can use default text, apply a noindex tag while the page is empty, or simply make a standard profile page and point all the empty pages to it with rel=canonical (I think that's the best option). Having more pages, even with little text, is better than not having the pages at all, and much better than having 404 errors.
One more thing to consider: the more your site changes, the more often it gets crawled and the shorter the intervals between indexing visits. That's another reason to have these pages working even before they have any custom info: creating a page is one change, and customizing it later is another.
-
Good Morning Yair!
A 404 is a 404, plain and simple. And if Google was able to report it, then Google was able to reach it, which means someone else could too. I'm not sure what platform you are currently using, but there are plenty of easy options for a quick fix.
If you wanted a simple option, you could just throw up a 302 if you are planning on putting content up soon. A 302 differs from a 301 in that it signals the move is temporary, so Google has to evaluate whether to keep the old page indexed; when you are planning on launching the page later (well, technically a new one), you might be able to use that to your advantage.
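As a sketch of that option (the /coming-soon URL and function name are invented for illustration):

```python
def placeholder_redirect(launch_planned):
    """If content is coming soon, 302 the empty profile to a holding
    page; the temporary status tells crawlers to keep the original URL.
    Otherwise fall back to the current 404 behaviour."""
    if launch_planned:
        return (302, "/coming-soon")
    return (404, None)
```

Returning a (status, location) pair like this keeps the original profile URL alive in Google's eyes while the page is still empty.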
A better idea would be to noindex/follow the pages (before they have content) and let the links to these pages be follow links.
I was unsure exactly what you meant by this. The only difference between a follow and a nofollow link is that nofollow tells Google not to let your link juice carry over to the page you are linking to; you are linking to the page without vetting it. My apologies if you already knew this, but I was slightly confused by your sentence. Google will still visit that page to check it out.
Another option: change your schedule and don't put up pages that far in advance of publishing. I edit everything offline. Google likes new stuff, especially content. Of course they take longevity into account as well, but as far as making a big splash, putting a website up piecemeal is like having people show up at your birthday saying "Surprise!" when you answer the door, when you were the one who invited them. The notion is nice, but it just doesn't have the same effect. Not only in the Google realm, but even more so in social media.
My favorite option, and my personal recommendation would be to play the cards you were dealt. Get rid of the 404 and the 200 and embrace that you have a new page! Go on and write a profile piece for the member. Write some sort of Biographical data that can act as a placeholder. At this point it doesn't even need to be stuffed with keywords and amazing seo phrases.
On second thought: I'm not sure exactly what your forum is for, or whether this issue is specific to a few members or applies to bulk membership, but I have a few ideas on how you might extrapolate some of the signup data into even a simple post to avoid getting a 404! Even if you just parsed some of the signup form fields and generated a simple one-page display from them, I think that could help.
At the end of the day, Google loves forums that are strong and authoritative. They also understand that not every single person on a forum is going to post and interact. So depending on what kind of forum you have, and what exactly you are doing, some of the forum issues will just have to be accepted. I think it would be more valuable to clean up negative linking, analyze internal issues, build forum recognition, etc. than to fix soft 404s coming from a handful of users. Again, all of this depends on the size of your pool.
Also, you could just track their usage. If someone is logging in all the time and not posting, then fine, I would leave it alone, because who knows what they are doing in the real world. If someone made an account in 1999, then threw their computer out of the window in Y2K and never bought another one because they still believe everything crashed, well then maybe it's time to fix that 404.
Hope that helps!
Sorry if my train of thought is a little off this morning.... not enough coffee!
-
Thanks for your quick response, Er.
You are correct about the 404s and I realized that what I wrote in the question was a mistake. We don't have any internal links to these pages (not even nofollow). Until there is content on the page, we make all links to the page into js links. I corrected this in the question now.
Concerning what you said about the pages being useful for SEO even without content: I don't think this is correct. Before we started 404ing the empty profile pages, Webmaster Tools was reporting them as soft 404s. Doesn't this mean that they were hurting us (especially since we have many of them)?
-
Any 404 is bad for SEO. Make a small page for the profile even without data, and you can still gain SEO value even if your pages don't have much content.
Even if a link has a nofollow, Google follows it to see what is on the other side. For this reason, always avoid linking to a 404 page whenever you can.