404-like content
-
A site that I look after is getting lots of soft 404 reports for pages that are not 404s at all but unique content pages.
The following page is an example:
http://www.professionalindemnitynow.com/medical-malpractice-insurance-clinics
This page returns a 200 response code and has unique content, but is not getting indexed. Any ideas?
To add further information that may well impact your answer, let me explain how this "classic ASP" website performs its SEO-friendly URL mapping:
All pages within the custom CMS have a unique ID, which is referenced with an ?intID=xx parameter.
The custom 404.asp file receives the request, looks up the ID to find matching content in the CMS, and then uses Server.Transfer to send the visitor to the correct page.
Like I said, the response codes are set up correctly, as far as Firebug can tell me.
Any thoughts would be most appreciated.
-
Scott, if you fix the problem by using the Global.asax file, remember to make sure that the 404 page does then return a 404.
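To illustrate (just a sketch, assuming you go the Global.asax/ASP.NET route from my other comment; the message text is made up), whatever page or handler ends up rendering the "not found" content should set the status explicitly:

```csharp
// Sketch only: the genuine not-found path in an ASP.NET page or handler.
Response.StatusCode = 404;                // return a real 404, not a soft 200
Response.TrySkipIisCustomErrors = true;   // IIS7+: keep this content instead of the default IIS error page
Response.Write("<h1>Sorry, the page you requested cannot be found.</h1>");
```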
-
I think Google detects a soft 404 like this:
http://www.professionalindemnitynow.com/gobblygook should return a 404 but returns a 200, so they now know that your site is prone to soft 404s.
But how they then decide which pages on the site are and are not soft 404s is not clear. From what I have read, my best understanding is that they then look for similarities to the known soft 404s, such as timings and other criteria.
-
This is the point: it should return a 404 but instead returns a 200, which is what is called a soft 404.
See my other comment on how to fix it.
-
The page we are discussing is not listed in the image you shared.
I checked one link which is listed: http://www.professionalindemnitynow.com/business-consultants-quote
The top of the page says "Error - The page you have tried to access cannot be found"
While the page returns a 200 header code, Google is likely seeing that page header text and recognizing the page as "404-like", as described in their report.
-
You could try using either the Global.asax file or an HTTP module to do the rewriting; Global.asax would be the easiest.
From memory, the Application_BeginRequest event would be the one to use.
The thing is, you need to do the rewriting earlier in the request pipeline.
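Roughly something like this (a minimal sketch only, assuming the site runs through the ASP.NET pipeline so Global.asax actually sees these requests; LookupPageId and the content.asp page name are made-up placeholders for however your CMS maps a friendly URL to its intID):

```csharp
// Global.asax.cs - minimal URL-rewriting sketch using Application_BeginRequest.
using System;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        // The friendly slug, e.g. "medical-malpractice-insurance-clinics".
        string slug = Request.Url.AbsolutePath.Trim('/');

        int? pageId = LookupPageId(slug); // hypothetical CMS lookup
        if (pageId.HasValue)
        {
            // Rewrite internally to the real CMS page; the visitor still sees
            // the friendly URL and gets a normal 200 response.
            Context.RewritePath("/content.asp?intID=" + pageId.Value);
        }
        // No match: fall through so the request reaches the genuine 404 page,
        // which must then return a real 404 status code.
    }

    private static int? LookupPageId(string slug)
    {
        // Placeholder: the real version would query the CMS database.
        return null;
    }
}
```

That way the friendly-URL mapping happens before any error handling is involved, and the 404 handler only ever serves genuine 404s.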
-
Thanks Yannick. I completely agree that the content of the page uses the keywords too frequently. This is the site owner claiming to "understand" SEO! I will advise him that he needs to calm down the keyword stuffing.
I'm going to add the page, and other similar landing pages that are used for AdWords, to the public sitemap.
-
The reason I refer to it as a soft 404 is the listing within Webmaster Tools. See the attached image for more examples.
You're right - it is not in the sitemap, which I need to address, but I still don't see why Google detects this as a 404 when it clearly returns a 200.
Thanks for your response.
-
Hi Scott.
I am confused why you refer to the link you shared as a soft 404: http://www.professionalindemnitynow.com/medical-malpractice-insurance-clinics. The page title is "Medical Malpractice Insurance for Clinics", which is a perfect match for the URL. The page returns a 200 response header code. By all counts this appears to be the proper page that should be returned, and not a 404 in any way.
If you have a 404 error log file which shows this page as a 404 error, that issue is completely internal to your site. From the perspective of Google and the rest of the world, your site is working perfectly. If the only place the page shows as a 404 is your log file, you want to check with a developer to determine exactly what is triggering the log entry.
With respect to indexing, I support Yannick's findings.
-
I'd say: the URL isn't accessible via the menu? I can't find it anywhere. I tried looking under http://www.professionalindemnitynow.com/Medical-malpractice-insurance but couldn't find a link to the page. Is the page only located in your sitemap? That might be why it isn't indexed. Link to it (more!).
The other thing is, of course, the high keyword density/spammy usage of the keywords you are targeting.