Are there ways to avoid false-positive "soft 404s" from Google?
-
Sometimes I get alerts from Google Search Console that it has detected soft 404s on different websites, and since I take great care to never have true soft 404s, they are always false positives.
Today I got one on a website that has pages promoting events. The page for one event that has sold out says that "tickets are no longer available," which seems to have tripped up Google into thinking the page is a soft 404.
It's kind of incredible to me that in the current era, with things like ChatGPT, Google doesn't seem to understand natural language. But that has me thinking: are there strategies or best practices for how we write copy on a page so Google doesn't flag it as a soft 404? It seems like anything that tells a user an item isn't available could trip it into treating the page as a 404. On my page, it's genuinely important to tell the public that an event has sold out, but also to use their interest in that event to promote other events, so I don't want the page deindexed or ranking poorly!
-
@IrvCo_Interactive Google's algorithms are not perfect and sometimes can misinterpret the content on a page.
In terms of strategies or best practices for writing copy on a page to avoid triggering a soft 404, one approach is to ensure that the content is unique, relevant, and provides value to the user. Make sure that the page contains substantial content that gives context and information about the event, even if it is sold out. This can include details about past events, photos, videos, or testimonials from attendees.
You can also consider using structured data markup to explicitly indicate that the event is sold out, which can help Google better understand the page's content. In Schema.org Event markup this is typically expressed on the ticket offer, by setting the offer's "availability" property to SoldOut (the "eventStatus" property covers cancellations, postponements, and reschedules rather than sell-outs).
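As a rough illustration (the event name, date, venue, price, and URLs below are placeholders, not taken from the question), sold-out Event markup embedded in the page could look something like this:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Event",
  "name": "Example Summer Concert",
  "startDate": "2025-08-15T19:00",
  "location": {
    "@type": "Place",
    "name": "Example Hall",
    "address": "123 Main St, Springfield"
  },
  "eventStatus": "https://schema.org/EventScheduled",
  "offers": {
    "@type": "Offer",
    "url": "https://www.example.com/events/summer-concert/",
    "price": "35.00",
    "priceCurrency": "USD",
    "availability": "https://schema.org/SoldOut"
  }
}
</script>

The page should still return a normal 200 status; the markup simply gives Google an explicit, machine-readable signal that "sold out" is intentional rather than an error state.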
Another approach is to use clear and specific language when describing the event's availability. Instead of a bare "no longer available," consider wording like "this event has sold out," ideally alongside links to related or upcoming events, so that both users and search engines can see the page still serves a purpose and is not an error page.
Related Questions
-
404s on subfolder - how to redirect?
Hi all, we have a lot of 404s to subfolders, e.g. www.website.com/blog-post-title/imagename/ and www.website.com/blog-post-title/author/. We don't have these subfolders or blog posts anymore. How do I redirect them? These links (404s) don't seem to have any value or backlinks. Thanks, Stef
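For illustration only, assuming an Apache server and reusing the placeholder paths from the question, a couple of 301 rules in .htaccess might look like:

# Redirect one specific dead subfolder URL back to the blog post
Redirect 301 /blog-post-title/author/ /blog-post-title/
# Or catch every /author/ or /imagename/ child of a post with one pattern
RedirectMatch 301 ^/([^/]+)/(author|imagename)/$ /$1/

Since the question notes these URLs have no value or backlinks, simply letting them keep returning 404 (or 410) is also a reasonable option; the redirects just tidy up the crawl.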
Technical SEO | MFSMarketing
-
Can using protocol-relative "//" shorthand (instead of "http:") inside the canonical tag cause harm?
Hi, I am planning to launch a new site and, shortly after, to move it to HTTPS. To avoid having to change over 5,000 canonical tags across the pages, the webmaster suggested we use a protocol-relative "//" inside the rel="canonical" instead of the absolute path. Would that do any damage or be a problem?
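For reference, the two forms being weighed look like this (the domain is a placeholder; the "oranges-south-dakota" slug comes from the fragment left in the original post):

<!-- protocol-relative canonical, as the webmaster suggested -->
<link rel="canonical" href="//www.example.com/oranges-south-dakota" />
<!-- absolute canonical, the form Google generally recommends -->
<link rel="canonical" href="https://www.example.com/oranges-south-dakota" />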
Technical SEO | Kung_fu_Panda
-
Rel="publisher" validation error in html5
Using HTML5, I am getting a validation error in my HTML: "Bad value publisher for attribute rel on element link: Not an absolute IRI. The string publisher is not a registered keyword or absolute URL." This just started showing up on Tuesday; it never appeared in past validation runs. Has something changed?
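For context, rel="publisher" is the old Google+ markup that ties a site to its brand page, and the tag that trips the validator usually looks something like this (the URL is a placeholder):

<link rel="publisher" href="https://plus.google.com/+ExampleBrandPage" />

As the error text itself says, the validator objects because "publisher" is not a registered rel keyword for the link element, not because of anything in the href.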
Technical SEO | RoxBrock
-
Thousands of 404s
Hi there, I'm working on a site that has a ridiculous number of 404s being returned by Webmaster Tools. We believe this was because an on-page error was amending the URLs, adding in folders that shouldn't have been there in a big spiral, i.e. /salons/uk/teeth became something like /salons/uk/teeth/salons/edinburgh/hair/teeth... Anyway, we think the issue is now sorted, but these pages were indexed it seems, and so it looks like Google is still requesting them when it crawls the site. What's my best move? It's the sheer volume (over 13,000) that has me concerned, so I thought it best to seek some expert advice before continuing. Thanks in advance!
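One illustrative option, assuming Apache and reusing the runaway pattern from the question, is to answer the spiralled URLs with 410 Gone, which Google tends to drop from its crawl queue a little faster than a plain 404:

# Hypothetical sketch: any URL that repeats the /salons/ segment is left over
# from the crawl spiral, so return 410 Gone for it
RedirectMatch 410 ^/salons/.+/salons/

For what it's worth, Google has long said that 404s for genuinely removed pages don't harm the rest of the site; the volume mostly just makes the reports noisy.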
Technical SEO | LeahHutcheon
-
URL Structure for "Find A Professional" Page
I've read all the URL structure posts out there, but I'm really undecided and would love a second opinion. Currently, this is how the developer has our professionals directory working: you search by inputting your ZIP code and selecting a category (such as Pool Companies), and we return all professionals within an X-mile radius of that ZIP. This is how the URLs are structured:
1. Main page: /our-professionals
2. After a search for "Deck Builders" in ZIP 19033, the URL looks like this: /our-professionals?zipcode=19033&HidSuppliers=&HiddenSpaces=&HidServices=&HidServices_all=[16]%2C&HidMetroareas=&srchbox=
3. When I click one of the businesses, the URL looks like this: viewprofile.php?id=409
I know how to go about doing this, but I'm undecided on the best structure for the URLs. Maybe for results pages do this: find-professionals/deck-builders/philadelphia-pa-19033 And for individual pros' profiles do this: /deck-builders/philadelphia-pa-19033/Billys-Deck-Service Any input on how to best structure this so that we have a good chance of showing in SERPs for "Deck Builders near New Jersey" and the like would be much appreciated.
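Purely as a sketch of how the cleaner results URLs could be mapped onto the existing handler (assuming Apache mod_rewrite; the category and location parameter names are guesses, since the real query string above uses different ones):

RewriteEngine On
# e.g. /find-professionals/deck-builders/philadelphia-pa-19033
# internally served by the existing search page
RewriteRule ^find-professionals/([^/]+)/([^/]+)/?$ /our-professionals?category=$1&location=$2 [L,QSA]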
Technical SEO | zDucketz
-
SeoMoz crawler giving false positives?
The SeoMoz crawler indicated a few times that my site has a duplicate home page error (http://mysite.com and www.mysite.com). I eliminated the couple of remaining internal links that pointed to http://mysite on a couple of pages (all other internal links point to http://www.mysite.com). I ran the crawl again and it said no errors this time. I naturally thought the duplicate page error was fixed. However, this morning I got the regularly scheduled crawl report from SeoMoz that said, again, that I have those duplicate error pages. No changes were made to any of my site's pages between the crawls. That makes me wonder if the crawler gives false positives at times, or if it was wrong when it said a couple of days ago that I don't have any errors (no duplicate page error). Now I don't know what to think.
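For what it's worth, the usual way to take the www/non-www duplicate off the table entirely is a site-wide 301 from one host to the other. A sketch, assuming Apache and using the placeholder domain from the question:

RewriteEngine On
# Send http://mysite.com/anything to http://www.mysite.com/anything with a 301
RewriteCond %{HTTP_HOST} ^mysite\.com$ [NC]
RewriteRule ^(.*)$ http://www.mysite.com/$1 [R=301,L]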
Technical SEO | finalfrontier
-
I have a ton of "duplicate content" and "duplicate titles" on my website; solutions?
Hi, and thanks in advance. I have a Jomsocial site with 1000 users. It is highly customized, and as a result of the customization some of the pages have 5 or more different types of URLs pointing to the same page. Google has indexed 16,000 links already and the crawl report shows a lot of duplicated content. These links are important for some of the functionality, are dynamically created, and will continue growing. My developers offered to create rules in the robots file so that a big part of these links don't get indexed, but Google's Webmaster Tools post says the following: "Google no longer recommends blocking crawler access to duplicate content on your website, whether with a robots.txt file or other methods. If search engines can't crawl pages with duplicate content, they can't automatically detect that these URLs point to the same content and will therefore effectively have to treat them as separate, unique pages. A better solution is to allow search engines to crawl these URLs, but mark them as duplicates by using the rel="canonical" link element, the URL parameter handling tool, or 301 redirects. In cases where duplicate content leads to us crawling too much of your website, you can also adjust the crawl rate setting in Webmaster Tools." Here is an example of the links:
http://anxietysocialnet.com/profile/edit-profile/salocharly
http://anxietysocialnet.com/salocharly/profile
http://anxietysocialnet.com/profile/preferences/salocharly
http://anxietysocialnet.com/profile/salocharly
http://anxietysocialnet.com/profile/privacy/salocharly
http://anxietysocialnet.com/profile/edit-details/salocharly
http://anxietysocialnet.com/profile/change-profile-picture/salocharly
So the question is: is this really that bad? What are my options? Is it really a good solution to set rules in robots so big chunks of the site don't get indexed? Is there any other way I can resolve this? Thanks again! Salo
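To illustrate the rel="canonical" option Google describes above (assuming the plain profile URL is the version meant to be indexed, which is a guess), every one of those URL variants would carry the same tag in its head:

<link rel="canonical" href="http://anxietysocialnet.com/salocharly/profile" />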
Technical SEO | Salocharly
-
Rel="canonical" for PFDs?
Hello there, We have a lot of PDFs that seem to end up on other websites. I was wondering if there was a way to make sure that our website gets the credit/authority as the original creator. Besides linking directly from the PDF copy to our pages, is anyone aware of a strategy for letting Google know that we are the original publishers? I know search engines can index HTML versions of PDFs, so is there any way to get them to index a rel="canonical" tag as well? Thoughts/ideas?
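A PDF can't carry a link element, but Google also accepts rel="canonical" sent as an HTTP header, which is the usual route for this. A sketch of what that could look like in an Apache config (file and page names are placeholders, and it assumes mod_headers is enabled):

<Files "example-white-paper.pdf">
  Header add Link "<https://www.example.com/example-white-paper/>; rel=\"canonical\""
</Files>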
Technical SEO | Tektronix