422 vs 404 Status Codes
-
We work with an automotive industry platform provider, and whenever a vehicle is removed from inventory, a 404 error is returned. Because inventory moves so quickly, we have a host of 404 errors in Search Console.
The fix the platform provider proposed was to return a 422 status code instead of a 404. I'm not familiar with how a 422 might impact our optimization efforts. Is this a good approach, given that there is no scalable way to 301 redirect all of those dead inventory pages?
-
Thanks, Mike.
Your initial solution would be preferred, but it's not scalable. We are talking about over 100 websites with varying levels of inventory.
I was thinking along the lines of keeping the 404 or 410 status. It just seemed odd when the vendor proposed a 422 error, since it's not a preferred option in Google's support pages. I was wondering whether anyone has used the 422 response code before and, if so, why.
-
Personally, I think you should set up a process whereby every time a vehicle and/or part is removed, it is automatically 301 redirected to the previous step in the site navigation. So when "blue widget 3" is removed from the site, anyone landing on that page, or who has it bookmarked, winds up on the "Widget" category page. There may not be an easy way to do it right this second because of how many there are now, but if you get into the habit of doing it and slowly work through the backlog, you'll be in a good position to keep this from becoming an issue again.
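The "redirect to the previous step in the navigation" idea can be automated if the URLs are hierarchical. Here's a minimal sketch of mapping a removed page's URL to its parent category; the `/inventory/<category>/<slug>` path layout is a hypothetical assumption, so adjust the splitting to match the platform's real URL structure.

```python
from urllib.parse import urlsplit

def redirect_target(removed_url: str) -> str:
    """Map a removed inventory URL to its parent category page.

    Assumes a hypothetical /inventory/<category>/<slug> path layout;
    adjust the splitting to match your platform's real URL structure.
    """
    path = urlsplit(removed_url).path.rstrip("/")
    parent = path.rsplit("/", 1)[0]  # drop the last path segment
    return parent or "/"             # fall back to the homepage
```

Wire that target into whatever issues the 301, and each removal produces its redirect with no manual step.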
Now if you really don't want to attempt that: 404s aren't necessarily horrible (though too many can be). If your site is properly serving 404s, you won't be penalized for them, but in this case you might want to consider using 410 status codes instead. A 410 is a stronger removal signal than a 404, and since you don't plan on the product ever coming back, marking it Gone should get it removed from the index faster, while also helping to keep you from competing against yourself in the SERPs when a new but similar product comes into stock.
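The 404-vs-410 decision boils down to whether the removal is deliberate and permanent. A tiny sketch of that logic, where `in_stock` and `sold_for_good` are hypothetical stand-ins for your inventory data:

```python
def status_for(vin: str, in_stock: set, sold_for_good: set) -> int:
    """Pick an HTTP status for a vehicle page (illustrative sketch;
    the two sets are hypothetical stand-ins for real inventory data).
    """
    if vin in in_stock:
        return 200  # still in inventory: serve the page normally
    if vin in sold_for_good:
        return 410  # Gone: deliberate, permanent removal signal
    return 404      # Not Found: the URL was never, or is no longer, known
```

The point is simply that the server distinguishes "sold and never coming back" (410) from "we don't know this URL" (404).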
-
Do pages of vehicles that are in inventory for a short time actually deliver monetizable traffic?
If the answer is no, because they are up for such a short amount of time, you would have to weigh the value of having them indexable in the first place against creating an ever-growing list of missing pages.
Having a lot of 404s or 422s is a bit of a negative. Is there really no way to add a 301 redirect as a step in their removal?
Making the pages non-indexable via noindex once they are already indexed will not remove them. You either have to 301 them or request removal from Google's index. Is there a programmatic way to turn their removal into a 301 to the top inventory category page?
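A programmatic 301 to the top category page can be done at the application layer. Here's a minimal WSGI sketch; `removed_paths` and `TOP_CATEGORY` are illustrative assumptions, not anything the platform actually exposes, and in practice the lookup would hit the inventory database rather than an in-memory set.

```python
# Hypothetical data: paths of removed vehicles, and the redirect target.
removed_paths = {"/inventory/widgets/blue-widget-3"}
TOP_CATEGORY = "/inventory"

def app(environ, start_response):
    """Minimal WSGI app: 301 removed inventory paths to the top
    category page; serve everything else normally."""
    path = environ.get("PATH_INFO", "/")
    if path in removed_paths:
        start_response("301 Moved Permanently",
                       [("Location", TOP_CATEGORY)])
        return [b""]
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"vehicle detail page"]
```

Because the check runs on every request, removing a vehicle from inventory automatically turns its old URL into a redirect with no per-page manual work.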
Good luck!
-
A 422 is an Unprocessable Entity error, which I think will have about as much impact as a 404 (page not found).
You could make pages non-indexable once a vehicle has been removed from the inventory. This shouldn't impact your SEO efforts.
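One way to apply noindex without touching page templates is the `X-Robots-Tag` response header, which works like a meta robots tag but at the HTTP level. A small sketch, where the boolean flag is a hypothetical stand-in for the inventory check:

```python
def page_headers(vehicle_removed: bool) -> list:
    """Build response headers for a vehicle page; once the vehicle is
    gone, add an X-Robots-Tag noindex directive (equivalent to a meta
    robots noindex tag, but set at the HTTP level, so no template
    changes are needed). The flag is a hypothetical stand-in for a
    real inventory lookup.
    """
    headers = [("Content-Type", "text/html")]
    if vehicle_removed:
        headers.append(("X-Robots-Tag", "noindex"))
    return headers
```

Note that, as mentioned above, crawlers must still recrawl the page to see the directive, so already-indexed pages won't disappear immediately.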