403s vs 404s
-
Hey all,
Recently launched a new site on S3, and old pages that I haven't been able to redirect yet are showing up as 403s instead of 404s.
Is a 403 worse than a 404? They're both just basically dead-ends, right? (I have read the status code guides, yes.)
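For anyone wanting to verify exactly which status codes the old URLs return, a minimal Python sketch like this works; the domain and paths below are placeholders, not the actual site:

```python
# Minimal sketch: report the status code each old URL actually returns.
# The domain and paths are placeholders for the real old URLs.
import requests

OLD_URLS = [
    "https://example.com/blog/",
    "https://example.com/old-page.html",
]

for url in OLD_URLS:
    # HEAD is enough to read the status line; requests does not follow
    # redirects for HEAD by default, so a 301 shows up as a 301.
    resp = requests.head(url, timeout=10)
    print(resp.status_code, url)
```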
-
Oh I'm sorry I clearly misunderstood the question.
I haven't seen any studies or testing on this, but I have to assume spiders treat a 403 much like any other dead end. I certainly don't think it's more damaging than a 404 would be. A 404 tends to be ignored at first and only acted on if a certain amount of time passes and the page is still not found; Google doesn't make a habit of instantly removing URLs unless you ask it to.
At the very worst, the 403/404 error would de-index that particular URL, but this should not affect the rankings of your other pages or your site as a whole. And I'd expect it to take at least a good 30 days before Google stops crawling those URLs. That said, it shouldn't be crawling them at all if there are no links pointing to them, either internally or externally. And if there are links pointing to the pages in question, you should be redirecting them via 301 (assuming, of course, they are links you want).
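Since the new site is on S3, one way to serve those 301s is with an S3 static-website routing rule. Here's a hedged boto3 sketch; the bucket name and prefixes are placeholders, and it assumes the bucket is already configured for static website hosting:

```python
# A sketch of 301-redirecting an old path prefix on S3 static hosting
# using a routing rule. Bucket name and prefixes are hypothetical.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_website(
    Bucket="my-site-bucket",  # placeholder bucket name
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "404.html"},
        "RoutingRules": [
            {
                # Requests for the old prefix...
                "Condition": {"KeyPrefixEquals": "old-blog/"},
                # ...get a 301 to the new prefix.
                "Redirect": {
                    "HttpRedirectCode": "301",
                    "ReplaceKeyPrefixWith": "blog/",
                },
            }
        ],
    },
)
```

Note that routing rules only match on a key prefix (or a returned error code), so anything fancier than prefix-to-prefix mapping would need a different mechanism.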
Hope this was more helpful.
-
Hi Jesse,
Thanks for your response!
I understand the reason the 403s are happening; I was more curious as to whether they are more damaging to rankings, when hit by a spider, than a 404 would be.
-
A 403 is a "Forbidden" response, returned only when the server is told to block access to a file or directory. If the site was built with WordPress in the past and has directories that match current directories, it may be returning 403 errors because the directory structure no longer lines up.
This is hard to explain, and I think my wording is confusing.
Say your old site had domain.com/blog/ pointing to your blog's index, but now domain.com/blog/contents.html is the index. A request for /blog/ would be trying to pull a directory listing, and a server will normally return a 403 Forbidden for such requests automatically.
Does this make sense? Might not be what's going on, but it's one possibility.
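Another S3-specific possibility worth checking, since the original question mentions S3: S3 returns a 403 rather than a 404 for a nonexistent key when the requester lacks s3:ListBucket permission on the bucket. If those 403s should become 404s, a bucket policy along these lines is one sketch of a fix; the bucket name is a placeholder:

```python
# Hedged sketch: on S3, a request for a key that doesn't exist returns
# 403 (Access Denied) unless the caller also has s3:ListBucket, in
# which case S3 returns 404. This policy grants public read plus list
# so missing pages surface as 404s. Bucket name is a placeholder.
import json
import boto3

BUCKET = "my-site-bucket"  # placeholder

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
        },
        {
            # Trade-off: this lets anyone enumerate every key in the bucket.
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:ListBucket",
            "Resource": f"arn:aws:s3:::{BUCKET}",
        },
    ],
}

boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```

Whether to do this is a judgment call: granting public s3:ListBucket means anyone can list every object in the bucket.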
Related Questions
-
Are there ways to avoid false-positive "soft 404s" from Google?
Sometimes I get alerts from Google Search Console that it has detected soft 404s on different websites, and since I take great care to never have true soft 404s, they are always false positives. Today I got one on a website that has pages promoting some events. The language on the page for one event that has sold out says that "tickets are no longer available," which seems to have tripped up Google into thinking the page is a soft 404. It's kind of incredible to me that in the current era, with things like ChatGPT, Google doesn't seem to understand natural language. But that has me thinking: are there strategies or best practices we can use in how we write copy so Google doesn't flag a page as a soft 404? It seems like anything that could tell a user an item isn't available could trip it into thinking the page is a 404. In the case of my page, it's actually important to tell the public that an event has sold out, but to use their interest in that event to promote other events. So I don't want the page deindexed or ranking poorly!
Technical SEO | IrvCo_Interactive
-
Personalized Content vs. Cloaking
Hi Moz Community, I have a question about personalization of content: can we serve personalized content without being penalized for serving different content to robots vs. users? If content starts in the same initial state for all users, including crawlers, is it safe to assume there should be no impact on SEO, since personalization won't happen for anyone until there is some interaction? Thanks,
Technical SEO | znotes
-
Direct link vs 302 redirect
So we have recently relaunched a site that we manage. As part of this, we have changed the domain. The web design agency that built the new site has implemented a direct link from the old domain to the new domain. What is best practice: a direct link or a 302 redirect? Thanks
Technical SEO | cbarron
-
Www2 vs www problem
Hi, I have a website that has an old version and a new version. The content is not duplicated across the two versions. The point is that the old version uses www. and non-www before the domain, and the new one uses www2. My question is: is that a problem, and what should be done? Thank you in advance!
Technical SEO | TihomirPetrov
-
Are thousands of 404s a problem?
An ecommerce site I work on has around 16,000 URLs that are 404s in Webmaster Tools. The vast majority are for products that are no longer stocked by the site, which is a natural occurrence in ecommerce. But my question is, could these possibly be harming rankings?
Technical SEO | creativemay
-
Root directory vs. subdirectories
Hello. How much more important does Google consider pages in the root directory relative to pages in a subdirectory? Is it best to keep the most important pages of a site in the root directory? Thanks!
Technical SEO | nyc-seo
-
Noindex vs. page removal - Panda recovery
I'm wondering whether there is a consensus within the SEO community as to whether noindexing pages vs. actually removing pages is different from Google Panda's perspective. Does noindexing pages have less value, when removing poor-quality content, than physically removing them, i.e., either 301ing or 404ing the page and removing the links to it from the site? I presume that removing pages has a positive impact on the amount of link juice that gets to some of the remaining pages deeper in the site, but I also presume this doesn't have any direct impact on the Panda algorithm? Thanks very much in advance for your thoughts, and corrections on my assumptions 🙂
Technical SEO | agencycentral
-
Syndication: Link back vs. Rel Canonical
For content syndication, let's say I have the choice of (1) a link back or (2) a cross-domain rel canonical to the original page. Which one would you choose, and why? (I'm trying to pick the best option to save dev time!) I'm also curious what the difference in the SERPs would be between the link-back and canonical solutions, both for the original publisher and for syndication partners. (I would prefer the syndication partners not disappear entirely from the SERPs; I just want to make sure I'm first!) A side question: what's the difference in real life between the Google source attribution tag and the cross-domain rel canonical tag? Thanks! PS: Don't know if it helps, but note that we can syndicate one article to multiple syndication partners (it wouldn't be impossible to see one article syndicated to 50 partners).
Technical SEO | raywatson