How can I avoid 404 errors when taking pages down?
-
So...
We're running a blog that was supposed to have great content. After working on its SEO for a while, I discovered a lot of keyword stuffing and some spammy WordPress SEO tricks that were supposed to make it rank better. That actually worked, but I'm not willing to risk getting slapped by Google's Panda update.
So we decided to restart the blog from scratch and make a better attempt.
Here's the thing: every page was already ranking in Google.
SEOMoz hasn't crawled the site yet, but I'm pretty sure the crawl will report a lot of 404 errors.
My question is: can I avoid these errors with some tool in Google Webmaster Tools, such as sitemaps, or should I set up rel=canonical tags or 301 redirects?
Will Google penalize me for the 404s? It seems obvious to me that the answer is yes.
Please, help
-
Thanks for your help.
-
There won't be any penalty if the 404 pages are redirected to other relevant pages. At the same time, Google prefers to see a 404 status for pages that genuinely no longer exist.
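For example, when an old post has a close equivalent on the restarted blog, a 301 can be set in Apache's .htaccess (the usual setup for WordPress hosts); both paths below are illustrative placeholders, not real URLs from the site:

```apache
# Map an old post to its nearest equivalent on the restarted blog
# (both paths are placeholders for illustration)
Redirect 301 /old-keyword-stuffed-post/ /fresh-post/
```

Pages with no good replacement are best left returning a plain 404 rather than being redirected somewhere unrelated.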
A couple of steps may reduce the 404s:
1. Check your on-page / internal links to confirm they point to working URLs, and update any that don't
2. If your pages are linked from other sites, check the URLs those links use
3. If you use free or paid directories, paid campaigns, etc., check that the URLs submitted there are still valid
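Step 1 can be partially automated. A minimal sketch (not a Moz tool; the example.com page and URLs are placeholders) that extracts the internal links from a page's HTML so each one can then be fetched and checked for a 404:

```python
# Sketch: collect a page's internal links so they can be checked for 404s.
# The example.com URLs are placeholders, not real pages.
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class LinkExtractor(HTMLParser):
    """Collect href values from <a> tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def internal_links(html, base_url):
    """Return absolute URLs of links that stay on the same host as base_url."""
    parser = LinkExtractor()
    parser.feed(html)
    host = urlparse(base_url).netloc
    absolute = (urljoin(base_url, href) for href in parser.links)
    return [url for url in absolute if urlparse(url).netloc == host]

# Each URL returned here would then be fetched (e.g. with urllib.request)
# and any response with status 404 flagged for an update or a 301 redirect.
page = '<a href="/about">About</a> <a href="https://other.site/x">Ext</a>'
print(internal_links(page, "https://example.com/blog"))
# → ['https://example.com/about']
```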
Related Questions
-
How long does it take for Moz to discover links to pages
Hi folks, Our website is doing well in the Google rankings relative to our competitors who often have higher "Domain authority" than us as reported by Moz. I'm wondering how closely Moz's "Domain Authority" correlates with Google's. In particular, I wonder how long it takes Moz to discover inbound links. For instance our page at http://www.educationquizzes.com/ks3/english has many inbound links from pages on an outstanding educational website and yet our page authority is given by Moz as a measly "1"! Any insights would be very much appreciated.
Technical SEO | colinking0 -
How to avoid duplicate content on internal search results page?
Hi, according to Webmaster Tools and Siteliner our website has an above-average amount of duplicate content. Most of the pages are search results pages where it finds only one result. The only differences in this case are the TDK, H1 and the breadcrumbs. The rest of the layout is pretty static and similar. Here is an example of two pages with "duplicate content": https://soundbetter.com/search/Globo https://soundbetter.com/search/Volvo Edit: These are legitimate searches that happen to return the same result. In this case we want users to be able to find the audio engineers by 'credits' (musicians they've worked with). Tags. We want to rank for people searching for 'engineers who worked with'. Searching for two different artists (credit tags) returns this one service provider under different URLs (the tag being the search parameter), hence the duplicate content. I guess every e-commerce/directory website faces this kind of issue. What is the best practice to avoid duplicate content on search results pages?
Technical SEO | ShaqD1 -
Best action to take for "error" URLs?
My site has many error URLs that Google webmaster has identified as pages without titles. These are URLs such as: www.site.com/page???1234 For these URLs should I: 1. Add them as duplicate canonicals to the correct page (that is being displayed on the error URLs) 2. Add 301 redirect to the correct URL 3. Block the pages in robots.txt Thanks!
Technical SEO | theLotter0 -
Googlebot takes 5 times longer to crawl each page
Hello All From about mid September my GWMT has shown that the average time to crawl a page on my site has shot up from an average of 130ms to an average of 700ms, with peaks at 4000ms. I have checked my server error logs and found nothing there, and I have checked with the hosting company and there are no issues with the server or other sites on the same server. Two weeks after this my ranking fell by about 950 places for most of my keywords etc. I am really just trying to eliminate this as a possible cause of these ranking drops. Or was it the Panda / EMD algo that has done it? Many Thanks Si
Technical SEO | spes1230 -
Secondary Pages Indexed over Primary Page
I have 4 pages for a single product. Each of the pages links to the main page for that product. Google is indexing the secondary pages above my preferred landing page. How do I fix this?
Technical SEO | Bucky0 -
404 Errors - How to get rid of them?
Hi, I am starting an SEO job on an academic site that has been completely redone. The SEOMoz crawl detected three 404 Errors to pages that cannot be found anywhere on either Joomla or the server. What can I do to solve this? Thanks!!
Technical SEO | michalseo0 -
I have 15,000 pages. How do I have the Google bot crawl all the pages?
I have 15,000 pages. How do I have the Google bot crawl all the pages? My site is 7 years old. But there are only about 3,500 pages being crawled.
Technical SEO | Ishimoto0