What to do with removed pages and 404 errors
-
I recently removed about 600 'thin' pages from my site, which are now showing as 404 errors in WMT as expected. As I understand it, I should just let these pages 404 and eventually they'll be dropped from the index. There are no inbound links pointing at them, so I don't need to 301 them. They keep reappearing in WMT as 404s, though, so should I just 'mark as fixed' until they stop appearing? Is there any other action I need to take?
-
If they are truly gone, then a 410 would be the best option for you. Since they are still indexed, people can find them through search even if no links point at them. You never know when one of these URLs will show up in results, because you don't know how long Google will take to drop them.
http://www.checkupdown.com/status/E410.html
"The 410 error is primarily intended to assist the task of Web maintenance by notifying the client system that the resource is intentionally unavailable and that the Web server wants remote links to the URL to be removed. Such an event is common for URLs which are effectively dead i.e. were deliberately time-limited or simply orphaned. The Web server has complete discretion as to how long it provides the 410 error before switching to another error such as 404"
We did this for a client that needed old defunct pages removed. Once you set the pages to return a 410 and use Google's URL removal tool, you should see them dropping off quickly (all of ours were gone within a month). Having that many pages return a 404 may also hurt the experience of your users: when they see a 404, they go straight for the back button.
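If you want to confirm the change across all 600 URLs at once, a quick script can do it. Here's a minimal Python sketch, assuming the removed URLs are listed one per line in a file (removed_urls.txt is just a placeholder name) and that the `requests` library is installed:

```python
# Minimal sketch: bulk-check the HTTP status of removed URLs to confirm
# they now return 410 (or 404). Assumes one URL per line in the
# hypothetical file removed_urls.txt.
import requests

with open("removed_urls.txt") as f:
    urls = [line.strip() for line in f if line.strip()]

for url in urls:
    try:
        # HEAD reads the status code without downloading the body;
        # allow_redirects=False so a lingering 301/302 shows up as itself.
        resp = requests.head(url, allow_redirects=False, timeout=10)
        if resp.status_code in (404, 410):
            print(f"OK:    {url} returned {resp.status_code}")
        else:
            print(f"CHECK: {url} returned {resp.status_code}")
    except requests.RequestException as exc:
        print(f"ERROR: {url} ({exc})")
```

HEAD requests keep the check fast, and disabling redirects means any URLs that were accidentally redirected instead of removed will stand out in the output.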
-
A 410 is the recommended way to tell search engines the page is gone. All of the suggestions above are facets of how you should deal with this issue. Sorry for the brevity and terrible punctuation; the Moz forum is a pretty iffy thing via mobile, and my eggs are getting cold.
-
Hi!
The reason these pages keep popping up in WMT is that they have already been indexed. You could try to remove them from Google's index using the removal tool in WMT (https://www.google.com/webmasters/tools/url-removal), or by setting up 301 redirects from them to more suitable pages.
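If some of the removed pages do have a close replacement, a hybrid approach works too: 301 those and 410 the rest. Here's a minimal sketch of that routing logic using Flask; the framework, the paths, and the REDIRECTS/GONE data are all assumptions, since the original poster's stack isn't stated:

```python
# Minimal Flask sketch: 301 removed pages that have a close replacement,
# and 410 the ones that are gone for good. All paths and mappings below
# are hypothetical.
from flask import Flask, redirect, request

app = Flask(__name__)

# Hypothetical mapping of removed paths to their closest live replacements.
REDIRECTS = {"/old-thin-page": "/improved-category-page"}

# Hypothetical set of paths that were removed with no replacement.
GONE = {"/removed-page-1", "/removed-page-2"}

@app.errorhandler(404)
def handle_missing(error):
    path = request.path
    if path in REDIRECTS:
        # 301 tells search engines the move is permanent.
        return redirect(REDIRECTS[path], code=301)
    if path in GONE:
        # 410 says "intentionally removed", not just "not found".
        return "This page has been permanently removed.", 410
    return "Page not found.", 404
```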
Hope this helps
Anders
-
Hi,
I would look at this from two perspectives.
1. These thin pages could have been beefed up with some unique content, or at least rewritten to make them unique. Personally, I prefer to make duplicate pages unique instead of deleting them. This of course depends on the number of pages and the level of duplication.
2. Now that these pages have been removed from the website, you should remove all links to them from within the site, including the sitemap and internal links, so that search engines don't follow a link that ends in a 404 error (a quick way to check the sitemap is sketched below). You should also check whether any third-party sites still reference these pages.
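As a quick way to catch stale sitemap entries, something like this Python sketch could fetch the sitemap and flag any URL that now returns a 404 or 410 (the sitemap URL below is a placeholder; adjust it for the actual site):

```python
# Minimal sketch: fetch sitemap.xml and flag entries that now return
# 404/410, so they can be removed from the file. The sitemap URL is a
# placeholder.
import xml.etree.ElementTree as ET
import requests

SITEMAP_URL = "https://www.example.com/sitemap.xml"  # placeholder
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

sitemap = requests.get(SITEMAP_URL, timeout=10)
root = ET.fromstring(sitemap.content)

for loc in root.findall(".//sm:loc", NS):
    url = loc.text.strip()
    status = requests.head(url, allow_redirects=False, timeout=10).status_code
    if status in (404, 410):
        print(f"Stale sitemap entry: {url} ({status})")
```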
Best regards,
Devanur Rafi
find-a-recipe.php?cooking-method=fry&preperation-time=over+1+hour There is also pagination of search results, so the URL could also have the variable "start", e.g. find-a-recipe.php?course=salad&start=30 There can be any combination of these variables, meaning there are hundreds of possible search results URL variations. This all works well on the site, however it gives multiple "Duplicate Page Title" and "Duplicate Page Content" errors when crawled by SEOmoz. I've seached online and found several possible solutions for this, such as: Setting canonical tag Adding these URL variables to Google Webmasters to tell Google to ignore them Change the Title tag in the head dynamically based on what URL variables are present However I am not sure which of these would be best. As far as I can tell the canonical tag should be used when you have the same page available at two seperate URLs, but this isn't the case here as the search results are always different. Adding these URL variables to Google webmasters won't fix the problem in other search engines, and will presumably continue to get these errors in our SEOmoz crawl reports. Changing the title tag each time can lead to very long title tags, and it doesn't address the problem of duplicate page content. I had hoped there would be a standard solution for problems like this, as I imagine others will have come across this before, but I cannot find the ideal solution. Any help would be much appreciated. Kind Regards5