301 vs 410 redirect: What to use when removing a URL from the website
-
We are in the process of determining how to handle URLs that are completely removed from our website. Think of these as listings that have an expiration date (e.g. http://www.noodle.org/test-prep/tphU3/sat-group-course). What is the best practice for removing these listings (assuming not many people are linking to them externally)?
- 301 to a general page (e.g. http://www.noodle.org/search/test-prep)
- Do nothing and leave them up, but remove them from the sitemap (as they are no longer useful from a user perspective)
- Return a 404 or 410?
-
Hi Alexander,
If not a lot of people are linking to them AND if the page has an expiration date of sorts, you'd probably want to 410 the page.
404 and 410 both tell Google that the page no longer exists; however, the 410 (GONE) is more specific than the 404 (NOT FOUND).
The 410 "should" naturally get the page out of Google's index faster as well.
Or, if it is an ongoing thing, you could consider changing your URL to /group-course and then continually refreshing the content on that one page, letting the URL accumulate inbound links and drive traffic. I don't know exactly how you are using the page, but it's just an option.
Hope this helps.
Mike
-
A 404 is a good way to tell Google that a page is gone, which will then help it drop from the index. Removing it from a sitemap won't remove it from the index.
If you can't 301 to a page that makes sense for the user experience (for example, it'd be jarring to click on a SERP result for a bike tire and be 301'd to a page about women's bike shirts), then there's not a lot of value in doing so.
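To make the two options concrete, here is a minimal .htaccess sketch, assuming Apache and reusing the paths from the question:

# Option A: a genuinely relevant destination exists, so a 301 makes sense
Redirect 301 /test-prep/tphU3/sat-group-course http://www.noodle.org/search/test-prep

# Option B: no good destination exists, so tell crawlers the page is gone instead
# Redirect gone /test-prep/tphU3/sat-group-course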
Related Questions
-
Why has my website been removed from Bing?
I have a website that has recently been removed from Bing's index, but I can't figure out why. The website isn't new, and it is indexed just fine on Google. These are the steps I've tried:
- The website is verified in Bing Webmaster Tools, and I successfully submitted the sitemap.
- I tested the URL to ensure that Bingbot is allowed to crawl the site.
- I submitted URLs to Bing via the URL Submission tool.
- There isn't a "noindex" on the site preventing it from being indexed.
- When I do a URL Inspection, an error message comes up saying "The inspected URL is known to Bing but has some issues which are preventing us from serving it to our users. We recommend you to follow Bing Webmaster Guidelines."
- I contacted Bing to ask whether the website was removed in error, but received a reply that the website doesn't comply with Bing's quality guidelines; they wouldn't go into detail as to which guidelines the website isn't meeting.
The website URL is https://www.pardeehospital.org. Can anyone offer any advice or insight as to why Bing won't index our site? Thank you!
Intermediate & Advanced SEO | lindsey.steinkamp
-
301 redirect hops from non-https and www
It's best practice to minimize the number of 301 redirect hops; ideally there is only one. It's also best practice to 301 redirect (or at least canonical) your non-https and/or your non-www (or www) URLs to the canonical protocol/subdomain. The simplest (and possibly the most common) way to implement canonical protocol/subdomain redirects is through a load balancer, or before your app processes the request. Both of these will just blanket 301 to the canonical domain/protocol regardless of whether the path exists or not. In which case, you could have:
- Two hops, i.e. hop #1 from http://example.com/foo to https://example.com/foo, then hop #2 from https://example.com/foo to https://example.com/bar.
- A 301 to a 404. Let's say https://example.com/dog never existed, but somebody for whatever reason linked to it (maybe a typo). If I request https://www.example.com/dog, the load balancer would 301 to a 404 page.
Either scenario above should be fairly rare. However, you can't control how people link to you. Should I care about either scenario? I could have my app attempt to check if the page exists before forwarding, but that code could be complicated.
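For what it's worth, a single mod_rewrite block can usually collapse the protocol and www redirects into one hop. A minimal sketch, assuming the redirect is handled by Apache rather than the load balancer:

RewriteEngine On
# fire if the request is plain http OR the host starts with www...
RewriteCond %{HTTPS} off [OR]
RewriteCond %{HTTP_HOST} ^www\. [NC]
# ...and capture the host minus any www prefix into %1
RewriteCond %{HTTP_HOST} ^(?:www\.)?(.+)$
RewriteRule ^ https://%1%{REQUEST_URI} [R=301,L]

This removes the double hop but not the 301-to-404 case: the rule still fires whether or not the target path exists.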
Intermediate & Advanced SEO | dsbud
-
Two websites vs each other owned by same company
My client owns a brand and came to me with two ecommerce websites. One website sells his specific brand product, and the other sells general products in his niche (including his branded product). The question is that my client wants to rank each website for basically the same set of keywords. We have two choices I'd like feedback on:
- Choice 1 is to rank both websites for the same keyword groupings, so if they are both on page 1 of the SERPs they take up more real estate and share of voice. Are there any negative possibilities here?
- Choice 2 is to recommend a shift in the positioning of the general industry website, bringing it further away from the industry niche by focusing on different keywords so the two sites don't compete with each other in the SERPs.
I'm for choice 1. What about you?
Intermediate & Advanced SEO | Rich_Coffman
-
Moving half my website to a new website: 301?
Good Morning! We currently have two websites which are driving all of our traffic. Our end goal is to combine the two and fold them into each other. Can I redirect the duplicate content from one domain to our main domain even though the URLs are different? I'll give an example below (the domains are not the real domains). The CEO does not want to remove the other website entirely yet, but is willing to begin some sort of consolidation process. ABCaddiction.com is the main domain, which covers everything from drug addiction to dual diagnosis treatment. ABCdualdiagnosis.com is our secondary website, which covers everything as well. Can I redirect the entire drug addiction half of the website to ABCaddiction.com, with the eventual goal of moving everything together?
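A half-site move like this usually comes down to a path-prefix rule on the old domain. A minimal sketch, assuming Apache mod_alias and a hypothetical /drug-addiction/ section (the real prefix would come from the site's URL structure):

# on ABCdualdiagnosis.com: send only the drug-addiction section to the main domain, keeping the rest of each path
RedirectMatch 301 ^/drug-addiction/(.*)$ https://ABCaddiction.com/drug-addiction/$1

Everything outside that prefix stays put until the CEO is ready for the full consolidation.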
Intermediate & Advanced SEO | HashtagHustler
-
Avoiding Duplicate Content with Used Car Listings Database: Robots.txt vs Noindex vs Hash URLs (Help!)
Hi Guys, We have developed a plugin that allows us to display used vehicle listings from a centralized, third-party database. The functionality works similar to autotrader.com or cargurus.com, and there are two primary components:
1. Vehicle Listings Pages: this is the page where the user can use various filters to narrow the vehicle listings to find the vehicle they want.
2. Vehicle Details Pages: this is the page where the user actually views the details about said vehicle. It is served up via Ajax, in a dialog box on the Vehicle Listings Pages. Example functionality: http://screencast.com/t/kArKm4tBo
The Vehicle Listings pages (#1) we do want indexed and to rank. These pages have additional content besides the vehicle listings themselves, and those results are randomized or sliced/diced in different and unique ways. They're also updated twice per day. We do not want to index #2, the Vehicle Details pages, as these pages appear and disappear all of the time, based on dealer inventory, and don't have much value in the SERPs. Additionally, other sites such as autotrader.com, Yahoo Autos, and others draw from this same database, so we're worried about duplicate content. For instance, entering a snippet of dealer-provided content for one specific listing that Google indexed yielded 8,200+ results: Example Google query.
We did not originally think that Google would even be able to index these pages, as they are served up via Ajax. However, it seems we were wrong, as Google has already begun indexing them. Not only is duplicate content an issue, but these pages are not meant for visitors to navigate to directly! If a user were to navigate to the URL directly from the SERPs, they would see a page that isn't styled right. Now we have to determine the right solution to keep these pages out of the index: robots.txt, noindex meta tags, or hash (#) internal links.
Robots.txt Advantages:
- Super easy to implement
- Conserves crawl budget for large sites
- Ensures the crawler doesn't get stuck. After all, if our website only has 500 pages that we really want indexed and ranked, and vehicle details pages constitute another 1,000,000,000 pages, it doesn't seem to make sense to make Googlebot crawl all of those pages.
Robots.txt Disadvantages:
- Doesn't prevent pages from being indexed, as we've seen, probably because there are internal links to these pages. We could nofollow these internal links, thereby minimizing indexation, but this would lead to 10-25 nofollowed internal links on each Vehicle Listings page (will Google think we're pagerank sculpting?)
Noindex Advantages:
- Does prevent vehicle details pages from being indexed
- Allows ALL pages to be crawled (advantage?)
Noindex Disadvantages:
- Difficult to implement (vehicle details pages are served using Ajax, so they have no page of their own in which to place a meta robots tag). The solution would have to involve the X-Robots-Tag HTTP header and Apache, sending a noindex header based on querystring variables, similar to this stackoverflow solution. This means the plugin functionality is no longer self-contained, and some hosts may not allow these types of Apache rewrites (as I understand it).
- Forces (or rather allows) Googlebot to crawl hundreds of thousands of noindex pages. I say "force" because of the crawl budget required. The crawler could get stuck/lost in so many pages, and may not like crawling a site with 1,000,000,000 pages, 99.9% of which are noindexed.
- Cannot be used in conjunction with robots.txt. After all, the crawler never reads the noindex meta tag if blocked by robots.txt.
Hash (#) URL Advantages:
- By using hash (#) URLs for the links from Vehicle Listings pages to Vehicle Details pages (such as "Contact Seller" buttons), coupled with Javascript, the crawler won't be able to follow/crawl these links.
- Best of both worlds: crawl budget isn't overtaxed by thousands of noindex pages, and the internal links used to index robots.txt-disallowed pages are gone.
- Accomplishes the same thing as "nofollowing" these links, but without looking like pagerank sculpting (?)
- Does not require complex Apache stuff
Hash (#) URL Disadvantages:
- Is Google suspicious of sites with (some) internal links structured like this, since it can't crawl/follow them?
Initially, we implemented robots.txt, the "sledgehammer solution." We figured that we'd have a happier crawler this way, as it wouldn't have to crawl zillions of partially duplicate vehicle details pages, and we wanted it to be like these pages didn't even exist. However, Google seems to be indexing many of these pages anyway, probably based on internal links pointing to them. We could nofollow the links pointing to these pages, but we don't want it to look like we're pagerank sculpting or something like that. If we implement noindex on these pages (and doing so is a difficult task itself), then we will be certain these pages aren't indexed. However, to do so we will have to remove the robots.txt disallowal, in order to let the crawler read the noindex tag on these pages. Intuitively, it doesn't make sense to me to make Googlebot crawl zillions of vehicle details pages, all of which are noindexed, and it could easily get stuck/lost/etc. It seems like a waste of resources, and in some shadowy way bad for SEO. My developers are pushing for the third solution: using the hash URLs. This works on all hosts and keeps all functionality in the plugin self-contained (unlike noindex), and conserves crawl budget while keeping vehicle details pages out of the index (unlike robots.txt). But I don't want Google to slap us 6-12 months from now because it doesn't like links structured like these. Any thoughts or advice you guys have would be hugely appreciated, as I've been going in circles, circles, circles on this for a couple of days now. Also, I can provide a test site URL if you'd like to see the functionality in action.
Intermediate & Advanced SEO | browndoginteractive
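As a footnote to the X-Robots-Tag idea raised in the question above, here is a minimal sketch, assuming Apache 2.4 with mod_headers and a hypothetical querystring parameter (vehicleId) that marks Vehicle Details requests:

# send a noindex header whenever the (hypothetical) vehicle-details parameter is present
<If "%{QUERY_STRING} =~ /vehicleId=/">
    Header set X-Robots-Tag "noindex"
</If>

-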
.htaccess 301 Redirect Help! Specific Redirects and Blanket Rule
Hi there, I have the following domains:
OLD DOMAIN: domain1.co.uk
NEW DOMAIN: domain2.co.uk
I need to create a .htaccess file that 301 redirects specific, individual pages on domain1.co.uk to domain2.co.uk. I've searched for hours to try and find a solution, but I can't find anything that will do what I need. The pages on domain1.co.uk have all kinds of filenames and extensions, but they will be redirected to a WordPress website that has a clean folder structure. Some example URLs to be redirected from the old website:
http://www.domain1.co.uk/charitypage.php?charity=357
http://www.domain1.co.uk/adopt.php
http://www.domain1.co.uk/register/?type=2
These will need to be redirected to the following URLs on the new domain:
http://www.domain2.co.uk/charities/
http://www.domain2.co.uk/adopt/
http://www.domain2.co.uk/register/
I would also like a blanket/catch-all redirect from anything else on www.domain1.co.uk to the homepage of www.domain2.co.uk if there isn't a specific individual redirect in place. I'm literally tearing my hair out with this, so any help would be greatly appreciated! Thanks
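For what it's worth, a minimal .htaccess sketch of this pattern on domain1.co.uk, assuming Apache mod_rewrite (the trailing "?" on a target URL drops the old query string):

RewriteEngine On
# specific redirects first
RewriteRule ^charitypage\.php$ http://www.domain2.co.uk/charities/? [R=301,L]
RewriteRule ^adopt\.php$ http://www.domain2.co.uk/adopt/ [R=301,L]
RewriteRule ^register/?$ http://www.domain2.co.uk/register/? [R=301,L]
# catch-all: anything not matched above goes to the new homepage
RewriteRule ^ http://www.domain2.co.uk/ [R=301,L]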
Intermediate & Advanced SEO | Townpages
-
301 Redirection and apostrophes in URLs
Hi, I am experiencing trouble getting any redirects with apostrophes in the URLs to 301 redirect in order to eliminate 404 errors. I have tried replacing the instance of the apostrophe in the source URL field with %27, and variations of this, but to no avail. The site is a WordPress site (the old URLs are legacies from the old Business Catalyst site) and I am using the Redirection plugin. I have gone into some detail with a helpful soul here: http://wordpress.org/support/topic/how-to-deal-with-apostrophes-in-source-url but unfortunately to no result. If anyone has any idea how to solve this puzzle I would be grateful for the help. Example: http://www.tesselaars.com/blog/Inside_Flowers/post/Online_Marketing_for_Florists_Part_1%E2%80%93_A_Website_You_Won%27t_Regret/
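One .htaccess angle worth knowing here: Apache compares redirect patterns against the already-decoded URL path, so %27 written in a source pattern never matches; the raw, still-encoded request is only visible in %{THE_REQUEST}. A minimal sketch, assuming Apache mod_rewrite and using the example URL above (the target is illustrative):

# match the percent-encoded apostrophe in the raw request line, not the decoded path
RewriteCond %{THE_REQUEST} Won%27t_Regret [NC]
RewriteRule ^ http://www.tesselaars.com/blog/ [R=301,L]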
Intermediate & Advanced SEO | Seamoose
-
Htaccess Redirect with %C2%A0 in URL
Below is my setup for redirects in the .htaccess file in my root WordPress installation. The www to non-www redirect works well, so no problems there. Other page redirects work well, too (example: redirect 301 /some-page/ http://mysite.com/another-page/). I didn't post those because I have a few too many :) So here it goes:

RewriteEngine On
RewriteCond %{HTTP_HOST} ^www\.mysite\.com$ [NC]
RewriteRule ^(.*)$ http://mysite.com/$1 [R=301,L]

# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress

redirect 301 /archives/10-college- majors/ http://mysite.com/archives/10-college-majors/
redirect 301 /archives/10-college-%20majors/ http://mysite.com/archives/10-college-majors/
redirect 301 /archives/10-college-%C2%A0majors/ http://mysite.com/archives/10-college-majors/

I'm having a problem with the last 301 redirect (redirect 301 /archives/10-college-%C2%A0majors/ http://mysite.com/archives/10-college-majors/) not working. As you can see, I've tried using other variations of the "space", but no go. I also used a redirect in cPanel's Redirect screen, testing all the possible options plus a wildcard. I've also tried this: http://serverfault.com/questions/201829/using-special-characters-in-apache-mod-rewrite-rule (perhaps unsuccessfully, because it caused a 500 server error, and it's a different situation in my case). I also saw something here: http://www.webmasterworld.com/apache/3908682.htm but I don't know if it works, or how I would implement it without compromising ALL other redirects. Note: the URL displays with a space in the address bar of all major web browsers (http://mysite.com/10-college- majors/) and goes to a 404 page. I have a gorgeous page / PR6 / high-authority site linking to the URL on my site, but they copied the URL with a space somehow. I contacted the person responsible for the website and he claims it works fine (aka he didn't check it). Is there a clean way to redirect ONLY this problematic URL without compromising other redirects, etc.? Any ideas would be great. I'll respond with progress. Thanks in advance.

UPDATE: The redirect works, and it did work. Even so, when looking at the source of the page linking to mine, the URL looks like this: http://mysite.com/archives/10-college- majors/ Clicking the URL in Source View in Firefox takes me to http://mysite.com/archives/10-college-%C2%A0majors/ and none of my 301 redirects should direct there. I don't have any redirect plugins either.
Intermediate & Advanced SEO | pepsimoz
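On the %C2%A0 problem above: mod_alias compares redirect 301 paths against the already-decoded URL, so a non-breaking space is awkward to express in the pattern. One workaround is to match the raw, still-encoded request line instead. A minimal sketch, assuming Apache mod_rewrite and reusing the paths from the question:

# %{THE_REQUEST} holds the raw request line, so %C2%A0 appears in it literally
RewriteCond %{THE_REQUEST} /archives/10-college-%C2%A0majors/ [NC]
RewriteRule ^ http://mysite.com/archives/10-college-majors/ [R=301,L]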