Should old pages that have been 301 redirected but have no/minimal traffic be deleted?
-
In other words, I have pages from years ago that are redirected, but how can I tell whether traffic still flows through them? And if there is little or no traffic, should the 301s be deleted?
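If you have access to your server's raw access logs, you can measure this directly: any request answered with a 301 status is a visitor (or bot) still flowing through the old URL. A minimal sketch in Python, using a made-up excerpt in combined log format (the paths and IPs are hypothetical; in practice you'd read your real log file):

```python
import re
from collections import Counter

# Hypothetical excerpt in combined log format; read your real access
# log file instead.
SAMPLE_LOG = """\
203.0.113.5 - - [01/Mar/2024:10:00:01 +0000] "GET /old-page HTTP/1.1" 301 0
203.0.113.9 - - [01/Mar/2024:10:00:02 +0000] "GET /current-page HTTP/1.1" 200 5120
198.51.100.7 - - [01/Mar/2024:10:05:44 +0000] "GET /old-page HTTP/1.1" 301 0
"""

LINE_RE = re.compile(r'"(?:GET|HEAD) (\S+) [^"]*" (\d{3})')

def count_redirect_hits(log_text):
    """Count requests per path that were answered with a 301."""
    hits = Counter()
    for line in log_text.splitlines():
        m = LINE_RE.search(line)
        if m and m.group(2) == "301":
            hits[m.group(1)] += 1
    return hits

print(count_redirect_hits(SAMPLE_LOG))  # Counter({'/old-page': 2})
```

If the counts stay at zero over a long window, the redirect is effectively unused.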
Linck
-
Nick,
Thanks a lot for your advice. I will leave them in place and live with the clutter. Thanks again!
Linck
-
Hi Linck,
There's really no good reason to delete a 301, in my opinion. I've deleted them in the past for the same reason, and you'll get crawl errors in Google and other engines. Are the 301'd pages still indexed in the search engines? Do they show up in Google Webmaster Tools reports for crawled pages? Even if they don't, there's really no reason besides minimizing clutter to delete a 301. A 301 redirect means "permanently moved," but if somebody is still hitting the old page, you'd want them to know that your content has moved, obviously. Here's a quick video from Google if you're interested in 301s: https://support.google.com/webmasters/answer/93633?hl=en
Good luck,
-Nick
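Nick's point about crawl errors can be illustrated with a toy routing model: while the 301 entry exists, an old link forwards visitors to the new URL; once it's removed, the same link dead-ends in a 404. The map and paths below are invented for illustration:

```python
# Toy routing model: a 301 map plus the set of live pages. Paths are
# hypothetical; this just shows what a visitor holding an old link sees.
REDIRECTS = {"/old-page": "/new-page"}
LIVE_PAGES = {"/new-page", "/home"}

def respond(path, redirects):
    """Return the (status, target) a request for `path` would receive."""
    if path in redirects:
        return 301, redirects[path]   # visitor is forwarded to the new URL
    if path in LIVE_PAGES:
        return 200, path
    return 404, None                  # dead end: this is the crawl error

print(respond("/old-page", REDIRECTS))  # (301, '/new-page')
print(respond("/old-page", {}))         # (404, None) once the 301 is deleted
```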
Related Questions
-
Does changing a product page from "example.com" to "example.com/keyword" affect SEO and ranking?
We're planning to move the page from "example.com" to "example.com/keyword" and add new content to the "example.com" page. Does this change affect our ranking? If so, how can we overcome this problem? Can anyone help?
On-Page Optimization | Mohamednatheem
-
Why are http and https pages showing different domain/page authorities?
My website www.aquatell.com was recently moved to the Shopify platform. We chose to keep the http domain, because we didn't want to change too much, too quickly by moving to https; only our shopping cart uses the https protocol. We noticed, however, that https versions of our non-cart pages were being indexed, so we created canonical tags to point the https version of a page to the http version. What's got me puzzled, though, is that when I use Open Site Explorer to look at domain/page authority values, I get different scores for the http vs. https version, and the https version is always better. Example: http://www.aquatell.com DA = 21 and https://www.aquatell.com DA = 27. Can somebody please help me make sense of this? Thanks,
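One way to sanity-check a setup like the one described above is to parse each https page and confirm its canonical tag points at the http version. A sketch using Python's standard-library HTML parser, with a made-up page modeled on the question (the href shown is an assumption):

```python
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Records the href of the first <link rel="canonical"> tag seen."""
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and a.get("rel") == "canonical" and self.canonical is None:
            self.canonical = a.get("href")

# Hypothetical https page that canonicalizes to its http twin.
PAGE = """<html><head>
<link rel="canonical" href="http://www.aquatell.com/" />
</head><body>...</body></html>"""

finder = CanonicalFinder()
finder.feed(PAGE)
print(finder.canonical)  # http://www.aquatell.com/
```

Running this over a sample of https URLs would confirm the canonicals are in place wherever scores diverge.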
On-Page Optimization | Aquatell
-
Listing all services on one page vs separate pages per service
My company offers several generalized categories with more specific services underneath each category. Currently the way it's structured is if you click "Voice" you get a full description of each voice service we offer. I have a feeling this is shooting us in the foot. Would it be better to have a general overview of the services we offer on the "Voice" page that then links to the specified service? The blurb about the service on the overview page would be unique, not taken from the actual specific service's page.
On-Page Optimization | AMATechTel
-
On page links
Hi, I am really intrigued by Bloomberg's strategy. If you look at their article pages, they are full of internal links, done with what I assume to be an automated process (too many pages to be done manually). It seems to work for them. I would love to hear your opinions.
On-Page Optimization | ciznerguy
http://www.bloomberg.com/news/2014-11-26/uber-said-close-to-raising-funding-at-up-to-40b-value.html
-
Too many page links warning... but each link has a canonical back to the main page? Is my page OK?
The Moz crawl warns me that many of my pages have too many links. For example, this page http://www.webjobz.com/jobs/industry/Accounting has 269 links, but many of them are like this: /jobs/jobtitles/Accounting?k=&w=3&hiddenLocationID=463170&depth=2 and are used to refine search criteria. When you click on those links, they all have a canonical link back to http://www.webjobz.com/jobs/industry/Accounting. Is my page being punished for this? Do I have to put "nofollow" tags on every link I don't want the bots to follow, and if I do so, will Roger (the Moz bot) stop counting them as links?
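A rough way to separate "real" links from faceted-search refinement links like those above is to flag any URL carrying filter-style query parameters. A sketch using the link patterns from the question (the list is abbreviated and partly hypothetical):

```python
from urllib.parse import urlparse, parse_qs

# Abbreviated, partly hypothetical link list modeled on the question above.
LINKS = [
    "/jobs/industry/Accounting",
    "/jobs/jobtitles/Accounting?k=&w=3&hiddenLocationID=463170&depth=2",
    "/jobs/jobtitles/Accounting?k=&w=3&hiddenLocationID=463171&depth=2",
]

def is_facet_link(url):
    """Flag links that carry filter-style query parameters."""
    return bool(parse_qs(urlparse(url).query, keep_blank_values=True))

facets = [u for u in LINKS if is_facet_link(u)]
print(len(facets), "of", len(LINKS), "links are facet links")
```

Counting only the non-facet links gives a truer picture of how many distinct pages a crawler is actually being asked to follow.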
On-Page Optimization | Webjobz
-
Redirecting to homepage ok?
I deleted a bunch of category pages (mostly renamed them, actually; I thought they'd be auto-redirected like my blog posts are, but they weren't), so I used a plugin that reroutes any 404 page to the homepage. Is that the best thing to do in this situation? Google Webmasters says there are about 84 404 errors, and this should get rid of those, right? Is there anything SEO-bad about doing it this way?
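An alternative to redirecting every 404 to the homepage (which search engines tend to treat as a soft 404) is a one-to-one map from each renamed category URL to its new location. A sketch that generates Apache mod_alias `Redirect 301` lines from such a map; the slugs here are hypothetical:

```python
# One-to-one 301 map for renamed category URLs (slugs are hypothetical).
OLD_TO_NEW = {
    "/category/old-deals": "/category/daily-deals",
    "/category/old-coupons": "/category/coupons",
}

def htaccess_rules(mapping):
    """Emit one Apache mod_alias 'Redirect 301' line per renamed URL."""
    return "\n".join(
        f"Redirect 301 {old} {new}" for old, new in sorted(mapping.items())
    )

print(htaccess_rules(OLD_TO_NEW))
# Redirect 301 /category/old-coupons /category/coupons
# Redirect 301 /category/old-deals /category/daily-deals
```

Pages that are genuinely gone, with no replacement, are usually better left to 404 (or 410) than funneled to the homepage.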
On-Page Optimization | dealblogger
-
Locating Duplicate Pages
Hi,

Our website consists of approximately 15,000 pages; however, according to our Google Webmaster Tools account, Google has around 26,000 pages for us in its index. I have run through half a dozen sitemap generators, and they all only discover the 15,000 pages that we know about. I have also thoroughly gone through the site to attempt to find any sections where we might be inadvertently generating duplicate pages, without success.

It has been over six months since we made any structural changes (at which point we put 301s in place to the new locations), so I'd like to think that the majority of these old pages have been removed from the Google index. Additionally, the number of pages in the index doesn't appear to be going down by any discernible amount week on week.

I'm fairly sure it's nothing to worry about; however, for my own peace of mind I'd like to confirm that the additional 11,000 pages are just old results that will eventually disappear from the index and that we're not generating any duplicate content. Unfortunately, there doesn't appear to be a way to download a list of the 26,000 pages that Google has indexed so that I can compare it against our sitemap. Obviously, I know about site:domain.com, but this only returns the first 1,000 results, which all check out fine.

I was wondering if anybody knew of any methods or tools we could use to identify these 11,000 extra pages in the Google index, so we can confirm that they're just old pages which haven't fallen out of the index yet and aren't going to cause us a problem? Thanks guys!
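If you can assemble even a partial list of indexed URLs (for example, by exporting site: results in batches), a simple set difference against your sitemap will surface the extra pages. A sketch with hypothetical URLs:

```python
# Hypothetical URL sets; INDEXED_URLS would come from exported site: results
# or Webmaster Tools, SITEMAP_URLS from your sitemap.xml.
SITEMAP_URLS = {
    "https://example.com/",
    "https://example.com/products",
}
INDEXED_URLS = {
    "https://example.com/",
    "https://example.com/products",
    "https://example.com/products?sessionid=123",  # likely duplicate page
}

# Indexed URLs that are not in the sitemap: candidates for the extra 11,000.
extras = INDEXED_URLS - SITEMAP_URLS
print(sorted(extras))  # ['https://example.com/products?sessionid=123']
```

Patterns in the extras (session IDs, tracking parameters, old paths) usually make the source of the inflation obvious.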
On-Page Optimization | ChrisHolgate
-
Home page ranking dropped below internal pages
The index page for a site I manage has dropped significantly - internal pages rank above it. It's a new site, 2 months old but was ranking at 1st. Any suggestions as to how I can debug this?
On-Page Optimization | OptioPublishing