I know I'm missing pages with my page-level 301 redirects. What can I do?
-
I am implementing page-level redirects for a large site, but I know that I will inevitably miss some pages. Is there an additional safety-net, root-level redirect that I can use to catch these pages and send them to the homepage?
-
It really depends on the platform you're on and the way the page-level redirects are set up, but if you list all the rules for the existing pages first, you can always add a catch-all redirect at the very end. If implemented properly, anything left over should fall through to that final rule.
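If the rules live in an Apache .htaccess file, for instance, the ordering might look something like this — a sketch only, where the old/new paths and the domain are hypothetical, and mod_rewrite is assumed to be available:

```apache
RewriteEngine On

# Specific page-level 301s first (hypothetical old/new paths)
RewriteRule ^old-category/old-page$ https://www.example.com/new-page [R=301,L]
RewriteRule ^about-us\.html$ https://www.example.com/about [R=301,L]

# Catch-all safety net at the very end: any request that doesn't
# match a real file or directory gets 301'd to the homepage.
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^ https://www.example.com/ [R=301,L]
```

One trade-off worth noting: Google can treat a blanket 301 to the homepage as a soft 404 when the homepage isn't a relevant replacement for the old URL, which is part of why letting truly dead pages 404 is sometimes the better call.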
The alternative is to build a custom 404 handler that actually issues a 301 redirect to the new site.
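Framework aside, the core decision logic of such a handler is small. A minimal sketch in Python — the URLs, the mapping, and the function name here are all hypothetical, not from the thread: when a request would otherwise 404, look it up in the redirect map and fall back to the homepage.

```python
# Decision logic for a 404 handler that issues 301s instead:
# known old URLs go to their mapped new targets, and anything
# left over falls back to the homepage as the safety net.

REDIRECT_MAP = {
    "/old-about.html": "/about",      # hypothetical mappings
    "/old-contact.html": "/contact",
}
HOMEPAGE = "/"

def resolve_missing_url(path):
    """Return (status, location) for a request that found no page."""
    return 301, REDIRECT_MAP.get(path, HOMEPAGE)

print(resolve_missing_url("/old-about.html"))   # → (301, '/about')
print(resolve_missing_url("/forgotten-page"))   # → (301, '/')
```

In a real deployment this function would be wired into whatever your platform uses for its 404 hook, with the response sent as an HTTP 301 and a Location header.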
I'd agree with this post, though - if the content really is dead, in some cases it's better to let it 404:
http://www.seroundtable.com/archives/022739.html
If you're really starting over, and for pages that aren't very active (no links, very little traffic), it can make more sense to clean things up. There's no one-size-fits-all answer - it depends a lot on the scope of the site and the nature of the change.
-
So there is no way to put some kind of catch-all redirect in place without doing it at the root and redirecting everything to the homepage?
-
ErrorDocument 404 /
In the .htaccess. Or spider the frick out of the site with Screaming Frog. There's an article on this:
http://www.seomoz.org/blog/8-ways-to-find-old-urls-after-a-failed-site-migration-whiteboard-friday
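If you do crawl the old URL list, a quick way to triage the results is to filter the crawl export for anything that still 404s instead of redirecting. A rough sketch, assuming a CSV export (e.g. a Screaming Frog "Internal" export) with "Address" and "Status Code" columns — the column names, URLs, and function name are assumptions for illustration:

```python
import csv
import io

# Hypothetical crawl export: one row per crawled URL with the
# HTTP status code the crawler observed.
EXPORT = """\
Address,Status Code
https://example.com/old-page-1,301
https://example.com/forgotten-page,404
https://example.com/old-page-2,301
https://example.com/another-miss,404
"""

def urls_needing_redirects(csv_text):
    """Return the URLs that still 404 instead of redirecting."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row["Address"] for row in reader if row["Status Code"] == "404"]

print(urls_needing_redirects(EXPORT))
# → ['https://example.com/forgotten-page', 'https://example.com/another-miss']
```

Each URL this surfaces is a missed page: add a specific rule for it, and let the catch-all handle whatever the crawl itself didn't find.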
Related Questions
-
Crawl solutions for landing pages that don't contain a robots.txt file?
My site (www.nomader.com) is currently built on Instapage, which does not offer the ability to add a robots.txt file. I plan to migrate to a Shopify site in the coming months, but for now the Instapage site is my primary website. In the interim, would you suggest that I manually request a Google crawl through the search console tool? If so, how often? Any other suggestions for countering this Meta Noindex issue?
Technical SEO | Nomader1
-
Fetch and Render misses middle chunk of page
Hey folks, I was checking out a site with Search Console's "Fetch and Render" function and found something potentially worrisome. A big chunk of the middle of the page (the homepage) shows up as empty space in the preview render window. The site isn't doing so hot in terms of rankings, and I'm wondering if this issue is causing it (since it could indicate that 80% of the copy on the homepage is invisible to Google). A few other details: The specific content isn't showing in either view - both the "What Google sees" and "What the visitor sees" are missing this chunk of the page. The content IS visible in cached versions of the page. The html for the content seems to be in the source. The "Fetch" part returns "Complete" as opposed to "Partial", so I don't THINK it's a matter of javascript stuff getting blocked by robots.txt. This website was built using the Wordpress theme "Suco", and the parts of the page that aren't rendering are all built with the Themify Builder tool. Not ALL of the Themify Builder elements are showing up as blank - there's a slider element that's rendering just fine. Any ideas on what could cause whole portions of a page not to show up in Fetch and Render? Thanks!
Technical SEO | BrianAlpert780
-
Why are my 301 redirects and duplicate pages (with canonicals) still showing up as duplicates in Webmaster Tools?
My guess is that in time Google will realize that my duplicate content is not actually duplicate content, but in the meantime I'd like to get your feedback. The reporting in Webmaster Tools looks something like this:

Duplicates
/url1.html
/url2.html
/url3.html
/category/product/url.html
/category2/product/url.html

url3.html is the true canonical page in the list above. url1.html and url2.html are old URLs that 301 to url3.html. So, it seems my bases are covered there. /category/product/url.html and /category2/product/url.html do not redirect; they are the same page as url3.html. Each of the category URLs has a canonical URL of url3.html in the header. So, it seems my bases are covered there as well. Can I expect Google to pick up on this? Why wouldn't it understand this already?
Technical SEO | bearpaw0
-
How to create a sitemap for a large site (ecommerce type) that has 1,000s if not 100,000s of pages.
I know this is kind of a newbie question, but I am having an amazing amount of trouble creating a sitemap for our site Bestride.com. We just did a complete redesign (look and feel, functionality, the works) and now I am trying to create a sitemap. Most of the generators I have used "break" after reaching some number of pages. I am at a loss as to how to create the sitemap. Any help would be greatly appreciated! Thanks
Technical SEO | BestRide0
-
Product Level 301 Redirects Best Practice
When creating a 301 mapping file for product pages, what is best practice for which version of the URL to redirect to? Base directory or one subdirectory/category path? Example old URL: www.example.com/clothing/pants/blue-pants-123. Which of the following should be the new target URL:

www.example.com/apparel/pants/blue-pants-123
www.example.com/apparel/blue-apparel/blue-pants-123
www.example.com/apparel/collections/spring-collection/blue-pants-123
www.example.com/blue-pants-123

This is assuming the canonical tag will be www.example.com/blue-pants-123. Also, if www.example.com/blue-pants-123 cannot be reached via site navigation, would it be detrimental to make that the target URL if Google cannot crawl that naturally? Thanks
Technical SEO | Bucktown
-
I have a 404 error on my site i can't find.
I have looked everywhere. I thought it might have just shown up while making some changes, so while in Webmaster Tools I said it was fixed... it's still there. Even Moz Pro found it. The error is http://mydomain.com/mydomain.com. No idea how it even happened; I thought it might be a plugin problem. Any ideas how to fix this?
Technical SEO | NateStewart0
-
Can't for the life of me figure out how this is possible !! Any ideas ?
I would imagine it's not all that easy to rank on the 1st page (not going for 1st position here) for https://www.google.com.au/search?q=credi+cards. I am looking at the AU market. For some reason which I can't figure out, the Everyday Money Credit Card (https://www.woolworthsmoney.com.au/) ranks number 4. The homepage redirects to https://www.woolworthsmoney.com.au/wowm/wps/wcm/connect/wowmoney/wowmoney/home/home/. Why have your homepage in this format? I would love to hear any theories you guys might have. It does not look like they have a strong link profile, and I could not figure out how old the domain is or what other possible reason there is for the site to rank.
Technical SEO | RuchirP0
-
I just found something weird I can't explain, so maybe you guys can help me out.
I just found something weird I can't explain, so maybe you guys can help me out. In Google (http://www.google.nl/#hl=nl&q=internet), the number 3 result is a big telecom provider in the Netherlands called Ziggo. The ranking URL is https://www.ziggo.nl/producten/internet/. However, if you click on it you'll be directed to https://www.ziggo.nl/#producten/internet/. HttpFox in FF, however, is not showing any redirects - just a 200 status code. The URL https://www.ziggo.nl/#producten/internet/ contains a hash, so the canonical URL should be https://www.ziggo.nl/. I can understand that. But why is Google showing the title and description of https://www.ziggo.nl/producten/internet/ when the canonical URL clearly is https://www.ziggo.nl/? Can anyone confirm my guess that Google is using the bulk SEO value (link juice/authority) of the homepage at https://www.ziggo.nl/ because of the hash, but using the relevant content of https://www.ziggo.nl/producten/internet/, resulting in a top position for the keyword "internet"?
Technical SEO | NEWCRAFT0