How do you find the temporary redirects on an existing site you don't control?
-
I am getting ready to move a client's site from another company. They have around 35 temporary redirects according to Moz.
The question is: how can I find the current redirects so I can update everything for the new site? Do I need access to the current .htaccess file to do this?
-
You can find the 35 temporary redirects that Moz reports using the Screaming Frog tool. You'll see the redirects for individual links under the "Response Codes" tab. Look for the "Redirect URI" column.
The fastest way to find all of the redirects is to go to "Reports" > "Redirect Chains." This will show all the redirects on the site. I think you have to purchase a license for this feature.
If you are trying to find redirects that have been set up for incoming links from external sites, you'll have to access the .htaccess file. I also do a site:domain.com search in Google just to see if there are old links still in the index. Then keep an eye on 404 errors in Google Webmaster Tools after the site launches.
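If you already have a list of the old URLs (from a crawl, your analytics, or the site: search above), you can also spot-check them yourself. Here's a rough Python sketch of that idea; it assumes the requests library is installed, and the URLs are just placeholders:

```python
# Rough sketch: check known URLs for redirects without following them.
# Assumes the "requests" library is installed; the URLs below are placeholders.
import requests

urls_to_check = [
    "http://www.example.com/old-page",
    "http://www.example.com/old-category/old-product",
]

for url in urls_to_check:
    response = requests.get(url, allow_redirects=False, timeout=10)
    if response.status_code in (301, 302, 303, 307, 308):
        target = response.headers.get("Location", "(no Location header)")
        kind = "permanent" if response.status_code in (301, 308) else "temporary"
        print(f"{url} -> {response.status_code} ({kind}) -> {target}")
    else:
        print(f"{url} -> {response.status_code} (no redirect)")
```

Anything that comes back 302 or 307 is a temporary redirect, and the Location header tells you where it currently points.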
-
Thank you, nice tool, but I don't see where they are redirecting to?
http://screencast.com/t/B4ocR5dAiB
I am redoing this site that someone else built, and the URLs will be changing a bit to be more SEO-friendly, so I should permanently redirect (301) all of his previous URLs to the new ones, correct, in case any blog articles are floating around out there pointing back to the old ones?
I was also looking for the current redirects so I could update them.
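For what it's worth, my rough plan (placeholder paths below, and assuming the site stays on Apache so .htaccess redirects are an option) is to keep the old-to-new mapping in one place and generate the 301 lines from it, something like:

```python
# Rough sketch: build .htaccess 301 lines from an old-to-new URL mapping.
# All paths below are placeholders for the real old and new URLs.
old_to_new = {
    "/old-services.php": "/services/",
    "/about_us.html": "/about/",
    "/page?id=12": "/blog/seo-friendly-post/",
}

for old_path, new_path in old_to_new.items():
    if "?" in old_path:
        # A plain "Redirect" matches only the URL path; query strings need mod_rewrite.
        print(f"# {old_path} has a query string - handle with RewriteCond/RewriteRule")
    else:
        print(f"Redirect 301 {old_path} {new_path}")
```

Then after launch I'd re-check the old URLs to make sure each one 301s to the right new page.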
-
Was going to suggest the same thing!
-
Use a tool like Screaming Frog to crawl the site. You'll be able to see the response code for each page and the URL it redirects to. A temporary redirect will have a 302 status code.
-
You can find the redirects in two ways: one is the .htaccess file, and the other is the hosting control panel. Once you log in, click on "Redirects" and you will see which redirects are set up for the website and which pages they apply to.
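If you do get access to the .htaccess file, one quick way to pull out the existing rules is to scan it for Redirect and RewriteRule lines. Here's a minimal sketch in Python; it only catches simple single-line directives and assumes the .htaccess file sits in the current directory:

```python
# Minimal sketch: list the redirect rules found in an .htaccess file.
# Only catches simple single-line directives; the file path is a placeholder.
import re

redirect_directive = re.compile(
    r"(?i)^(Redirect|RedirectMatch|RedirectTemp|RedirectPermanent)\b"
)
rewrite_redirect_flag = re.compile(r"\[(?:[^\]]*,)?\s*R(=\d{3})?\s*(?:,[^\]]*)?\]")

with open(".htaccess") as htaccess:
    for line_number, raw_line in enumerate(htaccess, start=1):
        line = raw_line.strip()
        if not line or line.startswith("#"):
            continue
        if redirect_directive.match(line):
            print(f"line {line_number}: {line}")
        elif line.lower().startswith("rewriterule") and rewrite_redirect_flag.search(line):
            # A RewriteRule with an [R] / [R=301] / [R=302] flag is also a redirect.
            print(f"line {line_number}: {line}")
```

Keep in mind that a bare Redirect with no status code defaults to a temporary (302) redirect, as does a RewriteRule [R] flag without a code, so those are the rules to review and update before the move.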
-
Related Questions
-
Will a robots.txt 'disallow' of a directory keep Google from seeing 301 redirects for pages/files within the directory?
Hi - I have a client that had thousands of dynamic PHP pages indexed by Google that shouldn't have been. He has since blocked these PHP pages via a robots.txt disallow. Unfortunately, many of those PHP pages were linked to by high-quality sites multiple times (instead of the static URLs) before he put up the PHP 'disallow'. If we create 301 redirects for some of these PHP URLs that are still showing high-value backlinks and send them to the correct static URLs, will Google even see these 301 redirects and pass link value to the proper static URLs? Or will the robots.txt keep Google away and we lose all these high-quality backlinks? I guess the same question applies if we use the canonical tag instead of the 301. Will the robots.txt keep Google from seeing the canonical tags on the PHP pages? Thanks very much, V
Technical SEO | Voodak0
-
Matt Cutts says to 404 unavailable products on the 'average' ecommerce site.
If you're an ecommerce site owner, will you be changing how you deal with unavailable products as a result of the recent video from Matt Cutts? Will you be moving over to a 404 instead of leaving the pages live? For us, as more products were becoming unavailable, I had started to worry about the impact of this on the website (bad user experience, Panda issues from bounce rates, etc.). But, having spoken to other website owners, some say it's better to leave the unavailable product pages there as this offers more value (they rank well so attract traffic and links, and it allows you to get the product back up quickly if it unexpectedly becomes available, etc.). I guess there are many solutions, for example using ItemAvailability schema, that might be better than a 404 (custom or not). But then, if it's showing as unavailable in the SERPs, will anyone bother clicking on it anyway? Would be interested in your thoughts.
Technical SEO | Coraltoes770
-
Why can't I change my 302 redirects to 301s?
I've been advised by IT that due to the structure of our website (they don't use sub-folders) it's not possible to change 302s to 301s. Is this correct, or am I being fobbed off?
Technical SEO | lindsaytuerena0
-
Why isn't my site searchable on Google?
I am having a hard time figuring out why, when I search for my website name, it doesn't show up in Google's search results. Here's a link to my site. I've been twiddling for days looking for answers in my Google Webmaster Tools. Here's a link to the crawl stats from Google Webmaster Tools. As you can see, it is actually crawling some pages. However, looking at my index status, I am getting 0, as you can see here (http://cl.ly/image/3G1R1p0b3k1P). I've double-checked my robots.txt and nothing seemed to be out of the ordinary there. I am not blocking anything. Any ideas why?
Technical SEO | herlamba0
-
Duplicate page errors from pages that don't even exist
Hi, I am having this issue within SEOmoz's Crawl Diagnostics report. There are a lot of crawl errors for pages that don't even exist. My website has around 40-50 pages, but the report shows that 375 pages have been crawled. My guess is that the errors have something to do with my recent .htaccess configuration. I recently configured my .htaccess to add a trailing slash at the end of URLs. There is no internal linking issue such as an infinite loop when navigating the website, but looping is reported in the SEOmoz report. Here is an example of a reported link: http://www.mywebsite.com/Door/Doors/GlassNow-Services/GlassNow-Services/Glass-Compliance-Audit/GlassNow-Services/GlassNow-Services/Glass-Compliance-Audit/ By the way, there is no such crawl error in my Google Webmaster Tools. Any help appreciated.
Technical SEO | mmoezzi0
-
Duplicate Page Content error but I can't see it
Hi All. We're getting a lot of Duplicate Page Content errors but I can't match them up. For example, this page: http://www.daytripfinder.co.uk/attractions/32-antique-cottage. It is showing the on-page properties as follows: Title: "DayTripFinder - Things to do reviewed by you - 7,000 attractions"; Meta Description: "Read Reviews, Browse Opening Hours and Prices. View Photos, Maps. 7,000 UK Visitor Attractions." But this isn't the page title or meta description. And it's showing five (of many other) example pages that share it. Again, the page titles and descriptions are different: http://www.daytripfinder.co.uk/attractions/mckinlay-theatre http://www.daytripfinder.co.uk/attractions/bakers-dolphin http://www.daytripfinder.co.uk/attractions/shipley-park-fishing http://www.daytripfinder.co.uk/attractions/king-johns-lodge-and-gardens http://www.daytripfinder.co.uk/attractions/city-hall Any ideas? Not sure if I'm missing something here! Thanks!
Technical SEO | KateWaite85
-
Keyword-based domains redirecting to a site... is it spam?
Keyword-based domains redirecting to a site are considered spam, aren't they? And if yes, is it considered spam in all cases, whether those domains are related or unrelated to the main site?
Technical SEO | Personnel_Concept0
-
What can I do if Google Webmaster Tools doesn't recognize the robots.txt file?
I'm working on a recently hacked site for a client, and in trying to identify exactly how the hack is running I need to use the Fetch as Googlebot feature in GWT. I'd love to use this, but it thinks the robots.txt is blocking its access, even though the only thing in the robots.txt file is a link to the sitemap. Under the Blocked URLs section of GWT it shows that the robots.txt was last downloaded yesterday, but the information is incorrect. Is there a way to force Google to look again?
Technical SEO | DotCar0