404'd pages still in index
-
I recently launched a site and shortly after performed a URL rewrite (not the greatest idea, I know). The developer 404'd the old pages instead of setting up permanent 301 redirects, which caused a mess in the index. I have tried to use Google's removal tool to remove these URLs from the index. The pages were being removed, but now I am finding them in the index as bare URLs pointing to the 404'd pages (i.e. no title tag or meta description). Should I wait this out, or go back now and 301 redirect the old URLs (the ones that are 404'd) to the new URLs? I am fairly sure this is the reason for my lack of rankings, as the rest of my site is pretty well optimized and I have some quality links.
-
Will do. Thanks for the help.
-
I think the latter - remove the robots.txt entries and 301 the old URLs.
But (if you can) leave a couple without a 301 and see what difference, if any, you get - I'd love to hear how it works out.
-
Is it better to remove the robots.txt entries specific to the old URLs so Google can see the 404s and drop those pages at its own pace, or to remove those entries and also 301 the old URLs to the new ones? Those seem to be my two options. Obviously, I want to do whatever is best for the site's rankings and will see the fastest turnaround. Thanks for your help on this, by the way!
-
I'm not saying remove the whole robots.txt file - just the bits relating to the old URLs (if you have entries in robots.txt that affect them).
e.g. if your robots.txt blocks access to the old URLs, then you should remove that line from the robots.txt; otherwise Google won't be able to crawl those pages to 'see' the 404s and realise that they're not there.
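As a rough sketch of the kind of entry to delete - the /index/home/ path here is an assumption borrowed from the example URLs elsewhere in this thread, not the poster's actual file:

User-agent: *
Disallow: /index/home/

Deleting the Disallow line (or the whole block, if it exists only for the old URLs) lets Googlebot fetch the old pages again and see the 404s, or the 301s if you add them.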
My guess is a few weeks before it all settles down, but that really is a finger-in-the-air guess. I went through a similar scenario, moving URLs and then moving them again shortly after the first move - it took a month or two.
-
I am a little confused regarding removal of the robots.txt file, since that is a step in requesting removal from Google (per their removal tool requirements). My natural tendency is to 301 redirect the old URLs to the new ones. Will I need to remove the robots.txt file prior to permanently redirecting the old URLs to the new ones? And roughly how long does it take Google to remove old URLs after a 301?
-
OK, got that - so that sounds like an external rewrite, which is fine. URL only, with no title or description, is what you typically get when you block crawling via robots.txt; if that's your situation, I'd suggest removing the block so that Google can crawl those pages and find that they are 404s. Sounds like they'll fall out of the index eventually. Another thing you could try, to hurry things along:
1. 301 the old URLs to the new ones (a sketch follows below).
2. Submit a sitemap containing the old URLs, so that they get crawled and the 301s are picked up.
3. Update your sitemap and resubmit it with only the new URLs.
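For step 1, a minimal sketch of the redirect in an Apache .htaccess file - assuming (and this is an assumption, since the server isn't named in this thread) that the site runs on Apache and the mapping really is as simple as the /index/home/keyword to /keyword example used here:

# Hypothetical rule: permanently redirect /index/home/anything to /anything
# Adjust the pattern to the site's real URL structure before using.
RewriteEngine On
RewriteRule ^index/home/([^/]+)/?$ /$1 [R=301,L]

The R=301 flag is what makes the redirect permanent; a bare [R] defaults to a temporary 302, which wouldn't clear the old URLs out of the index in the same way.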
-
When I say URL rewrite, I mean we restructured the URLs to be cleaner and more search-friendly. For example, a URL that was www.example.com/index/home/keyword became www.example.com/keyword. Also, the old URLs (e.g. www.example.com/index/home/keyword) are showing towards the end of a site:example.com search as just the bare URL - no title or meta description. Is this a sign that they are on the way out of the index? Any insight would be helpful.
-
A couple of things probably need clarifying. When you say URL rewrite, I'm assuming you mean an external rewrite (in effect, a redirect)? An internal rewrite, of itself, should make no difference at all to how external visitors or engines see your URLs/pages. If the old pages had links or traffic, I would be inclined to 301 them to the new pages. If they didn't, leave them; they'll fall out eventually - they're not in an XML sitemap by any chance, are they? (If so, update the sitemap; see the sketch below.) You often see a drop in rankings when restructuring a site, and in my experience it can take a few weeks to recover. To give you an example, it took nearly two months for the non-www version of our site to disappear from the index after a similar move (and some messing about with redirects).
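On the sitemap point: an XML sitemap is small enough to sketch in full. Using the hypothetical example.com URLs from this thread, an updated sitemap containing only the new URLs would look something like this:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com/keyword</loc>
  </url>
</urlset>

For the hurry-it-along trick described earlier in the thread, you'd temporarily list the old /index/home/keyword URLs the same way so the 301s get crawled, then resubmit with only the new entries.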
Related Questions
-
Paginated category pages still showing in Google
Despite our blog using rel="next" and rel="prev", we're still finding paginated pages getting impressions in Google, suggesting they are taking up unnecessary crawl budget. An example is: https://www.theukdomain.uk/seo/page/2/ (the expected markup is sketched just after this question). What steps would you recommend I take to most benefit my site's SEO? Thanks, Sam
Intermediate & Advanced SEO | sjefferies
-
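A side note on the pagination question above: correctly paired tags sit in the page's <head> and would look something like the lines below. The neighbouring URLs are assumptions inferred from the /page/2/ pattern, not taken from the actual site, and Google treats these tags as hints rather than directives, so impressions alone don't prove the markup is broken.

<link rel="prev" href="https://www.theukdomain.uk/seo/">
<link rel="next" href="https://www.theukdomain.uk/seo/page/3/">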
How long will old pages stay in Google's cache index? We have a new site that is two months old, but we are seeing old pages even though we used 301 redirects.
Two months ago we launched a new website (same domain) and implemented 301 redirects for all of the pages. Two months later, we are still seeing old pages in Google's cache index. How long should I tell the client it will take for them all to be removed from search?
Intermediate & Advanced SEO | Liamis
-
All URLs seem to exist (no 404 errors), but they don't.
Hello, I am doing an SEO audit for a website which only has a few pages. I have no cPanel credentials, no FTP, no WordPress admin account; I'm just looking at it from the outside. The site works, the Moz crawler didn't report any problems, and I can reach every page from the menu. The problem is that, except for the few actual pages, no matter what you type after the domain name you always reach the home page and never get a 404 error. E.g. http://domain.com/oiuxyxyzbpoyob/ (there is no such page, but I don't get a 404 error; the home page is displayed and the URL in the browser remains http://domain.com/oiuxyxyzbpoyob/, so it's not a 301 redirect). http://domain.com/WhatEverYouType/ (same). Could this be an important SEO issue (i.e. resulting in an infinite number of duplicate-content pages)? Do you think I should require the owner to prevent this from happening? Should I look into the .htaccess file to fix it? (A quick outside check is sketched just after this question.) Thank you Mozers!
Intermediate & Advanced SEO | DoMiSoL
-
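One way to verify the behaviour described in the question above from the outside - curl is an assumed tool here, not something the poster mentions - is to ask for the HTTP status code directly:

curl -s -o /dev/null -w "%{http_code}\n" http://domain.com/oiuxyxyzbpoyob/

If that prints 200 rather than 404, the server really is answering arbitrary paths with a success status, which is the classic soft-404 setup and can indeed spawn an unlimited number of duplicate-content URLs.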
Is a 404, then a meta refresh 301 to the home page OK for SEO?
Hi Mozzers, I have a client that had a lot of soft 404s that we wanted to tidy up; basically everything was going to the homepage. I recommended they implement proper 404s with a custom 404 page, and 301 any URLs that really should be redirected to another page. What they have actually done is implement a 404 (without the custom 404 page) and then, after a short delay, a meta refresh redirect to the homepage (the tag involved is sketched just after this question). I understand why they want to do this, as they don't want to lose the traffic, but is this a problem for SEO and the index? Or will Google treat it as a hard 404 anyway? Many thanks
Intermediate & Advanced SEO | Chammy
-
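For clarity on the setup described above: a meta refresh is an HTML tag along these lines (the 2-second delay is a stand-in for whatever "short delay" was actually used), not an HTTP-level redirect:

<meta http-equiv="refresh" content="2;url=http://www.example.com/">

Since the page itself is served with a 404 status, search engines will generally honour the 404 they see in the response headers; a real 301 has to be sent at the HTTP level.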
How can a page be indexed without being crawled?
Hey moz fans,
In the Google getting started guide it says: "Note: Pages may be indexed despite never having been crawled: the two processes are independent of each other. If enough information is available about a page, and the page is deemed relevant to users, search engine algorithms may decide to include it in the search results despite never having had access to the content directly. That said, there are simple mechanisms such as robots meta tags to make sure that pages are not indexed."
How can this happen? I don't really get the point. (The tag referred to is sketched just after this question.)
Thank you.
Intermediate & Advanced SEO | atakala
-
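For reference on the quote above: the robots meta tag Google's guide mentions is a single line in a page's <head>:

<meta name="robots" content="noindex">

That tag prevents indexing, but as the quoted passage notes, indexing and crawling are separate processes: a page Google has never fetched can still be indexed from external signals, such as the anchor text of links pointing at it.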
How do I get my Golf Tee Times pages to index?
I understand that Google does not want to index search results pages, but we have a large number of discount tee times that you can search for, and they are displayed as helpful listing pages, not search results. Here is an example: http://www.activegolf.com/search-northern-california-tee-times?Date=8%2F21%2F2013&datePicker=8%2F21%2F2013&loc=San+Diego%2C+CA&coupon=&zipCode=&search= These pages are updated daily with the newest tee times. We don't exactly want every URL with every parameter indexed, but at least http://www.activegolf.com/search-northern-california-tee-times. It's strange because all of the tee times are present in the HTML and are not rendered via JavaScript. An example of similar pages would be Yelp; this page, for instance, is indexed just fine: http://www.yelp.com/search?cflt=dogwalkers&find_loc=Lancaster%2C+MA I know ActiveGolf.com is not as powerful as Yelp, but it's still strange that none of our tee times search pages are being indexed. Would appreciate any ideas out there!
Intermediate & Advanced SEO | CAndrew14
-
Page loads fine for users but returns a 404 for Google & Moz
I have an e-commerce website that is built using WordPress and the WP E-commerce plugin. The products have always worked fine, the pages display fine in a browser, and people can purchase the products with no problems. However, in the Google Merchant feed and in the Moz crawl diagnostics, certain product pages are returning a 404 error and I can't work out why, especially as the pages load fine in the browser. I had a look at the page headers and can see that, when the page loads, the initial request returns a 404, then every other request goes through and loads fine. Can anyone help me with why this is happening? A link to the product I have been using to test is: http://earthkindoriginals.co.uk/organic-clothing/lounge-wear/organic-tunic-top/ Here is part of the header dump that I did: http://earthkindoriginals.co.uk/organic-clothing/lounge-wear/organic-tunic-top/
GET /organic-clothing/lounge-wear/organic-tunic-top/ HTTP/1.1
Host: earthkindoriginals.co.uk
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:21.0) Gecko/20100101 Firefox/21.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-gb,en;q=0.5
Accept-Encoding: gzip, deflate
Cookie: __utma=159840937.1804930013.1369831087.1373619597.1373622660.4; __utmz=159840937.1369831087.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); wp-settings-1=imgsize%3Dmedium%26hidetb%3D1%26editor%3Dhtml%26urlbutton%3Dnone%26mfold%3Do%26align%3Dcenter%26ed_size%3D160%26libraryContent%3Dbrowse; wp-settings-time-1=1370438004; __utmb=159840937.3.10.1373622660; PHPSESSID=e6f3b379d54c1471a8c662bf52c24543; __utmc=159840937
Connection: keep-alive
HTTP/1.1 404 Not Found
Date: Fri, 12 Jul 2013 09:58:33 GMT
Server: Apache
X-Powered-By: PHP/5.2.17
X-Pingback: http://earthkindoriginals.co.uk/xmlrpc.php
Expires: Wed, 11 Jan 1984 05:00:00 GMT
Cache-Control: no-cache, must-revalidate, max-age=0
Pragma: no-cache
Vary: Accept-Encoding
Content-Encoding: gzip
Content-Length: 6653
Connection: close
Content-Type: text/html; charset=UTF-8
Intermediate & Advanced SEO | leapSEO
-
Have you ever seen this 404 error: 'www.mysite.com/Cached' in GWT?
Google Webmaster Tools just started showing some strange pages under "not found" crawl errors: www.mysite.com/Cached and www.mysite.com/item-na... <--- with the three dots, INSTEAD of www.mysite.com/item-name/ I have just 301'd them for now, but is this a sign of a technical issue? The site is PHP/SQL and I'm doing the URL rewrites/301s etc. in .htaccess. Thanks! -Dan EDIT: Also wanted to add, there is no 'linked from' page shown.
Intermediate & Advanced SEO | evolvingSEO