We changed the URL structure 10 weeks ago and Google hasn't indexed it yet...
-
We recently changed the entire URL structure of our website to nice, human-readable URLs, which resulted in a huge number of 404 pages. We did this in mid-March, about 10 weeks ago.
We started with around 5,000 404 pages, and the number is slowly decreasing (we have around 3,000 now).
On some parts of the website we have also set up 301 redirects from the old URLs to the new ones, to avoid showing a 404 page and to pass the indexing signals across, but it doesn't seem to have made any difference.
We've lost a significant amount of traffic because of the URL changes: Google removed the old URLs but hasn't indexed our new URLs yet.
Is there anything else we can do to get our website indexed with the new URL structure quicker?
It might also be useful to know that we have a PageRank of 4 and over 30,000 unique users a month, so I am sure Google crawls the site quite often. Pages we have created since then, which have only ever had the new URL structure, are sometimes indexed within hours and appear in search results the next day!
-
Good point. I'd suggest adding a canonical tag to the new pages, as well as pushing them out to the social media sites for a quicker turnaround.
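For reference, that's a single line in each new page's <head> (the URL here is a placeholder):

```html
<link rel="canonical" href="http://www.example.com/new-category/new-readable-page/" />
```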
-
They already have 5,000 404 pages. I don't mean block any existing pages whose URLs were changed; I mean only block pages that have always returned a 404.
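If you do go that route, a robots.txt sketch, with hypothetical paths standing in for sections that have only ever returned 404s:

```txt
User-agent: *
# Hypothetical paths that never existed as real pages and have always 404ed
Disallow: /old-dynamic-pages/
Disallow: /retired-test-section/
```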
-
I agree with Dave, except I would not recommend blocking any pages with robots.txt. 301 redirect them all to their new pages instead.
-
If you have a site with 5000 pages and you decide to change all the page URLs, you will still have a site with 5000 pages. The problem is that Google has no way of understanding how the old pages map to the new ones. You do.
You need to 301 your old pages to your new ones. This method is a win all around.
You lose zero traffic. You keep 90%+ of the link value, and your visitors can find the pages they are looking for.
Right now Google sees your new pages as duplicate content and won't list them.
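For example, a minimal .htaccess sketch, assuming an Apache server; every path below is a hypothetical placeholder, and the pattern rule only works if the old-to-new mapping is systematic:

```apache
RewriteEngine On

# One-off mapping where the old and new names differ completely
Redirect 301 /old-category/old-page.html /new-category/new-readable-page/

# Pattern-based mapping, e.g. /products/12345/product-name.htm -> /products/product-name/
RewriteRule ^products/\d+/(.+)\.htm$ /products/$1/ [R=301,L]
```

Either way, spot-check a few old URLs with a header checker to confirm each one returns a single 301 straight to its new page.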
-
- Have you set up a Google Webmaster Tools account?
- You should submit a sitemap in Webmaster Tools. Info Here (a minimal sketch follows this list).
- Also make sure all the 404 pages are blocked by robots.txt.
- Add a 301 redirect from all old content to the new content.
This link is about moving to a new domain, but most of the steps still apply.
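A minimal XML sitemap sketch; it should list only the new URLs, and the URL and date below are placeholders:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- One <url> entry per new, canonical URL; values are placeholders -->
  <url>
    <loc>http://www.example.com/new-category/new-readable-page/</loc>
    <lastmod>2012-05-28</lastmod>
  </url>
</urlset>
```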
Related Questions
-
Removing a site from the Google index with noindex meta tags
Hi there! I wanted to remove a duplicated site from the Google index. I've read that you can do this by removing the URL in Google Search Console, but although I can't find the site in Search Console, Google keeps showing it in the SERPs. So I wanted to add a "noindex" meta tag to the site's code, but I've only found out how to do this for individual pages. Can you do the same for an entire site? How can I do it? Thank you for your help in advance! L
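For what it's worth, the usual way to noindex an entire site in one place, rather than editing every page, is an HTTP header. A minimal sketch for Apache, assuming mod_headers is enabled; note that Google must still be able to crawl the pages (so don't also block them in robots.txt), or it will never see the header:

```apache
# Send "X-Robots-Tag: noindex" on every response from this site
Header set X-Robots-Tag "noindex"
```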
Technical SEO | | Chris_Wright1 -
Not all images indexed in Google
Hi all, We recently ran into an unusual issue with images in the Google index. We have more than 1,500 images in our sitemap, but according to Search Console only 273 of those are indexed. If I check Google image search directly, I find more images in the index, but still not all of them. For example, this post has 28 images and only 17 are indexed in Google image search. This is happening on other posts as well. We've checked all the possible causes (missing alt text, images loaded as backgrounds, file size, fetch and render in Search Console), but none of them apply in our case. So everything looks fine, yet not all the images are in the index. Any ideas on this issue? Your feedback is much appreciated, thanks
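One thing that can help surface unindexed images is an image sitemap entry for each post, using Google's image sitemap extension. A sketch with placeholder URLs:

```xml
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:image="http://www.google.com/schemas/sitemap-image/1.1">
  <url>
    <loc>http://www.example.com/post-with-images/</loc>
    <!-- Repeat image:image for each image on the page; URLs are placeholders -->
    <image:image>
      <image:loc>http://www.example.com/images/photo-01.jpg</image:loc>
    </image:image>
  </url>
</urlset>
```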
Technical SEO | | flo_seo1 -
Category URL Pagination where URLs don't change between pages
Hello, I am working on an e-commerce site with categories that span multiple pages. To avoid pagination issues, I was thinking of using rel="next" and rel="prev" along with canonical tags. I noticed a site where the URL doesn't change between pages, so whether you're on page 1, 2, or 3 of the same category, the URL stays the same. Would this be a cleaner way of dealing with pagination?
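For reference, the usual markup when URLs do change is a self-referencing canonical plus rel="prev"/"next" in each page's <head>; a sketch for page 2, with placeholder URLs. If the URL genuinely never changes between pages, search engines have no distinct URLs to crawl, so pages 2 and beyond effectively can't be indexed at all:

```html
<!-- In the <head> of page 2 of a paginated category; URLs are placeholders -->
<link rel="canonical" href="http://www.example.com/category/?page=2" />
<link rel="prev" href="http://www.example.com/category/" />
<link rel="next" href="http://www.example.com/category/?page=3" />
```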
Technical SEO | | whiteonlySEO0 -
Will a robots.txt 'disallow' of a directory keep Google from seeing 301 redirects for pages/files within the directory?
Hi- I have a client who had thousands of dynamic PHP pages indexed by Google that shouldn't have been. He has since blocked these PHP pages via a robots.txt disallow. Unfortunately, many of those PHP pages were linked to by high-quality sites multiple times (instead of the static URLs) before he put up the disallow. If we create 301 redirects for some of these PHP URLs that are still showing high-value backlinks and send them to the correct static URLs, will Google even see these 301 redirects and pass link value to the proper static URLs? Or will the robots.txt keep Google away, so that we lose all these high-quality backlinks? I guess the same question applies if we use the canonical tag instead of the 301: will the robots.txt keep Google from seeing the canonical tags on the PHP pages? Thanks very much, V
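The short answer: a blocked URL's 301 (or canonical tag) is invisible to Google, because it never fetches the page to see it. One option is to carve the high-value URLs out of the disallow so their redirects can be crawled; a robots.txt sketch with hypothetical paths (Google honors Allow, with the most specific matching rule winning):

```txt
User-agent: *
Disallow: /dynamic-php-pages/
# Hypothetical carve-outs: let Google crawl these so it can see their 301s
Allow: /dynamic-php-pages/high-value-page-1.php
Allow: /dynamic-php-pages/high-value-page-2.php
```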
Technical SEO | | Voodak0 -
Yet Another, Yet Important URL structure query.
Massive changes to our stock media site and structure here. While we have an extensive category system, our category pages have previously just been our search pages, using ID numbers for sorting categories. Now we have individual category pages. We have about 600 categories, up to 4 tiers deep, about 1,000,000 total products, and issues with products appearing to be duplicates. Our current URL structure for products looks like this: http://example.com/main-category/12345/product-name.htm Here is how I was planning to do the new structure:
Cat tier 1: http://example.com/category-one/
Cat tier 2: http://example.com/category-one/category-two/
Cat tier 3: http://example.com/category-one-category-two/category-three
Cat tier 4: http://example.com/category-one-category-two-category-three/category-four/
Product: http://example.com/category-one-category-two-category-three/product-name-12345.htm
Thoughts? Thanks! Craig
Technical SEO | | TheCraig0 -
Duplicate pages in Google index despite canonical tag and URL Parameter in GWMT
Good morning Moz... This is a weird one. It seems to be a "bug" with Google, honest... We migrated our site www.three-clearance.co.uk to a Drupal platform over the new year. The old site used URL-based tracking for heat-map purposes, so for instance www.three-clearance.co.uk/apple-phones.html could be reached via www.three-clearance.co.uk/apple-phones.html?ref=menu or www.three-clearance.co.uk/apple-phones.html?ref=sidebar and so on. GWMT was told of the ref parameter, and the canonical meta tag was used to indicate our preference. As expected, we encountered no duplicate content issues and everything was good. This is the chain of events:
- Site migrated to the new platform following best practice, as far as I can attest to. The only known issue was that the verification for both Google Analytics (meta tag) and GWMT (HTML file) didn't transfer as expected, so between the relaunch on 22nd Dec and the fix on 2nd Jan we have no GA data, and presumably there was a period where GWMT became unverified.
- URL structure and URIs were maintained 100% (which may be the problem, now).
- Yesterday I discovered 200-ish 'duplicate meta titles' and 'duplicate meta descriptions' in GWMT. Uh oh, thought I. Expand the report out and the duplicates are in fact ?ref= versions of the same root URL. Double uh oh, thought I. Run, not walk, to Google and do some fu: http://is.gd/yJ3U24 (9 versions of the same page in the index, the only variation being the ?ref= URI). Checked Bing, and it has indexed each root URL once, as it should.
Situation now:
- The site no longer uses the ?ref= parameter, although of course some external backlinks still do. This was intentional and happened when we migrated.
- I 'reset' the URL parameter in GWMT yesterday, given that there's no "delete" option. The "URLs monitored" count went from 900 to 0, but today it is at over 1,000 (another wtf moment).
- I also resubmitted the XML sitemap and fetched 5 'hub' pages as Google, including the homepage and the HTML site-map page.
- The ?ref= URLs in the index have the disadvantage of actually working, given that we transferred the URL structure and the webserver just ignores the nonsense arguments and serves the page. So I assume Google assumes the pages still exist, and won't drop them from the index but will instead apply a dupe-content penalty. Or maybe call us a spam farm. Who knows.
Options that occurred to me (other than maybe making our canonical tags bold, or locating a Google bug submission form 😄) include:
A) robots.txt-ing .?ref=., but to me this says "you can't see these pages", not "these pages don't exist", so it isn't correct.
B) Hand-removing the URLs from the index through a page removal request per indexed URL.
C) Applying a 301 to each indexed URL (hello Bing dirty-sitemap penalty).
D) Posting on SEOmoz, because I genuinely can't understand this. Even if the gap in verification caused GWMT to forget that we had set ?ref= as a URL parameter, the parameter was no longer in use, because the verification only went missing when we relaunched the site without this tracking. Google is seemingly 100% ignoring our canonical tags as well as the GWMT URL setting. I have no idea why, and can't think of the best way to correct the situation. Do you? 🙂
Edited to add: As of this morning the "edit/reset" buttons have disappeared from the GWMT URL Parameters page, along with the option to add a new one. There are no messages explaining why, and of course the Google help page doesn't mention disappearing buttons (it doesn't even explain what 'reset' does, or why there's no 'remove' option).
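On option C, the 301s can be a single rule rather than one per indexed URL. A mod_rewrite sketch that redirects any request carrying a ref= parameter to the same path with the query string stripped (safe here since ref= was the only parameter in play):

```apache
RewriteEngine On
# 301 any URL with a ref= tracking parameter to the clean URL; the trailing ? drops the query string
RewriteCond %{QUERY_STRING} (^|&)ref=
RewriteRule ^(.*)$ /$1? [R=301,L]
```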
Technical SEO | | Tinhat0 -
How to keep a URL's social equity during a URL structure/name change?
We are in the process of making a significant URL name/structure change to one of our properties, and we want to keep the social equity (likes, shares, +1s, tweets) from the old URLs on the new ones. We have tried many different options without success. We are running our social buttons in an iframe. Thanks
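One approach that is often suggested, though results vary: Facebook aggregates like/share counts against the URL in the og:url tag, so the new page can keep pointing og:url at the old URL whose counts you want to preserve, provided that old URL still resolves (or 301s) for Facebook's scraper. A sketch with a placeholder URL:

```html
<!-- In the <head> of the NEW page; the URL is a placeholder for the old address that holds the counts -->
<meta property="og:url" content="http://www.example.com/old-url-with-the-likes/" />
```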
Technical SEO | | OlivierChateau0 -
URLs: To Change or Not to Change
Hello, We recently launched a redesigned site in Drupal in December of last year. We are an eco-travel company. My current URLs look like this: /africa-and-middle-east/kenya-tanzania /central-south-america/galapagos-islands My pages have good term-targeting grades, and the rankings for the terms we are targeting ("kenya and tanzania safaris" and "galapagos islands cruises") are decent but not great: most are on page 2 or 3. The one URL where I targeted our most important term, "amazon river cruises," is still on page 2: /central-south-america/amazon-river-cruises My questions are:
- Did I miss an opportunity with the rest of the URLs, and should I consider changing them to more targeted terms with 301s? Or, since the new site launched in January, have I simply not given the new URLs enough time to be indexed and mature?
- Would it be easier to set up landing pages with unique article content that target terms such as "galapagos islands cruises" and "kenya and tanzania safaris"? If so, how can I do it in such a way as to not "compete" with the pages I want to drive visitors to?
- This also raises the question of redirecting the same URL twice, i.e. I would have two redirects in place for the same URL: one from the former site to the new site, and yet another redirect to the most-recent URL. Is that a problem?
Sorry if I've asked too many questions in one post. 😉 Any advice appreciated.
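On the double-redirect question: chains are worth collapsing so each old URL 301s straight to the final destination in one hop. An .htaccess sketch using the amazon page as the target; the two old paths are hypothetical:

```apache
# Point both the original URL and the interim URL straight at the final page
Redirect 301 /amazon-cruises.html /central-south-america/amazon-river-cruises
Redirect 301 /south-america/amazon-cruises /central-south-america/amazon-river-cruises
```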
Technical SEO | | csmithal0