Existing Pages in Google Index and Changing URLs
-
Hi!!
I am launching a newly recoded site this week and had another newbie question.
The URL structure has changed slightly, and I have installed a 301 redirect to take care of that. I am wondering how Google will handle my "old" pages. Will they just fall out of the index? Or does the 301 redirect tell Google to rewrite the URLs in the index?
I am just concerned I may see an "old" page and a "new" page with the same content in the index. Just want to make sure I have covered all my bases.
Thanks!!
Lynn
-
Hi!! Thanks Mike! I didn't realize I was passing the SIDs (since they're not in the URL), but it makes sense that I am. I will take this to a private question and let you know what I hear back.
Thanks for your help!
Lynn
-
I would be happy to help if I knew the answer, but I don't. I don't have session IDs in my URLs (I use cookie-based session management instead, mostly because I wanted clean URLs for bookmarking and SEO). Perhaps someone else who uses session IDs in URLs could answer, or else Google "session IDs in URLs" and see what comes up. I found this one: http://www.searchengineguide.com/stoney-degeyter/why-session-ids-and-search-engines-dont.php
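I don't know what platform your cart runs on, but if it happens to be PHP, my understanding is that a couple of settings keep session IDs out of URLs entirely. This is just a hypothetical sketch (it assumes PHP running as an Apache module, so the directives can live in .htaccess), so verify it against your own setup:
############################################
# hypothetical sketch, assuming a PHP-based cart under mod_php:
# store the session only in a cookie, never in the URL
php_flag session.use_only_cookies on
# stop PHP from appending ?PHPSESSID=... to links and forms
php_flag session.use_trans_sid off
###########################################
If the cart software manages sessions itself, it may have its own switch for this, which would be the safer place to change it.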
-
Hi! I am in Google Webmaster Tools but haven't played with it extensively since I set it up and added my domain.
Looking at it, I'm seeing some crawl errors. Most of them have a SID in them. Why would Google be trying to crawl a session ID?
That brings up another question. The shopper is able to narrow down a category by manufacturer and price. These links will be crawled and indexed as well. Do I want them to be???
Anything you can offer would be appreciated. If it's too in-depth (meaning it will take you too much time), I can take this to a private question.
Thank you!
Lynn
-
Hi!! The only thing that has changed is the removal of /shop/ from the product page URLs. Here is the 301 I have installed. I was told all was well with it, but I would love another set of eyeballs if you can confirm it looks good. I am actually ranking for some things, so I am paranoid I am going to mess up the site move. Thanks for the info. I really appreciate it.
############################################
# enable rewrites
Options +FollowSymLinks
RewriteEngine on
# earlier attempt, left commented out (same pattern, trailing slash on the target)
#RedirectMatch 301 ^/shop?/$ http://hiphound.com/
# active rule: 301 the old /shop/ path to the homepage
RedirectMatch 301 ^/shop?/$ http://hiphound.com
###########################################
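One thing I'm unsure about: does that pattern also catch the individual product URLs (e.g. /shop/some-product), or only /shop/ itself? I also just noticed the ? after "shop" makes the "p" optional, which I don't think I intended. In case it helps, here is a sketch of the kind of rule I thought might be needed for the product pages; it assumes the rest of the path is unchanged on the new site, so please correct me if this is wrong:
############################################
# sketch only, not the live rule: redirect every old /shop/... URL to the
# same path without the /shop/ prefix, keeping the rest of the URL intact
RedirectMatch 301 ^/shop/(.*)$ http://hiphound.com/$1
###########################################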
-
Crawl rate depends on your site size, your site's rate of change, how fast you serve pages, and I'm sure a couple of other factors. If you're not yet on Google Webmaster Tools then you should be (it's free). It will show you how many pages per day Googlebot is crawling on your site.
-
Thank you!! Great article!
Follow-up - how long does it take for the URLs to be rewritten in the Google index? Is that done on the next crawl?
Thanks! I really appreciate the help.
Lynn
-
If you have set up the 301 correctly, then when a user tries to visit the old page (either by typing the old URL or by clicking a search result) they will be redirected to the new content. When the site is reindexed, the old results should fall out of the index.
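For a single page, a correct 301 can be as simple as one line in .htaccess. This is purely illustrative (the paths and domain below are placeholders, not your actual URLs):
############################################
# illustrative placeholder example: permanently redirect one old URL to its new location
Redirect 301 /old-page.html http://www.example.com/new-page.html
###########################################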
-
You should be okay with 301s. See http://www.atlantaanalytics.com/practicing-web-analytics/how-does-google-analytics-handle-301-and-302-redirects/