How Does Google's "index" find the location of pages in the "page directory" to return?
-
This is my understanding of how Google's search works, and I am unsure about one thing in particular:
- Google continuously crawls websites and stores each page it finds (let's call this store the "page directory")
- Google's "page directory" is a cache, so it isn't the "live" version of the page
- Google has separate storage called "the index", which contains keywords. These keywords in "the index" point to the pages in the "page directory" that contain the same keywords.
- When someone searches for a keyword, that keyword is looked up in the "index", which returns all the relevant pages from the "page directory"
- These returned pages are then ranked by the algorithm
The one part I'm unsure of is how Google's "index" knows the location of the relevant pages in the "page directory". The keyword entries in the "index" must point into the "page directory" somehow. I'm thinking each page in the "page directory" has a URL, and the entries in the "index" contain these URLs. Since Google's "page directory" is a cache, would the URLs be the same as on the live website (and would the keywords in the "index" point to these URLs)?
For example, if a webpage is found at www.website.com/page1, would the "page directory" store this page under that URL in Google's cache?
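To make sure I'm describing it clearly, here's a rough sketch of the model I have in mind (made-up data structures for illustration, not anything Google actually uses), where the page's URL itself acts as the pointer from the "index" into the "page directory":

```python
# A rough sketch of my mental model -- hypothetical structures, not Google's actual ones.

# "Page directory": cached copies of crawled pages, keyed by their live URL.
page_directory = {
    "www.website.com/page1": "cached HTML mentioning widgets and gadgets...",
    "www.website.com/page2": "cached HTML mentioning gadgets only...",
}

# "Index": each keyword points at the URLs of the cached pages that contain it.
index = {
    "widgets": ["www.website.com/page1"],
    "gadgets": ["www.website.com/page1", "www.website.com/page2"],
}

def search(keyword):
    """Look the keyword up in the index and return the matching cached pages."""
    urls = index.get(keyword, [])
    return [(url, page_directory[url]) for url in urls]

print(search("gadgets"))  # both cached pages come back, ready to be ranked
```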
The reason I want to discuss this is to understand the effects of changing a page's URL by better understanding how the search process works.
-
Yeah, that makes sense. I also have a lot of experience with databases and the back ends of websites, so I know your language.
I'm wondering how Google correlates the URL with the page entries, then. Maybe each page entry has a URL field so Google knows the location of the live version and can continually update that entry in the "page directory" database?
-
That is a question no one here can answer. We can't speak to how Google does things internally.
But... as a web/database programmer for 14+ years, let me tell you how it's "generally" done.
Usually, when you have to link separate sets of data together (i.e. databases or tables), a unique ID is created to link them, and that ID is never changed. When a new record is created, it lives with that ID for its life. This is what's known as a unique identifier: typically an auto-incremented number that is dynamically generated and can never be repeated.
Since records tend to be linked this way, any other fields in the record (firstName, lastName, url, blah blah) can then be changed without the original ID being disturbed.
So to answer your question: from my experience, I would assume Google links on a unique identifier of some sort and not on the URL directly.
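To make that concrete, here's a minimal sketch of the general pattern (my own illustration, not Google's actual schema). The index points at an auto-generated ID, and the URL is just another field on the page record that can change freely:

```python
# A minimal sketch of the general linking pattern -- not Google's actual schema.

# "Page directory": each record gets a unique, auto-incremented ID that never changes.
# The URL is just an ordinary field on the record.
page_directory = {
    1: {"url": "www.website.com/page1", "cached_html": "widgets and gadgets..."},
    2: {"url": "www.website.com/page2", "cached_html": "gadgets only..."},
}

# "Index": keywords point at the unique IDs, not at the URLs.
index = {
    "widgets": [1],
    "gadgets": [1, 2],
}

def search(keyword):
    """Resolve a keyword to page records via their unique IDs."""
    return [page_directory[page_id] for page_id in index.get(keyword, [])]

# If a page's URL changes, only that record's url field is touched;
# every index entry pointing at ID 1 is left undisturbed.
page_directory[1]["url"] = "www.website.com/renamed-page"
print(search("widgets"))
```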
Hope I didn't lose you... it's my favorite subject, but not many people here speak that language much.
-
That makes sense, thanks for getting back to me so fast!
Perhaps you can help answer my next question. I have a client who used to host his website at "www.oldurl.com" and has migrated it to "www.newurl.com". He still wants to use his old domain "www.oldurl.com", so he set up forwarding/masking: when someone tries to access "www.oldurl.com" they are forwarded to "www.newurl.com", but the URL shown to the user is "www.oldurl.com".
My client wants his old URL "www.oldurl.com" to be the one ranked in Google, but from what I understand his new URL will be ranked instead. I know masking is really bad for SEO, and I want to educate my client as to why on the technical side. I have read that Google sees all the content as duplicate when masking is used. Do you know the details as to why?
-
Hey Cesar,
Thanks for the links! Really useful info there.
Unfortunately, I couldn't find the answer I was looking for in them, so I'll be more specific about what I'm asking.
From what I understand, Google uses two database systems: one contains keywords and the other contains cached pages. How does a keyword entry point to a page entry? Does it use a unique ID number, or does it use the URL that the page has in its "live" version on the web?
-
Just because you create a new page and delete the old one doesn't mean Google knows about it immediately. So if Google crawls the new page before it has had a chance to crawl the old one, it will indeed consider the new page to be duplicate content. Then, when it tries to crawl the old page, it will discover that it no longer exists. However, as long as links to the old page exist, it will continue to try to crawl that page. Eventually it may de-index the old page if it keeps returning an error.
Bottom line, if you are moving content to a new URL, be sure to include a 301 redirect on the old page so that Google (and other search engines) know that the piece of content has moved. You can also do this with canonical tags, but 301s are more effective.
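For what it's worth, here's a bare-bones illustration of what a 301 looks like at the HTTP level (Python's standard library, with made-up example URLs, purely for demonstration; in practice you'd configure the redirect in your web server or CMS):

```python
# Bare-bones illustration of a 301 redirect using Python's standard library.
# In practice you'd configure this in your web server (Apache, nginx, etc.) or CMS.
from http.server import BaseHTTPRequestHandler, HTTPServer

OLD_PATH = "/old-page"                         # hypothetical old location
NEW_URL = "http://www.newurl.com/new-page"     # hypothetical new location

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == OLD_PATH:
            # 301 "Moved Permanently": browsers follow it, and search engines
            # treat it as a signal that the content now lives at the new URL.
            self.send_response(301)
            self.send_header("Location", NEW_URL)
            self.end_headers()
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8000), RedirectHandler).serve_forever()
```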
-
Thanks for the response and links, Takeshi. Maybe I can rephrase the question to be more clear. Let's say a piece of content (or page) is at the URL "www.oldurl.com/page". During a migration, this same piece of content moves to the URL "www.newurl.com/page". "www.oldurl.com" doesn't exist anymore, so there isn't duplicate content on the live web.
Would Google create a new entry in its "page directory" (what is the industry-standard name for this directory?) and give it the URL "www.newurl.com/page"?
If it does create a new entry, would Google keep the old entry for "www.oldurl.com/page" even though the old URL no longer exists on the "live" web?
-
Wow, you just asked questions that would require about 10,000,000,000 answers.
Let's start here:
- Video from the man himself, Matt Cutts (works for Google)
- A great Web 2.0 page created by Google themselves
- An older but still relevant description of how "backlinks" affect PR (from Google themselves)
-
This is a pretty confusing question, and the terminology you're using is different from the industry standard. Check out these links for a quick overview of how Google works:
- http://www.google.com/insidesearch/howsearchworks/thestory/
- http://www.googleguide.com/google_works.html
If you are just worried about changing a page's URL, be sure to put in a 301 redirect from the old page to the new page. That way, even if Google has an older version of the page indexed, it will automatically redirect users to the new page as well as help Google discover the new location of the page.