Working out exactly how Google is crawling my site if I have loooots of pages
-
I am trying to work out exactly how Google is crawling my site, including its entry points and the path it takes from there. The site has millions of pages, with hundreds of thousands indexed. I have simple log files with a timestamp and the URL Googlebot requested. Unfortunately there are hundreds of thousands of entries even for a single day, and since it is a massive site I am finding it hard to trace the spider's paths. Is there any way, using the log files and Excel or other tools, to work this out simply?

Also, I was expecting the bot to go through each level almost instantaneously, e.g. main page → category page → subcategory page (so I expected the same timestamp), but this does not appear to be the case. Does the bot follow a path right through to the deepest level it can reach (or is allowed to) on that crawl, and then return to the higher-level category pages at a later time?

Any help would be appreciated.
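For what it's worth, the kind of reconstruction I'm after could probably be scripted rather than done in Excel: sort the hits by timestamp and split them into "crawl sessions" wherever there's a long idle gap, then look at URL depth within each session. A minimal sketch — the log line format and the 30-minute gap are assumptions, not my real format:

```python
import re
from datetime import datetime, timedelta

# Hypothetical log format: "2023-05-01 12:00:03 /category/sub/page"
LINE_RE = re.compile(r"^(\S+ \S+)\s+(\S+)$")

def parse_log(lines):
    """Yield (timestamp, url) pairs from simple timestamp+URL log lines."""
    for line in lines:
        m = LINE_RE.match(line.strip())
        if not m:
            continue
        ts = datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S")
        yield ts, m.group(2)

def crawl_paths(hits, gap=timedelta(minutes=30)):
    """Split time-ordered hits into 'sessions' separated by idle gaps."""
    sessions, current = [], []
    for ts, url in sorted(hits):
        if current and ts - current[-1][0] > gap:
            sessions.append(current)
            current = []
        current.append((ts, url))
    if current:
        sessions.append(current)
    return sessions

def depth(url):
    """Crude crawl depth: number of path segments in the URL."""
    return len([s for s in url.split("/") if s])
```

Plotting `depth()` over time within each session would show whether the bot dives to the deepest level first or sweeps level by level.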
Cheers
-
Can you explain how you built your sitemap for this, please?
-
I've run into the same issue on a site with 40k+ pages. That's far from your overall page count, but maybe the flow is the same overall.

The site I was working on had a structure about five levels deep. Some areas within the last level were out of reach and didn't get indexed. More than that, even a few areas on level 2 were not present in the Google index, and Googlebot didn't visit those either.

I created a large XML sitemap and a dynamic HTML sitemap with all the pages on the site, and submitted the XML sitemap via Webmaster Tools, but that didn't solve the issue: the same areas stayed out of the index and didn't get hit. The huge HTML sitemap was impossible to follow from a user's point of view anyway, so I didn't keep it online for long, but I'm sure it couldn't have worked that way either.
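One note on the XML sitemap at this scale: the sitemap protocol caps each file at 50,000 URLs, so a site with millions of pages needs many sitemap files tied together by a sitemap index. A rough sketch of generating that split — the domain and file names here are placeholders:

```python
from pathlib import Path
from xml.sax.saxutils import escape

MAX_URLS = 50_000  # per-file limit in the sitemap protocol

def write_sitemaps(urls, out_dir, base="https://www.example.com"):
    """Split a large URL list into <=50k-entry sitemap files plus an index."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    parts = []
    for i in range(0, len(urls), MAX_URLS):
        name = f"sitemap-{i // MAX_URLS + 1}.xml"
        body = "".join(
            f"  <url><loc>{escape(u)}</loc></url>\n"
            for u in urls[i:i + MAX_URLS]
        )
        (out / name).write_text(
            '<?xml version="1.0" encoding="UTF-8"?>\n'
            '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
            f"{body}</urlset>\n"
        )
        parts.append(name)
    # Tie the parts together so only one URL needs submitting.
    index = "".join(
        f"  <sitemap><loc>{base}/{p}</loc></sitemap>\n" for p in parts
    )
    (out / "sitemap-index.xml").write_text(
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
        f"{index}</sitemapindex>\n"
    )
    return parts
```

You then submit just `sitemap-index.xml` in Webmaster Tools and it picks up the parts.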
What finally solved the issue was to spot the exact areas that were left out and identify the "head" of each one, i.e. the handful of pages that act as a gateway for the entire module. I then built a few outside links pointing directly to those pages, and a few more pointing to the main internal pages of the modules that were left out.
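To spot the exact areas that are being left out on a site that size, a cheap trick is to diff the full URL list (e.g. from the sitemap) against the URLs Googlebot actually requested in the logs, then aggregate the misses by path prefix. A minimal sketch, assuming you already have both lists as plain URLs:

```python
from collections import Counter
from urllib.parse import urlparse

def uncrawled_sections(all_urls, crawled_urls, depth=2):
    """Count never-crawled URLs per path prefix to find neglected areas."""
    missed = set(all_urls) - set(crawled_urls)
    counts = Counter()
    for url in missed:
        segments = [s for s in urlparse(url).path.split("/") if s]
        prefix = "/" + "/".join(segments[:depth])
        counts[prefix] += 1
    # Most-missed sections first: these are your gateway candidates.
    return counts.most_common()
```

The sections at the top of that list are where the gateway pages (and the links pointing at them) should go.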
Those pages gained authority fast, and within only a few days we spotted Googlebot staying overnight.
All pages are now indexed and even ranking well.
If you can spot some entry pages that can lead the spider to the rest, you can try this approach; it should work for you too.
As for the links, I started with social network links, a few posts with links on the site's blog (so, internal links), and only a couple of outside links: articles containing links to those pages. Overall I think we are talking about 20-25 social network links (Twitter, Facebook, Digg, StumbleUpon and Delicious), about 10 blog posts published over a 2-3 day span, and about 10 articles in outside sources.

Since you have a much larger page count, you will probably need more gateways, and that means more links. But overall it's not a very time-consuming exercise, and it can solve your issue... hopefully.