Moz Q&A is closed.
After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we’re not completely removing the content - many posts will remain viewable - we have locked both new posts and new replies.
After Server Migration - Crawling Gets Slow and Content Changes on Dynamic Pages Are Not Getting Updated
-
Hello,
I performed a server migration two days ago.
All is well - traffic has moved to the new servers.
However, with the previous host, a newly submitted article would get indexed within minutes. Now, even after submitting a page for indexing, it takes quite a while to appear in search engines, and on pages where content is updated daily, the changes are not being reflected despite being submitted for indexing.
The site is http://www.mycarhelpline.com
I have checked robots.txt, meta tags, and URL structure - everything remains intact, and no unknown errors are reported through Google Webmaster Tools.
Could someone advise: is this normal after a name server and IP address change - something I can expect to correct itself automatically - or am I missing something?
Kindly advise. Thanks.
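In case it helps anyone checking the same thing - here is a minimal sketch (Python; the expected IP below is a placeholder, not the site's actual address) to confirm the DNS change has fully propagated:

```python
import socket

# Placeholder value - substitute the new server's actual IP address
EXPECTED_NEW_IP = "203.0.113.10"
DOMAIN = "www.mycarhelpline.com"

# Resolve the domain the way a crawler would and compare with the new IP
resolved_ip = socket.gethostbyname(DOMAIN)
if resolved_ip == EXPECTED_NEW_IP:
    print(f"{DOMAIN} -> {resolved_ip}: DNS points at the new server")
else:
    print(f"{DOMAIN} -> {resolved_ip}: old DNS records may still be cached")
```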
-
Thanks, all, for the inputs.
I searched and found this note from Google describing what may happen after a server migration:
https://support.google.com/webmasters/answer/6033412?hl=en&ref_topic=6033383
A note about Googlebot’s crawl rate
It’s normal to see a temporary drop in Googlebot’s crawl rate immediately after the launch, followed by a steady increase over the next few weeks, potentially to rates that may be higher than before the move.
This fluctuation occurs because we determine crawl rate for a site based on many signals, and these signals change when your hosting changes. As long as Googlebot does not encounter any serious problems or slowdowns when accessing your new serving infrastructure, it will try to crawl your site as fast as necessary and possible.
@Thompson Paul - appreciated - yes, it's a good suggestion; I will look into including a sitemap.
-
The one thing you haven't mentioned, which is likely to be most critical for this issue, is your XML sitemap. I couldn't find it at any of the standard URLs (/sitemap.xml and /sitemap_index.xml both lead to generic 404 pages). Also, there's no sitemap directive in your robots.txt.
Given that the sitemap.xml is the clearest and fastest way for you to help Google to discover new content, I'd strongly recommend you get a clean, dynamically updated sitemap.xml implemented for the site, submit it through both Google and Bing webmaster tools, and place the proper pointer to it in your robots.txt file.
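For illustration, a minimal setup might look like this (a sketch only - the URL and date below are placeholders, not the site's actual pages). First, the pointer in robots.txt:

```
User-agent: *
Sitemap: http://www.mycarhelpline.com/sitemap.xml
```

And a minimal sitemap.xml with one entry per page, where <lastmod> is updated whenever the content changes:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.mycarhelpline.com/example-new-article</loc>
    <lastmod>2015-06-18</lastmod>
    <changefreq>daily</changefreq>
  </url>
</urlset>
```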
Once it's been submitted to the webmaster tools, you'll be able to see exactly how frequently it's being discovered/crawled.
Hope that helps?
Paul
-
The good news is, this actually sounds pretty normal. 24 hours to reflect changes in content is better than many sites manage. I can't account for why it went from 4 hours to 24, but I'd say this is still in the range of "good".
-
@Cyrus
Certain pages.
Earlier it was less than 4 hours, but now it's taking around 24 hours - based on the data updated in search engine results that I just found today; I had thought it was not getting updated at all.
Fetch & Render - no issues; it's submitting fine. No errors in GWT.
Ran a speed test - there's no noticeable improvement in loading time, but no unnecessary increase in page size or load time either.
I was wondering - could this be a temporary phenomenon where the crawl speed is slow and will later come back to normal? It's been less than 72 hours since the server was migrated.
Google Search Console Crawl Stats was last updated on 16th June, so I am unable to figure it out from there. No errors in Webmaster Tools.
-
Howdy,
A couple of questions:
1. Are there certain pages that aren't getting updated, or is it your entire site?
2. How often are changes in the pages reflected in Google's cache? Is it a case where Google simply displays old/outdated information all the time? Finally, have you done a "Fetch and Render" check in Google Webmaster Tools?
-
@Anirban
Thanks - no errors in GWT. Loading time - I could not observe a change; as tested with GTmetrix, it is well within limits. Pages with dynamic content are not getting updated in search engines, which earlier was happening almost immediately.
-
It should not be slow. Check your page load time - if pages take longer to load, Googlebot may bounce off. Also check your Webmaster Tools to see if any server errors are showing.
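As a rough way to sanity-check this yourself, here is a minimal sketch (Python; the URL is just an example page) that reports the HTTP status code and roughly the load time a bot would see:

```python
import time
import urllib.request

# Example page - substitute any URL you want to test
URL = "http://www.mycarhelpline.com/"

start = time.time()
# Fetch the page, recording the status code and response size
with urllib.request.urlopen(URL, timeout=30) as response:
    status = response.status
    size = len(response.read())
elapsed = time.time() - start

# A 5xx status or a multi-second response time is the kind of problem
# that can make Googlebot back off its crawl rate
print(f"HTTP {status} - {size} bytes fetched in {elapsed:.2f} seconds")
```

If that shows slow responses or server errors, it would line up with the drop in crawl rate.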