Is Googlebot the slowest bot?
-
This morning, I wrote a breaking news story about a "Wolf of Wall Street".
It was published at 12:05:49
Googlebot, which used to be on my site within a minute or less, didn't bother to visit for 53 minutes. And now, 32 minutes later, even though the page has been crawled, this story doesn't even show up in Google search.
Except that it is in the top 10 stories today, at #2, so the headline appears on every page of the site. Every page that was crawled today, from about 10 minutes after the story was published, contains that text, so those pages do show up. EINnews, which also crawls our pages, is listed for the headline text.
Finally, the page turns up in search results 4 hours later, and the result says that it is 4 hours old.
Does anyone else see this slow motion mode?
If you do see this, what is wrong with the site that causes this recalcitrant behavior?
The headline of the story is "A 'Wolf of Wall Street' Raided By FBI In Florida"
and the link is http://shar.es/1bW5Sw
-
Your gap will disappear if you get back into the News index. Best of luck!
-
Thank you for looking, Ryan.
Google ignores our news sitemap because they removed us from Google News, for a reason they didn't disclose, about 2 years ago.
So I haven't been checking on that sitemap.
I've been trying to find time to change over from my custom-built CMS to WordPress, and thought I'd reapply after I did that, but I'm 6 months behind my schedule to get that done (I had problems with the page design and the data conversion).
Yes, we're much smaller than the others, but 4 hours for a page to show up in the index must mean something else is going on, and I can't work out what that could be.
I'll see if I can get my redesign back on track, and that will make the site more mobile-friendly.
Have you seen anything like that 4 hour gap before? I will track the next few stories I publish too, and report back.
-
Hi Alan,
Are you still pushing Google News-tagged XML sitemaps when publishing articles? Looking at the ones currently on your site, I don't see any new ones referenced since October 2014, and it looks like there's a lot of current mapping that could be updated. In general, the site seems a little low in the loop of the major news cycle and would have a lower crawl/index priority on big stories, behind the CNNs, Foxes, and Yahoos of the world.
It also doesn't seem to be in the Google News index: https://encrypted.google.com/search?hl=en&q=site%3Anewsblaze.com#hl=en&tbm=nws&q=wolf+of+wall+street+site:newsblaze.com
Google's guidelines for Google News publisher inclusion are straightforward and fairly thorough: https://support.google.com/news/publisher/answer/40787?hl=en If you get included via those means, you should see your news articles appearing very quickly.
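For reference, a minimal Google News sitemap entry looks roughly like this. This is a sketch based on Google's published news-sitemap format; the URL path, timestamp, and timezone offset here are placeholders drawn from the thread, not the site's real values:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:news="http://www.google.com/schemas/sitemap-news/0.9">
  <url>
    <loc>http://newsblaze.com/story/example-article.html</loc>
    <news:news>
      <news:publication>
        <news:name>NewsBlaze</news:name>
        <news:language>en</news:language>
      </news:publication>
      <news:publication_date>2015-05-20T12:05:49-04:00</news:publication_date>
      <news:title>A 'Wolf of Wall Street' Raided By FBI In Florida</news:title>
    </news:news>
  </url>
</urlset>
```

The key point for fast indexing is that the publication date is precise and the sitemap is regenerated (and pinged) the moment an article goes live, rather than on a schedule.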
Related Questions
-
SEO: High intent organic revenue down in Europe
Our team is stumped and we are hoping some of you might have some insight! We are seeing a drop in Europe organic revenue and we can't seem to figure out what the core cause of the problem is. What's interesting is that high-intent traffic is increasing across the business, as is organic-attributed revenue. And in Europe specifically, other channels appear to be doing just fine. This seems to be a Europe high-intent SEO problem.

What we have established:
- Revenue was at a peak in Q4 2017 and Q1 2018
- Revenue dipped in mid-to-late Q2 2018 and again in Q4 2018, where it has stayed low since
- Organic traffic has gone up, conversion rate has gone down, purchases have gone down
- Paid search traffic has gone up, conversion rate has gone down slightly, submissions have gone up
- Currency changes are minimal
- We cannot find any site load issues

What we know happened during this time frame (January 2018 onward):
- Updates to the website (homepage layout, some text changes) at the end of April 2018
- GDPR took effect at the end of May 2018
- Google Analytics stopped being able to track Firefox

Europe is a key market for us and we can't figure out what might be causing this to happen (again, only in Europe). Beyond GDPR and the changes we've made on our site, is there anything else major that we're missing that could be causing this? Or does anyone have any insights as to where we should look? Thank you in advance!
Algorithm Updates | | RS-Marketing0 -
Our Site's Organic Traffic Went Down Significantly After The June Core Algorithm Update, What Can I Do?
After the June Core Algorithm Update, the site suffered a loss of about 30-35% of traffic. My suggestions to try to get traffic back up have been to add metadata (since the majority of our content is lacking it), add internal linking where possible, add keywords to image alt text, and expand thin content. I know that from a technical standpoint there are a lot of fixes we can implement, but I do not want to suggest anything as we are onboarding an SEO agency soon. Last week, I saw that traffic for the site went back to "normal" for one day and then dipped 30% the next day. Despite my efforts, traffic has been up and down, but organic traffic has dipped overall this month. I have been told by my company that I am not doing a good job of getting the numbers back up, and have been given a warning stating that I need to increase traffic by 25% by the end of the month and keep it steady, or else. Does anyone have any suggestions? Is it realistic and/or possible to reach that goal?
Algorithm Updates | | NBJ_SM2 -
Mobile Usability Issues after Mobile-First Indexing
Hi All, a couple of months ago we got an email from Google telling us "Mobile-first indexing enabled for https://www.impactsigns.com/". I ran the test on Moz, and mobile usability shows 100%. Last week we got an email from Google: "New mobile usability issues detected for impactsigns.com". Top new issues found, ordered by number of affected pages:

- Content wider than screen
- Clickable elements too close together

I cannot seem to figure out what those issues are, as all content is visible. How important are these 2 issues, now that we are on the mobile-first side?
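Not specific to this site, but a common cause of the "content wider than screen" flag is a missing or overridden viewport meta tag, or fixed-width elements wider than the viewport. A generic sketch of the usual fixes, assuming you control the page's head and styles:

```html
<!-- Without this tag, mobile browsers render at a desktop width (~980px)
     and Google flags "content wider than screen". -->
<meta name="viewport" content="width=device-width, initial-scale=1">

<!-- Fixed-width images and containers can also trigger the warning;
     capping them at the viewport width is the usual remedy. -->
<style>
  img, .container { max-width: 100%; }
</style>
```

For "clickable elements too close together", the usual culprit is small tap targets (links or buttons under roughly 48px of tappable area) packed into menus or footers.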
Algorithm Updates | | samoos0 -
What does it mean to build a 'good' website?
Hi guys. I've heard a lot of SEO professionals, Google (and Rand in a couple of Whiteboard Fridays) say it's really important to build a 'good' website if you want to rank well. What does this mean in more practical terms? (Context: I've found some sites rank much better than they 'should' based on the competition. However, when I built my own site (well-optimized on-page, based on thorough keyword research), it was nowhere to be found (not even in the top 50 after I'd 'matched' the backlink profile of others on page 1). I can only put this down to 'good quality website' signals lacking in the latter example. I'm not a web developer, so the website was a pretty basic WordPress site.)
Algorithm Updates | | isaac6630 -
Is anyone else's ranking jumping?
Rankings have been jumping across 3 of our websites since about 24 October. Is anyone seeing something similar? For example, a keyword jumps from position 5 to 20 on one day, then back to 5 for 3 days, and then back to 20 for a day. I'm trying to figure out whether it's algorithm-based or whether my rank checker has gone mad. I can't replicate the same results if I search incognito or in a new browser; everything always looks stable in the SERPs when I do the search myself.
Algorithm Updates | | Marketing_Today0 -
Log File Analyzer Only Showing Spoofed Bots and No Verified Bots
Question for you guys: After analyzing some crawl data in Search Console in the sitemap section, I noticed that Google consistently isn't indexing about 3/4 of the client sites I work on that all use the same content management system. I began to wonder if maybe Google (and others) have a hard time crawling certain parts of the sites consistently, as finding a pattern here could lead me to investigate whether there's a CMS problem. To research this, I started using a log file analyzer (Screaming Frog's version) for some of those clients. After loading the files, I noticed that none of the crawl activity logged by the servers is considered verified. I input one month's worth of log files, but when I switch the program to show only verified bots, all data disappears. Is it possible for a site not to have any search engines crawling it for a whole month? Given my experience, that seems unlikely, particularly since we've been submitting crawl requests. I know that doesn't guarantee a crawl, but it seems odd that it's never happening for any search engines across the board. Context that might be helpful: I did check technical settings, and the sites are crawlable. The sites do appear in search but seem to be losing organic search traffic. Thanks for any help you can provide!
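For what it's worth, the standard way to separate real Googlebot hits from spoofed ones (and what log analyzers do under the hood) is: reverse-DNS the requesting IP, check that the hostname ends in googlebot.com or google.com, then forward-resolve that hostname and confirm it maps back to the same IP. A minimal sketch in Python; the function names are my own, not part of any tool:

```python
import socket

# Hostname suffixes Google documents for its crawlers; a spoofed
# "Googlebot" user agent reverse-resolves to some other domain.
GOOGLE_SUFFIXES = (".googlebot.com", ".google.com")

def is_google_hostname(hostname):
    """Pure string check: does a reverse-DNS hostname belong to Google?"""
    return hostname.endswith(GOOGLE_SUFFIXES)

def verify_googlebot(ip):
    """Reverse-DNS the IP, then forward-confirm the name maps back to it."""
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)          # reverse lookup
        if not is_google_hostname(hostname):
            return False
        return ip in socket.gethostbyname_ex(hostname)[2]  # forward confirm
    except (socket.herror, socket.gaierror):
        return False
```

If Screaming Frog marks every hit as unverified, it can mean the reverse-DNS lookups are failing (for example, no network access from the machine running the analyzer), not necessarily that no real bot visited.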
Algorithm Updates | | geodigitalmarketing0 -
I'm Pulling My Hair Out! - Duplicate Content Issue on 3 Sites
Hi, I'm an SEO intern trying to solve a duplicate content issue on three wine retailer sites. I have read up on the Moz blog posts and other helpful articles that were flooded with information on how to fix duplicate content. However, I have tried using canonical tags for duplicates and redirects for expiring pages on these sites, and it hasn't fixed the duplicate content problem. My Moz report indicated that we have thousands of duplicate content pages. I understand that it's a common problem among e-commerce sites, and the way we create landing pages and apply dynamic search results pages kind of conflicts with our SEO progress. Sometimes we'll create landing pages with the same URLs as an older landing page that expired. Unfortunately, I can't get around this problem, since this is how customer marketing and recruitment manage their offers and landing pages. Would it be best to nofollow these expired pages or redirect them? I also tried self-referencing canonical tags, and canonical tags that point to the higher-authority page, on search results pages; even though that worked for some pages on the site, it didn't work for a lot of the other search results pages. Is there something we can do to these search results pages that will let Google understand that they are original pages? There are a lot of factors that I can't change, and I'm kind of concerned that the three sites won't rank as well and will also drive traffic that won't convert on the site. I understand that Google won't penalize your sites for duplicate content unless it's spammy. So if I can't fix these errors (since the company I work for conducts business in a way where we won't ever run out of duplicate content), is it worth moving on to other SEO priorities like keyword research and on/off-page optimization? Or should we really concentrate on fixing these technical issues before doing anything else? I'm curious to know what you think. Thanks!
Algorithm Updates | | drewstorys0 -
Googlebot soon to be executing JavaScript - Should I change my robots.txt?
This question came to mind as I was pursuing an unrelated issue and reviewing a site's robots.txt file. Currently this is a line item in the file: Disallow: https://* According to a recent post on the Google Webmaster Central Blog ([Understanding web pages better](http://googlewebmastercentral.blogspot.com/2014/05/understanding-web-pages-better.html)), Googlebot is getting much closer to being able to properly render JavaScript. Pardon some ignorance on my part because I am not a developer, but wouldn't this require Googlebot to be able to execute JavaScript? If so, I am concerned that disallowing Googlebot from the https:// versions of our pages could interfere with crawling and indexation, because as soon as an end user clicks the "checkout" button on our view-cart page, everything on the site flips to https://. If this were disallowed, would Googlebot stop crawling at that point and simply leave because all pages were now https://? Or am I just waaayyyy overthinking it? Wouldn't be the first time! Thanks all!
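For what it's worth, `Disallow` values in robots.txt are matched against URL paths (like `/cart`), not protocols, so per the robots.txt convention a line like `Disallow: https://*` shouldn't block anything at all. A quick sketch with Python's standard-library parser illustrates this; the example URL is hypothetical:

```python
from urllib import robotparser

# "Disallow" rules are compared against the path component of a URL,
# so a value starting with "https://" never matches a real request path.
rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: https://*",
])

# The malformed rule blocks nothing: Googlebot may still fetch the page.
print(rp.can_fetch("Googlebot", "https://www.example.com/cart"))  # prints: True
```

The practical upshot (under that reading of the spec) is that the rule is a no-op rather than a crawl blocker; to control https crawling you'd serve an appropriate robots.txt on the https host itself, since crawlers fetch robots.txt per protocol and host.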
Algorithm Updates | | danatanseo0