Does Bing's Last Crawl Date Update?
-
Do those of you using Bing Webmaster Tools find that your sitemap's last crawl date gets updated? I noticed mine never updates - it just shows the date of the last time I submitted it. It seems like it should update daily, since Bing appears to crawl my site daily. See attached image.
-
Thanks for the information. I don't know how to change those settings on my sitemap, and I don't think I have access to do so with my current website provider. I'll send this information to them and see if they know what the problem is. Google seems to update daily, so there must be a setting that Bing doesn't like.
-
Hi Kyle
Take a look at the Build a sitemap page from Google Webmaster Tools Help, particularly "Sitemap tag definitions".
There are some tags you can take advantage of, such as <lastmod> and <changefreq>, that tell crawlers when and how often your pages change.
Take a look at that page and make sure everything is formatted properly.
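For reference, a bare-bones sitemap entry using those tags might look something like this (the URL, date and values are just placeholders you'd swap for your own):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com/some-page.html</loc>
    <!-- lastmod: the date the page last changed -->
    <lastmod>2013-04-30</lastmod>
    <!-- changefreq: a hint about how often the page changes; crawlers may ignore it -->
    <changefreq>daily</changefreq>
    <!-- priority: relative importance within your own site, from 0.0 to 1.0 -->
    <priority>0.8</priority>
  </url>
</urlset>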
Hope this helps a bit as well - good luck!
-
I increased the crawl rate for certain hours of the day when traffic is slower - I'll see if that does anything. It seems as though Bing is crawling my site, just not my sitemap, for some reason.
-
Hi Kyle
Did you take a look at the Crawl Control in Bing Webmaster Tools? This has been very helpful for me.
My sitemaps look pretty up to date - the last crawl being 4/30.
Try that and let me know. Hope this helps!
Related Questions
-
Huge number of crawl anomalies and 404s - non-existent URLs
Hi there,
Our site was redesigned at the end of January 2020. Since the new site was launched we have seen a big drop in impressions (50-60%) and also a big drop in total and organic traffic (again 50-60%) when compared to the old site. I know in the current climate some businesses will see a drop in traffic, however we are a tech business and some of our core search terms have increased in search volume as a result of remote working.
According to Search Console there are 82k URLs excluded from coverage - the majority of these are classed as 'crawl anomaly' and there are 250+ 404s. Almost all of the URLs are non-existent; they have our root domain with a string of random characters on the end. Here are a couple of examples:
root.domain.com/96jumblestorebb42a1c2320800306682
root.domain.com/01sportsplazac9a3c52miz-63jth601
root.domain.com/39autoparts-agency26be7ff420582220
root.domain.com/05open-kitchenaf69a7a29510363
Is this a cause for concern? I'm thinking that all of these random fake URLs could be preventing genuine pages from being indexed, or they could be having an impact on our search visibility. Can somebody advise please? Thanks!
Technical SEO | | nicola-10 -
Noindex and Crawl Budget
Hello, if we noindex pages, will it improve crawl budget? For example, pages like these:
https://x-z.com/2012/10/
https://x-y.com/2012/06/
https://x-y.com/2013/03/
https://x-y.com/2019/10/
https://x-y.com/2019/08/
Should we delete/redirect such pages? Thanks
Technical SEO | | Johnroger0 -
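For reference, noindexing a page is usually done with a robots meta tag in the page's <head> (or an equivalent X-Robots-Tag HTTP header); a minimal example:

<!-- tells search engines not to index this page -->
<meta name="robots" content="noindex">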
Last Part Breadcrumb Trail Active or Non-Active
Breadcrumbs have been debated quite a bit in the past. Some claim that the last part of the breadcrumb trail should be non-active to inform users they have reached the end - in other words, do not link the current page to itself. On the other hand, that portion of the breadcrumb won't be displayed in the SERPs, and if it were, it might lead to a higher CTR. For example, on www.website.com/fans/panasonic-modelnumber, "panasonic-modelnumber" would not be active as part of the breadcrumb. What is your take?
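To illustrate the non-active option, here is a sketch of the markup (the trail labels and URLs are hypothetical):

<!-- ancestor pages are links; the current page is plain text, not linked to itself -->
<a href="http://www.website.com/">Home</a> &gt;
<a href="http://www.website.com/fans/">Fans</a> &gt;
<span>Panasonic Model Number</span>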
Technical SEO | | CallMeNicholi0 -
Salvaging links from WMT “Crawl Errors” list?
When someone links to your website, but makes a typo while doing it, those broken inbound links will show up in Google Webmaster Tools in the Crawl Errors section as "Not Found". Often they are easy to salvage by just adding a 301 redirect in the htaccess file. But sometimes the typo is really weird, or the link source looks a little scary, and that's what I need your help with.
First, let's look at the weird typo problem. If it is something easy, like they just lost the last part of the URL (such as www.mydomain.com/pagenam), then I fix it in htaccess this way:
RewriteCond %{HTTP_HOST} ^mydomain.com$ [OR]
RewriteCond %{HTTP_HOST} ^www.mydomain.com$
RewriteRule ^pagenam$ "http://www.mydomain.com/pagename.html" [R=301,L]
But what about when the last part of the URL is really screwed up, especially with non-text characters, like these?
www.mydomain.com/pagename1.htmlsale
www.mydomain.com/pagename2.htmlhttp://
www.mydomain.com/pagename3.html"
www.mydomain.com/pagename4.html/
How is the htaccess RewriteRule typed up to send these oddballs to the individual pages they were supposed to go to, without the typo?
Second, is there a quick and easy method or tool to tell us if a linking domain is good or spammy? I have incoming broken links from sites like these:
www.webutation.net
titlesaurus.com
www.webstatsdomain.com
www.ericksontribune.com
www.addondashboard.com
search.wiki.gov.cn
www.mixeet.com
dinasdesignsgraphics.com
Your help is greatly appreciated. Thanks!
Greg
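One possible approach for two of the oddballs above, sketched with the same placeholder domain - the idea is to escape regex metacharacters (such as the dot) in the pattern and anchor it with ^ and $ so only the typo'd URL matches:

RewriteEngine On
# junk text appended after the extension: /pagename1.htmlsale -> /pagename1.html
RewriteRule ^pagename1\.htmlsale$ http://www.mydomain.com/pagename1.html [R=301,L]
# stray trailing slash: /pagename4.html/ -> /pagename4.html
RewriteRule ^pagename4\.html/$ http://www.mydomain.com/pagename4.html [R=301,L]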
Technical SEO | | GregB1230 -
CDN Being Crawled and Indexed by Google
I'm doing an SEO site audit, and I've discovered that the site uses a Content Delivery Network (CDN) that's being crawled and indexed by Google. Two sub-domains from the CDN are being crawled and indexed, and a small number of organic search visitors have come through them. So the CDN-based content is out-ranking the root domain in a small number of cases.
It's a huge duplicate content issue (tens of thousands of URLs being crawled) - what's the best way to prevent the crawling and indexing of a CDN like this? Exclude via robots.txt?
Additionally, the use of relative canonical tags (instead of absolute) appears to be contributing to this problem as well. As I understand it, these canonical tags are telling the SEs that each sub-domain is the "home" of the content/URL. Thanks! Scott
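If robots.txt is the route chosen, a minimal sketch served at the CDN sub-domain's own root (cdn.example.com is a placeholder for the real CDN host) would be:

# https://cdn.example.com/robots.txt - blocks all crawling of this host
User-agent: *
Disallow: /

Keep in mind this only stops further crawling of the CDN host; switching the relative canonical tags to absolute URLs that point at the root domain is what tells search engines where the content really lives.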
Technical SEO | | Scott-Thomas0 -
How to handle Not found Crawl errors?
I'm using Google Webmaster Tools and can see the "Not found" crawl errors. I have set up a custom 404 page for all broken links; you can see it here: http://www.vistastores.com/404. But I have a question about it: will I also need to set up 301 redirects for the broken links found in Google Webmaster Tools?
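For context, the two pieces look something like this in an Apache .htaccess file (assuming Apache; the old and new paths are hypothetical):

# serve the existing custom 404 page for URLs that are genuinely gone
ErrorDocument 404 /404
# 301-redirect a specific broken URL that has a live replacement
Redirect 301 /old-broken-page http://www.vistastores.com/replacement-page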
Technical SEO | | CommercePundit0 -
Google Fresh Update
Here's a question: if you put the date in your URL (a common WordPress permalink format), would that now be seen by Google as 'fresh' content - part of their new update?
Technical SEO | | pauledwards0