Crawl completed but still says meta description missing
-
Before the last crawl I went through and made sure that none of the meta descriptions were missing. However, the last crawl on the 26th of July still lists the same pages, and shows that they were crawled as well.
Any ideas on why they may still be showing as missing?
-
Hey There,
Our crawler should pick up any changes that were made before the most recent crawl began, so this does seem a bit strange. I am going to run your site through our Crawl Test tool to see if the pages are still being reported with the Missing Meta Description error, but I need to know which campaign you are having issues with. If you don't want to include the campaign name in this public forum, you can just let me know its initials. Also, it would be helpful if you could let me know when the meta descriptions were added to your site, so I can check whether the crawl had already begun by that time.
I look forward to hearing back soon.
Chiaryn
Help Team Ninja
-
Yes, I thought it was a bit strange. OK, great, thanks, I will send them an email!
-
Ideally this should not be the case. You can either wait for the next crawl or email the Moz Help Center at help@moz.com.
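For anyone hitting the same mismatch, it can help to check what the raw HTML of a page actually serves, independently of any crawler's report. A minimal sketch using only the Python standard library (the sample HTML below is invented; in practice you would fetch the live page, e.g. with urllib.request, and feed its body in):

```python
from html.parser import HTMLParser

class MetaDescriptionParser(HTMLParser):
    """Collects the content of <meta name="description"> from raw HTML."""
    def __init__(self):
        super().__init__()
        self.description = None

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            a = dict(attrs)
            if a.get("name", "").lower() == "description":
                self.description = a.get("content")

def meta_description(html):
    """Return the meta description found in the HTML, or None if missing."""
    p = MetaDescriptionParser()
    p.feed(html)
    return p.description

# Illustrative HTML only; a real check would fetch the reported page.
page = '<html><head><meta name="description" content="Frog care tips."></head></html>'
print(meta_description(page))                          # Frog care tips.
print(meta_description("<html><head></head></html>"))  # None
```

If the tag is present in the raw response but a crawl still flags it, the timing point above (whether the crawl began before the fix) is the usual explanation.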
Related Questions
-
GoogleBot still crawling HTTP/1.1 years after website moved to HTTP/2
Whole website moved to the https://www. HTTP/2 version 3 years ago. When we review log files, it is clear that, for the home page, GoogleBot continues to only access via the HTTP/1.1 protocol.
- Robots file is correct (simply allowing all and referring to the https://www. sitemap)
- Sitemap is referencing https://www. pages, including the homepage
- Hosting provider has confirmed the server is correctly configured to support HTTP/2 and provided evidence of access via HTTP/2 working
- 301 redirects set up for non-secure and non-www versions of the website, all to the https://www. version
- Not using a CDN or proxy
- GSC reports the home page as correctly indexed (with the https://www. version canonicalised) but does still have the non-secure version of the website as the referring page in the Discovery section
- GSC also reports the homepage as being crawled every day or so
Totally understand it can take time to update the index, but we are at a complete loss to understand why GoogleBot continues to only go through HTTP/1.1, not HTTP/2. A possibly related issue, and of course what is causing concern, is that new pages of the site seem to index and perform well in the SERPs... except the home page. It never makes it to page 1 (other than for the brand name) despite rating multiples higher in terms of content, speed etc. than other pages, which still get indexed in preference to the home page. Any thoughts, further tests, ideas, direction or anything will be much appreciated!
Technical SEO | | AKCAC1 -
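When chasing a question like this, scripting the log review makes the evidence concrete and repeatable. A rough sketch for tallying which protocol versions Googlebot requests used in combined-format access logs (the log lines and IPs below are invented; a real audit should also verify Googlebot IPs via reverse DNS):

```python
import re
from collections import Counter

# Combined-log-format samples, invented for illustration.
LOG_LINES = [
    '66.249.66.1 - - [10/May/2024:06:25:01 +0000] "GET / HTTP/1.1" 200 5120 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"',
    '66.249.66.1 - - [10/May/2024:06:25:02 +0000] "GET /about HTTP/2.0" 200 2048 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"',
    '203.0.113.5 - - [10/May/2024:06:25:03 +0000] "GET / HTTP/2.0" 200 5120 "-" "Mozilla/5.0"',
]

# Pull the protocol version out of the quoted request line.
REQUEST_RE = re.compile(r'"(?:GET|POST|HEAD) \S+ (HTTP/[\d.]+)"')

def googlebot_protocols(lines):
    """Tally HTTP protocol versions used by requests whose UA claims Googlebot."""
    counts = Counter()
    for line in lines:
        if "Googlebot" not in line:
            continue
        m = REQUEST_RE.search(line)
        if m:
            counts[m.group(1)] += 1
    return counts

print(googlebot_protocols(LOG_LINES))
```

Grouping the tally per URL path (not shown) would confirm whether the HTTP/1.1 traffic really is limited to the home page.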
Crawl depth and www
I've run a crawl on a popular amphibian-based tool; just wanted to confirm: should http://www.homepage be at crawl depth 0 or 1? The audit shows http://homepage at level 0 and http://www.homepage at level 1, through a redirect. Thanks
Technical SEO | | Focus-Online-Management0 -
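One way to think about this: if the crawl starts at the non-www URL, that start URL sits at depth 0, and the www page it 301s to is reached one hop later. A toy model of that counting, using the question's placeholder URLs (real crawlers each have their own depth conventions, so treat this as illustration only):

```python
def resolve(url, redirects, max_hops=10):
    """Follow a chain of redirects and return (final_url, hops_taken)."""
    hops = 0
    seen = {url}
    while url in redirects:
        url = redirects[url]
        hops += 1
        if url in seen or hops > max_hops:
            raise ValueError("redirect loop or chain too long")
        seen.add(url)
    return url, hops

# Hypothetical setup mirroring the question: the non-www root 301s to www.
redirects = {"http://homepage/": "http://www.homepage/"}

print(resolve("http://homepage/", redirects))      # ('http://www.homepage/', 1)
print(resolve("http://www.homepage/", redirects))  # ('http://www.homepage/', 0)
```

Starting the crawl from the www URL directly would put it at depth 0, which is usually the cleaner setup.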
Meta tags in Single Page Apps
Since the deprecation of the AJAX Crawling Scheme back last October, I am curious as to when Googlebot actually reads meta tag information from a page. We have a website at whichledlight.com that is implemented using emberjs. Part of the site is our results pages (i.e. gu10-led-bulbs). This page updates the meta and link tags in the head of the document for things like canonicalisation and robots, but can only do so after the page finishes loading and the JavaScript has been run. When the AJAX crawling scheme was still in place, we were able to prerender these pages (including the modified meta and link tags) and serve them to Googlebot. Now Googlebot no longer uses these prerendered snapshots and is instead sophisticated enough to load and run our site. So the question I have is: does Googlebot read the meta and link tags downloaded from the original response, or does it wait until the page finishes rendering before reading them (including any modifications that have been performed on them)?
Technical SEO | | TrueluxGroup1 -
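A useful first step with any SPA is to audit what the initial, pre-JavaScript response contains, since that is what a non-rendering fetch sees. A minimal sketch that extracts the robots meta and canonical link from raw HTML (the sample shell markup is invented; feed in the body of a plain HTTP fetch of your own page):

```python
from html.parser import HTMLParser

class HeadTagParser(HTMLParser):
    """Records meta robots and link canonical as served in the raw HTML,
    i.e. before any JavaScript has had a chance to modify the head."""
    def __init__(self):
        super().__init__()
        self.robots = None
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name", "").lower() == "robots":
            self.robots = a.get("content")
        elif tag == "link" and a.get("rel", "").lower() == "canonical":
            self.canonical = a.get("href")

def initial_head_tags(html):
    """Return the robots/canonical values present in the unrendered response."""
    p = HeadTagParser()
    p.feed(html)
    return {"robots": p.robots, "canonical": p.canonical}

# An invented SPA shell: generic head tags shipped before JS runs.
shell = '<html><head><meta name="robots" content="noindex"></head><body></body></html>'
print(initial_head_tags(shell))  # {'robots': 'noindex', 'canonical': None}
```

Comparing this output against the rendered DOM in a browser shows exactly which tags depend on JavaScript, which is the gap the question is asking about.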
Crawl Results
How fresh are SEOmoz crawl results? On my report for today I can see that my website's rankings for several keywords, run manually and individually on Google, Yahoo and Bing, are better than the actual SEOmoz report. I have also noticed that the backlink count on the SEOmoz report is significantly lower than counts from other sites and software. Can someone advise me on this?
Technical SEO | | sherohass0 -
How long after google crawl do you need 301 redirects
We have just added 301s when we moved our site. Google has done a crawl and spat back a few errors. How long do I need to keep those 301s in place? I may need to change some. Thanks
Technical SEO | | Paul_MC0 -
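The usual advice is to keep site-move 301s in place indefinitely, since old URLs can be re-crawled and linked to for years. When changing some of them, the main risk is accidentally creating chains or loops; a small sketch for auditing a redirect map before editing it (the URL map below is hypothetical):

```python
def audit_redirects(redirects):
    """Flag chains (A -> B -> C) and loops in a source->target 301 map."""
    problems = {}
    for src in redirects:
        seen, url, hops = {src}, src, 0
        while url in redirects:
            url = redirects[url]
            hops += 1
            if url in seen:
                problems[src] = "loop"
                break
            seen.add(url)
        else:
            # Loop exited normally: no cycle, but flag multi-hop chains.
            if hops > 1:
                problems[src] = f"chain of {hops} hops"
    return problems

# Hypothetical old->new URL map for a site move.
redirects = {
    "/old-a": "/new-a",
    "/old-b": "/interim-b",
    "/interim-b": "/new-b",  # /old-b now takes two hops to resolve
}
print(audit_redirects(redirects))  # {'/old-b': 'chain of 2 hops'}
```

Pointing `/old-b` straight at `/new-b` would clear the flag while keeping every old URL covered.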
Warnings for blocked by meta-robots/meta robots nofollow... how to resolve?
Hello, I see hundreds of notices for blocked by meta-robots/meta robots nofollow, and it appears they are linked to the comments on my site, which I assume I would not want to be crawled. Is this the case, and are these notices actually a positive thing? Please advise how to clear them up if these notices can be potentially harmful for my SEO. Thanks, Talia
Technical SEO | | M80Marketing0 -
Lots of overdynamic URL and crawl errors..
Just wanted some advice. The SEOmoz crawl found about 18,000 errors. The error URLs are mainly URLs like the one below, which seems to be the registration URL with a redirect on it, going back to the product after registration:
http://www.DOMAIN.com/index.php?_g=co&_a=reg&redir=/index.php?_a=viewProd%26productId=3465
We have the following line in the robots file to stop the login page from being crawled:
Disallow: /index.php?act=login
If I add the following, will it stop the error?
Disallow: /index.php?act=reg
Thanks in advance.
Technical SEO | | filarinskis0 -
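Before editing robots.txt, a proposed rule can be tested offline against the actual error URLs with Python's standard-library robotparser (example.com stands in for the real domain, and note that SEOmoz's crawler may not match rules exactly the way a standards-based parser does). One thing this check surfaces: the error URLs in the question use `_a=reg`, not `act=reg`, so a prefix rule on `act=reg` would not match them:

```python
import urllib.robotparser

# The existing rule plus the proposed one, as plain robots.txt lines.
robots_txt = """\
User-agent: *
Disallow: /index.php?act=login
Disallow: /index.php?act=reg
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

# The login URL is blocked: it matches the disallow prefix exactly.
blocked_login = not rp.can_fetch("*", "http://www.example.com/index.php?act=login")

# The crawl-error URL starts /index.php?_g=co&_a=reg..., which does not
# begin with /index.php?act=reg, so prefix matching lets it through.
blocked_error_url = not rp.can_fetch(
    "*", "http://www.example.com/index.php?_g=co&_a=reg&redir=/index.php?_a=viewProd"
)
print(blocked_login, blocked_error_url)  # True False
```

So a rule keyed on the parameters the error URLs actually contain (for example a prefix covering `_a=reg`, if that matches nothing you want crawled) would be worth testing the same way before deploying.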
Fixing Missing MetaTag Errors
Hey all, I just had a crawl test done on my site (created using WordPress) and I received a ton of missing meta tag description errors to fix. The odd thing is, though, that I use the "All in One" SEO tool, and the actual pages and posts on the site do have meta tag descriptions. However, I noticed that for every post an RSS feed is being automatically generated, and this feed is the link missing the meta tag descriptions. Most of the errors display "Comments on" with a /feed at the end of the URL. I am totally clueless on how to resolve these errors as I haven't installed any WP plugins that generate feeds automatically. Has anyone encountered this problem before or know how to fix this?? The site url is http:// GovernmentGrantsAustralia . org I have left spaces above to avoid being a link dropper 🙂 Would really appreciate if anyone can help! FYI: I just found this link after digging through all the Q&A history, however I tried it and am not sure if it has worked, as I still see the errors on my SEOmoz report. The link is: http://www.seomoz.org/qa/view/41413/wordpress-missing-meta-description-tag-comments Hope someone can help me figure this one out! Thanks, Justin
Technical SEO | | justin990
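Worth noting that WordPress comment feeds are XML, not HTML, so they will never carry a meta description; the usual fix is to exclude or noindex the feed URLs rather than try to edit them. As a quick sanity check, a crawl-error export can be split into feed and non-feed URLs to confirm the pattern (the URLs below are invented to match the shape described in the question):

```python
from urllib.parse import urlparse

def split_feed_urls(urls):
    """Separate WordPress-style feed URLs from regular page URLs."""
    feeds, pages = [], []
    for url in urls:
        path = urlparse(url).path.rstrip("/")
        (feeds if path.endswith("/feed") else pages).append(url)
    return feeds, pages

# Invented URLs shaped like the errors described above.
errors = [
    "http://example.org/some-post/feed/",
    "http://example.org/another-post/feed/",
    "http://example.org/a-real-page/",
]
feeds, pages = split_feed_urls(errors)
print(len(feeds), len(pages))  # 2 1
```

If everything flagged turns out to be a feed URL, the real pages are fine and the errors can safely be treated as noise to exclude from the report.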