How do I update the crawl issues & Notifications?
-
I have a list of errors, most relating to missing meta descriptions. I added a meta description to a page, visited the site, and viewed the source, and the meta description is now there. But when I re-analyze issues, the report for that page still lists the missing meta description. How do I get the report to update and recognize that the issue has been fixed?
-
Hi Adam,
Thanks for the question!
As Kimberly pointed out, you actually need to wait for your next scheduled weekly update to see the changes you've made to your site. If you don't want to wait, you can head over to the Research Tools page and use the Crawl Test tool. It will crawl any domain you'd like, up to 3,000 pages, and report the issues it finds in CSV format. Just a heads up that Crawl Tests are cached for 24 hours, so you'll need to wait a bit between crawls to get updated data.
Best,
Sam
Moz Helpster
Related Questions
-
Moz only crawling one page of a campaign, please help
Today I set up a new campaign for a client, but the crawl has only found the home page and says the URL is unavailable. The site is definitely live and the URL is correct. I have set up the campaign 3 times: once with the full address (http://www.), once with www., and once with just the domain name. All three have come back with one page crawled and "unavailable" above the URL. It is picking up the crawl issues on that page and showing Domain Authority, but I don't know why it's not crawling other pages. Prior to setting up the campaign I ran a Site Crawl and Moz found everything then, so I don't know why it isn't now. Please help. Thanks
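A one-page crawl with "unavailable" is often a scheme/host mismatch: the variant registered in the campaign redirects (or fails) while a different variant serves the site. This is not Moz's crawler logic, just a hedged diagnostic sketch for checking where each variant actually lands:

```python
import urllib.request

def url_variants(domain):
    """The four scheme/host combinations a crawler might be pointed at."""
    hosts = (domain, "www." + domain)
    return [f"{scheme}://{host}/" for scheme in ("http", "https") for host in hosts]

def final_destination(url, timeout=10):
    """Follow redirects and report where a URL actually lands (network call)."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return resp.status, resp.url

if __name__ == "__main__":
    for u in url_variants("example.com"):  # substitute the real domain
        try:
            print(u, "->", final_destination(u))
        except Exception as exc:
            print(u, "-> error:", exc)
```

If only one variant returns 200 without redirecting, that is usually the one the campaign should be set up with.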
Problem using the Moz Crawl Test
Hello,
Guys, I need help. I'm getting this message: "Moz was unable to crawl your site on Dec 26, 2017. Our crawler was not able to access the robots.txt file on your site. This often occurs because of a server error from the robots.txt. Although this may have been caused by a temporary outage, we recommend making sure your robots.txt file is accessible and that your network and server are working correctly. Typically errors like this should be investigated and fixed by the site webmaster."
I get this message after creating the campaign, but when I created a new campaign it crawled fine. Can you help me fix the old campaign? Regards.
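Before editing the campaign, it is worth verifying the robots.txt itself: crawlers generally treat a 404 as "no restrictions", but a 5xx response (the "server error" the message describes) makes careful crawlers back off entirely. A minimal check sketch, assuming the site is served over HTTPS:

```python
import urllib.request
import urllib.error

def robots_status(domain, timeout=10):
    """Fetch /robots.txt and return its HTTP status code (network call)."""
    url = f"https://{domain}/robots.txt"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status
    except urllib.error.HTTPError as e:
        return e.code
    except urllib.error.URLError:
        return None  # DNS or connection failure

def robots_reachable(status):
    """2xx means a readable robots.txt; 404 is also fine (treated as
    allow-all); 5xx or no response at all tends to halt crawling."""
    if status is None:
        return False
    return status < 500
```

If `robots_status("yourdomain.com")` intermittently returns 5xx or times out, that matches the outage Moz's message suggests and is a server-side issue rather than a campaign setting.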
Standard Syntax in robots.txt doesn't prevent Moz bot from crawling
A client is getting many false positive site crawl errors for things like duplicate titles and duplicate content on pages that include /tag/ in the URL. An example is https://needquest.com/place_tag/autism-spectrum-disorder/page/4/ To resolve this we have set up a disallow statement in the robots.txt file that says
Disallow: /page/

For some reason this appears not to work, as the site crawl errors continue to list pages like this. Does anyone understand why that is and what we need to do to properly disallow crawling of these pages?
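The likely explanation: standard robots.txt `Disallow` rules match as path *prefixes* from the root, so `Disallow: /page/` only blocks URLs whose path begins with `/page/` and never touches `/place_tag/autism-spectrum-disorder/page/4/`. Matching the real leading segment (e.g. `Disallow: /place_tag/`) works; many modern crawlers also honor `*` wildcards like `Disallow: /*/page/` as an extension, though that is crawler-dependent. A sketch using the stdlib parser, which implements only the original prefix-matching rules:

```python
from urllib.robotparser import RobotFileParser

def blocked(rules: str, url: str) -> bool:
    """True if the given robots.txt rules disallow crawling `url`."""
    rp = RobotFileParser()
    rp.parse(rules.splitlines())
    return not rp.can_fetch("*", url)

url = "https://needquest.com/place_tag/autism-spectrum-disorder/page/4/"

# Prefix match fails: the path starts with /place_tag/, not /page/.
assert blocked("User-agent: *\nDisallow: /page/", url) is False

# Matching the actual leading path segment does block it.
assert blocked("User-agent: *\nDisallow: /place_tag/", url) is True
```

Note the stdlib parser does not understand wildcards, so test any `/*/page/`-style rule against the specific crawler's own documentation rather than this sketch.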
Scheduled update / re-crawl
Can I not perform a manual update? I set up a campaign without GA as I did not have access. Once I got access, I added the GA account to the campaign, but no data is showing; I think it requires an update. Do I really have to wait 7 days? Is that right? Thanks
Campaign.crawl-seed.bad-response
I am trying to set up a new campaign for a website, but I keep getting this error message... campaign.crawl-seed.bad-response 😞 I have no idea what the problem is. Can you tell me what I am supposed to do to fix this? The URL I am trying to set up is www.aboutplcs.com
High Number of Crawl Errors for Blog
Hello All, We have been having an issue with very high crawl errors on websites that contain blogs. Here is a screenshot of one of the sites we are dealing with: http://cl.ly/image/0i2Q2O100p2v. Looking through the links that turn up in the crawl errors, the majority (roughly 90%) are auto-generated by the blog's system. This includes category/tag links, archived links, etc. A few examples:

http://www.mysite.com/2004/10/
http://www.mysite.com/2004/10/17/
http://www.mysite.com/tagname

As far as I know (please correct me if I'm wrong!), search engines will not penalize you for things like this appearing on auto-generated pages. Also, even if they did, I do not believe we can write a unique meta tag for each auto-generated page. Regardless, our client is very concerned to see this high number of errors in the reports, even though we have explained the situation to him. Would anyone have suggestions on how to either 1) tell Moz to ignore these types of errors or 2) adjust the website so that these errors no longer appear in the reports? Thanks so much! Rebecca
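One way to triage for the client in the meantime is to filter the obviously auto-generated archive URLs out of the exported error list before reporting. A hedged sketch matching the date-archive examples above — note the `/tagname` pattern from the question cannot be matched generically, so any tag-prefix rule you add would be an assumption about that particular blog:

```python
import re

# Date archives like /2004/10/ or /2004/10/17/, per the examples above.
DATE_ARCHIVE = re.compile(r"^/\d{4}/\d{2}(/\d{2})?/?$")

def is_auto_archive(path: str) -> bool:
    """True for paths that look like blog-generated date-archive pages."""
    return bool(DATE_ARCHIVE.match(path))

# Keep only "real" pages when reviewing a list of flagged paths:
flagged = ["/2004/10/", "/2004/10/17/", "/about-us/", "/contact/"]
real_issues = [p for p in flagged if not is_auto_archive(p)]
# real_issues -> ["/about-us/", "/contact/"]
```

This only cleans up the report; on the site itself, a noindex directive on archive pages (most blog platforms can apply one template-wide) is the more durable fix.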
Crawl Diagnostics
Hello Experts, today I analyzed one of my websites with Moz and got an issue overview showing 212 issues in total, 37 of them high priority, all pointing to this same URL: http://blogname.blogspot.com/search?updated-max=2013-10-30T17:59:00%2B05:30&max-results=4&reverse-paginate=true. Can anyone help me figure out where this URL comes from and how to remove all the high-priority errors? Also, the site gets an A grade on-page, so why is it not performing well in search engines?
What are the solutions for Crawl Diagnostics?
Hi Mozers, I am pretty new to SEO and wanted to know the solutions for the various errors reported in Crawl Diagnostics; if this question has been asked before, please point me in the right direction. The following queries are specific to my site, and I need help with these two only:

1. Error 404 (about 60 errors): These are all PA 1 links that are no longer on the server. What do I do with them?

2. Duplicate page content and titles (about 5,000): Most of these are automatic URLs generated when someone fills in any info on our website, for example www.abc.fr/signup.php?id=001, then www.abc.fr/signup.php?id=002, and so on. What do I need to do with these URLs, and how?

Any help would be highly appreciated. I have read a lot on the forums about duplicate content but don't know how to implement the advice in my case. Please advise. Thanks in advance. CY
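For the second case, the standard fix is a `rel="canonical"` link on those signup pages pointing at the parameter-free URL, so all the generated variants count as one page. This sketch only shows deriving that canonical target by stripping the auto-generated parameter; the `id` name comes from the example URLs and should be adjusted to the real parameter names:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def canonical_url(url: str, drop=frozenset({"id"})) -> str:
    """Strip auto-generated query parameters ("id" here is taken from
    the example URLs) so generated variants collapse to one address."""
    s = urlsplit(url)
    query = urlencode([(k, v) for k, v in parse_qsl(s.query) if k not in drop])
    return urlunsplit((s.scheme, s.netloc, s.path, query, ""))

# Both generated variants collapse to the same canonical URL:
assert canonical_url("http://www.abc.fr/signup.php?id=001") == "http://www.abc.fr/signup.php"
assert canonical_url("http://www.abc.fr/signup.php?id=002") == "http://www.abc.fr/signup.php"
```

The resulting URL is what you would emit in the page head as the canonical link; parameters you do want to keep (a language selector, say) simply stay out of the `drop` set.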