Is there a way to take notes on a crawled URL?
-
I'm trying to figure out the best way to keep track of the different things I've done to work on a page (for example, adding a longer description, changing H2 wording, or adding a canonical URL). Is there a way to take notes for crawled URLs? If not, what do you use to accomplish this?
-
Hey! Dave here from the Help Team,
There are a couple of different things you can do to mark items as done. One of the new features we have implemented in Site Crawl is the ability to mark items as "Fixed". This tool can be handy if you know you have fixed issues on your site but are still waiting for your next update. Another trick is to download your "all crawled pages" CSV and then create a "notes" column. It won't live in the Moz dashboard, but at least you would have a good record! Hopefully those options help you out!
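The CSV trick above can be scripted so your notes survive each fresh export. Here is a minimal sketch using only the standard library; it assumes the export has a `URL` column, and the file names and sample row are stand-ins for illustration, not Moz's actual export format:

```python
import csv

def add_notes_column(in_path, out_path, notes):
    # `notes` maps a crawled URL to the work you've done on it.
    with open(in_path, newline="", encoding="utf-8") as src:
        rows = list(csv.DictReader(src))
    fieldnames = (list(rows[0].keys()) if rows else ["URL"]) + ["Notes"]
    for row in rows:
        row["Notes"] = notes.get(row.get("URL", ""), "")
    with open(out_path, "w", newline="", encoding="utf-8") as dst:
        writer = csv.DictWriter(dst, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(rows)

# Stand-in export (the real export has many more columns):
with open("all_crawled_pages.csv", "w", newline="", encoding="utf-8") as f:
    f.write("URL,Status Code\nhttps://example.com/widgets,200\n")

add_notes_column(
    "all_crawled_pages.csv",
    "all_crawled_pages_notes.csv",
    {"https://example.com/widgets": "Rewrote H2s, added canonical"},
)
```

Re-running this against each new export keeps your notes keyed to the URL rather than to a row position that may shift between crawls.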
Related Questions
-
Why is my site not being crawled?
My error in the dashboard: **Moz was unable to crawl your site on Jul 23, 2020.** Our crawler was banned by a page on your site, either through your robots.txt, the X-Robots-Tag HTTP header, or the meta robots tag. Update these tags to allow your page and the rest of your site to be crawled. If this error is found on any page on your site, it prevents our crawler (and some search engines) from crawling the rest of your site. Typically errors like this should be investigated and fixed by the site webmaster. I think I need to edit robots.txt. How do I fix that?
Feature Requests | | alixxf0 -
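If the block really is in robots.txt, one fix is to explicitly allow Moz's crawler, which identifies itself as rogerbot. A minimal sketch; the /admin/ path is just a placeholder for anything you genuinely want to keep blocked, and you should also check for an X-Robots-Tag header or meta robots tag, per the error message:

```
# robots.txt — allow rogerbot (Moz's crawler) site-wide
User-agent: rogerbot
Disallow:

# Other crawlers: block only what you intend to block
User-agent: *
Disallow: /admin/
```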
Errors for URLS being too long and archive data is duplicate
I have hundreds of errors for the same three things. The URL is too long. Currently the categories are domain.com/product-category/name main product/accessory product/
Feature Requests | | PacificErgo
-Do I eliminate somehow the product category? Not sure how to fix this. 2) It has all of my category pages listed as archives showing duplicates. I don't know why, as they are not blog posts, they hold products on them. I don't have an archived version of this. How do I fix this? 3. It is saying my page speed is slow. I am very careful to optimize all my photos in PhotoShop. Plus I have a tool on the site to further compress. I just went with another host company that is supposed to be faster. Any ideas/ I would so appreciate your help and guidance. All my best to everyone, be safe and healthy.0 -
Any way to programmatically retrieve a campaign's Search Visibility score?
I would like to retrieve a campaign's Search Visibility score (preferably with history too) in an automated fashion. However, as far as I know, the API has no access to this. Is there something I'm missing? If not, is there a formula I can use to calculate this score?
Feature Requests | | MaddenMediaSEO1 -
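Search Visibility is generally understood to be a click-through-weighted score over tracked keyword rankings, so it can be approximated from exported rank data. A hedged sketch: the CTR weights below are illustrative placeholders, not Moz's published curve, so expect the result to differ from the dashboard number:

```python
# Hypothetical click-through weights by ranking position; Moz's real
# curve is not given in this thread, so treat these as placeholders.
CTR_WEIGHTS = {1: 0.30, 2: 0.15, 3: 0.10, 4: 0.07, 5: 0.05,
               6: 0.04, 7: 0.03, 8: 0.03, 9: 0.02, 10: 0.02}

def search_visibility(ranks):
    """Approximate a Search Visibility score from per-keyword ranks.

    `ranks` holds one rank per tracked keyword; None or a rank beyond
    page one contributes nothing to the score.
    """
    total = sum(CTR_WEIGHTS.get(r, 0.0) for r in ranks if r is not None)
    return 100.0 * total / len(ranks) if ranks else 0.0

print(round(search_visibility([1, 3, None, 12]), 1))
```

Running this over each week's rank export would give a comparable (if unofficial) historical series.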
What is the best way to display historical ranking data?
I have used Moz for years but have always struggled to find a nice graph that illustrates historical ranking data for all tracked keywords. Can someone help me find the best solution for this in Moz?
Feature Requests | | WebMarkets0 -
Is there any way to sort by relevancy first and then volume second? Right now I just export the results of Keyword Explorer and do it offline; it would be great if I could do it online.
I'm trying to sort the results of a keyword search in Keyword Explorer by relevancy first and volume second, but the minute I select volume, the relevancy ordering is completely lost. I know I can export them and manipulate them in Excel, but is there a feature that allows me to do this in Moz?
Feature Requests | | Anerudh0 -
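The offline step in Excel can also be done as a two-key sort in a few lines of Python. A sketch with hypothetical exported rows; the column names are assumptions, not Keyword Explorer's exact CSV headers:

```python
# Hypothetical rows exported from Keyword Explorer.
rows = [
    {"keyword": "red shoes", "relevancy": 92, "volume": 500},
    {"keyword": "buy shoes", "relevancy": 92, "volume": 900},
    {"keyword": "shoe shop", "relevancy": 80, "volume": 1200},
]
# Sort by relevancy first, then volume as the tiebreaker, both descending.
rows.sort(key=lambda r: (-r["relevancy"], -r["volume"]))
print([r["keyword"] for r in rows])
```

Because the key is a tuple, volume only matters when relevancy ties, which is exactly the two-level ordering the export-to-Excel workaround achieves.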
Crawl error: 804 HTTPS (SSL) error
Hi, I have a crawl error in my report: "804: HTTPS (SSL) error encountered when requesting page." I checked all pages and the database to fix wrong URLs (http -> https), but this one persists. Do you have any ideas how I can fix it? Thanks. Website: https://ilovemypopotin.fr/
Feature Requests | | Sitiodev0 -
Way to track the public's use of keywords over time in Moz?
I apologize for the basic question, but is there a way to track within Moz the public's use of my keywords over time? My traffic understandably goes down at certain times of the year: holidays, vacation season, back to school, etc. What I want to see is whether my share of the available traffic is staying steady, slipping, or even increasing. Sometimes I'll see a big spike in traffic that I suspect is related more to a general increase in searches for my keywords than anything else. Thanks for any help.
Feature Requests | | NCCompLawyer0 -
Crawl diagnostic errors due to query string
I'm seeing a large number of duplicate page titles, duplicate content, missing meta descriptions, etc. in my Crawl Diagnostics report due to URLs' query strings. These pages already have canonical tags, but I know canonical tags aren't considered in Moz's crawl diagnostic reports and therefore won't reduce the number of reported errors. Is there any way to configure Moz not to treat query-string variants as unique URLs? It's difficult to find a legitimate error among hundreds of these non-errors.
Feature Requests | | jmorehouse0
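Until such a filter exists, one workaround is to collapse query-string variants yourself in an exported report before triaging. A small sketch using Python's standard library; the URLs are made up:

```python
from urllib.parse import urlsplit, urlunsplit

def strip_query(url):
    # Collapse query-string (and fragment) variants onto their base URL.
    parts = urlsplit(url)
    return urlunsplit((parts.scheme, parts.netloc, parts.path, "", ""))

urls = [
    "https://example.com/shoes?sort=price",
    "https://example.com/shoes?color=red",
    "https://example.com/shoes",
]
unique = sorted({strip_query(u) for u in urls})
print(unique)  # one entry instead of three
```

Deduplicating the export on the stripped URL leaves only the genuinely distinct pages, so legitimate errors stand out.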