Samantha,
Excellent. I appreciate the consideration. Hopefully this is something that others will also find value from.
NOTE: my mock-up would only be one approach. There are other methods that might be more user friendly.
Job Title: COO
Company: Sticky Life
Favorite Thing about SEO: URL Structure
I now understand that the Ignore option in the Moz Site Crawl tool will permanently remove the item from ever showing up in our Issues again. We'd like to use the issues list as a to-do checklist, with the ultimate goal of having no issues found, and would like to "Temporarily Remove" an issue to see if it shows back up in future crawls. If we properly fix the issue, it shouldn't show back up.
However, based on the current Ignore function, if we ignore the issue it will never show back up, even if the issue is still a problem.
At the same time, an issue could be a known one that the end user never wants to see again. In that case it might be nice to keep the current "Permanently Ignore" option.
See the following Imgur link for a mockup of my idea for your review.
ViviCa1, Thank you. I think I'll now be requesting a new feature.
Quick question about the Ignore feature in the Moz Site Crawl.
We've made some changes to pages containing errors found by the Moz Site Crawl. These changes should have resolved the issues, but we're not sure about the "Ignore" feature and don't want to use it without first understanding what happens when we do.
It is true that we want all traffic going to the secure version of our site. We want all traffic redirected to the secure version, but we don't know why this is triggering a Redirect Chain when we never set this up as a redirect; it was configured through our DNS settings.
We're getting a higher Page Authority for https://stickylife.com/ than for our https://www.stickylife.com/ URL. I'm not sure why, but the domain shouldn't be chained at all. That's what I want to figure out.
Once we understand why this is happening to our domain we might be able to fix the issue for the other pages that are showing the same issue.
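For anyone else hitting the same flag: a "redirect chain" just means the crawler had to follow more than one Location header before reaching a 200. A minimal sketch of that bookkeeping (the hop data and the `chain_length` helper below are illustrative, not Moz's actual code; in practice you'd record the hops with something like `curl -IL`):

```python
def chain_length(hops):
    """Count consecutive 3xx hops before the final non-redirect response.

    `hops` is an ordered list of (url, status, location) tuples as a
    crawler would record them while following Location headers.
    """
    redirects = 0
    for url, status, location in hops:
        if 300 <= status < 400 and location:
            redirects += 1
        else:
            break
    return redirects

# Hypothetical hop data matching the pattern described above:
hops = [
    ("http://stickylife.com/", 301, "https://stickylife.com/"),
    ("https://stickylife.com/", 301, "https://www.stickylife.com/"),
    ("https://www.stickylife.com/", 200, None),
]
print(chain_length(hops))  # 2 hops, so the crawler flags a chain
```

Anything above one hop is what the Site Crawl report calls a chain, since each extra hop costs an extra round trip and can dilute link equity.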
Moz Pro is highlighting some redirect chain issues on our domain that I do not recall ever setting up in our redirect list. In our Moz Pro campaign, the Site Crawl has flagged 36 Redirect Chain issues. I understand how redirect chain errors can happen, but I do not recall ever manually redirecting our domain, yet http://stickylife.com, https://stickylife.com, and https://www.stickylife.com are all associated in one of our redirect chain errors.
When looking at our redirect files I do not see any of these domain redirects and wonder how this has happened and how to fix it.
It appears as though our HTTP and HTTPS versions are causing some redirection. I wonder if this is coming from our DNS settings?
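One note on the DNS theory: DNS only resolves names to addresses and cannot issue an HTTP 301/302, so the redirect has to be coming from the web server, a CDN, or a hosting-panel setting. If that's the case, the usual fix is to send every variant straight to the final URL in a single hop. A hedged sketch in Apache .htaccess terms, assuming mod_rewrite is available and that https://www.stickylife.com is the canonical host (swap the hostname if the bare domain is canonical; this is an illustration, not our actual config):

```apache
# Send every non-canonical variant straight to the canonical host
# in one 301, instead of chaining http -> https -> www.
RewriteEngine On
RewriteCond %{HTTPS} off [OR]
RewriteCond %{HTTP_HOST} !^www\.stickylife\.com$ [NC]
RewriteRule ^ https://www.stickylife.com%{REQUEST_URI} [R=301,L]
```

With a rule like this, http://stickylife.com/page goes to https://www.stickylife.com/page in one redirect rather than two.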
Dave,
Awesome. Thank you. I look forward to communicating through the support ticket.
On June 8th we ran a Moz crawl on our site and found 144 pages flagged with duplicate content.
On June 13th we ran another Moz crawl and found 137 pages flagged with duplicate content. A final crawl on June 22nd found 161 pages of duplicate content.
After comparing the three scans I see that, without any changes on our end, pages that were not flagged as duplicate content are now being flagged, while pages that were originally flagged no longer show up as duplicates. I could understand this if we had changed those pages, but no changes were made.
For example: On the 8th this page was flagged as duplicate content - https://www.stickylife.com/star-magnet
On the 13th and 22nd it was not flagged as duplicate content, but no changes were made to that page. For reference, it was flagged as duplicate content alongside this page: https://www.stickylife.com/baseball-glove-magnet. That page was also not changed or altered between these dates.
In addition, when Moz crawls our site through our campaign every Friday, the results do not match what we see from a manual crawl. Moz's weekly crawl reveals only 14 pages with duplicate content, as opposed to the numbers above.
Why such inconsistencies in the Moz Scans?
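One possible explanation for the flip-flopping (an illustration of the general technique, not Moz's published algorithm): duplicate detection typically scores pairwise page similarity and flags pairs above a cutoff. Pages whose score sits near that cutoff can flip in or out between crawls when even a small dynamic block changes, such as related products or review counts. A toy sketch using word shingles and Jaccard similarity, with a hypothetical 90% threshold:

```python
def shingles(text, n=3):
    """Word n-grams used to compare page content."""
    words = text.split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b):
    """Set similarity between two shingle sets, in [0, 1]."""
    return len(a & b) / len(a | b)

THRESHOLD = 0.90  # assumed cutoff; Moz's exact method is not public

template = " ".join(f"word{i}" for i in range(200))  # shared page template
star = template + " star magnet"                     # tiny unique part
glove = template + " glove magnet"                   # tiny unique part
# Same page after a dynamic block (e.g. related products) appears:
glove2 = glove + " " + " ".join(f"uniq{i}" for i in range(25))

before = jaccard(shingles(star), shingles(glove))
after = jaccard(shingles(star), shingles(glove2))
print(round(before, 2), before > THRESHOLD)  # 0.98 True  -> flagged
print(round(after, 2), after > THRESHOLD)    # 0.87 False -> not flagged
```

Nothing about either page "changed" in any meaningful way, yet the pair crosses the threshold in one crawl and not the next, which would produce exactly the kind of churn in the counts described above.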
I suppose you're right. I have never experienced a drop in DA before, so I was a little shocked. It is something I'll have to watch over time.
Thank you for sharing. This helps me understand a little more about Moz ranking and what to expect when looking at Domain Authority.
Co-founded stickylife.com in 2009 and have been working on it ever since.