Hi there,
Just a newbie question...
I found some duplicate URLs in the SEOmoz Crawl Diagnostics report that should not be there.
They are supposed to be blocked by the site's robots.txt file.
Here is an example URL (Joomla + VirtueMart structure):
http://www.domain.com/component/users/?view=registration
and here is the corresponding rule in the robots.txt file:
User-agent: *
Disallow: /components/
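In case it helps, a rule like this can be tested locally with Python's built-in urllib.robotparser. This is just a minimal sketch, with the robots.txt content pasted inline and the placeholder domain from above; normally you would point set_url() at the live http://www.domain.com/robots.txt and call read() instead.

```python
from urllib import robotparser

# Feed the robots.txt rules from above directly into the parser.
rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /components/",
])

# can_fetch() returns True when a crawler is ALLOWED to fetch the URL.
url = "http://www.domain.com/component/users/?view=registration"
print(rp.can_fetch("*", url))
# -> True: the URL is not blocked, because its path starts with
#    /component/ (singular) while the rule disallows /components/ (plural).
```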
My questions are:
Will this kind of duplicate URL error be removed from the error list automatically in the future?
Do I need to keep track of which errors should not really be in the error list?
What is the best way to handle this kind of error?
Thanks and best regards
Franky