Should I block .ashx files from being indexed?
-
I got a crawl issue reporting that 82% of my site's pages have missing title tags.
All of these pages are .ashx files (4,400 pages).
Would it be better to remove all of these files from Google?
Thanks!
As simple as that.
-
Are the pages useful to the user? Do you expect users to actively use these pages on your site? Do you want users to be able to find these pages when they search for their issues through Google?
If you've answered 'yes' to any of these questions, I wouldn't suggest removing them from Google. Instead, take your time and set a schedule to optimize each of these pages.
If these pages are not valuable to the user, don't need to be indexed by Google, are locked behind a membership gate, are duplicates, or are thin content, then those would all be good reasons to noindex them from all search engines.
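If you do go the noindex route, here is a minimal sketch of the two usual mechanisms (assuming the .ashx handlers return ordinary HTML pages; the header variant also works for non-HTML responses). A robots meta tag in the <head> of each page you want excluded:

    <meta name="robots" content="noindex">

or the equivalent HTTP response header:

    X-Robots-Tag: noindex

Either way, leave the URLs crawlable rather than blocking them in robots.txt, otherwise search engines never see the noindex directive.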
Related Questions
-
Backlink Indexing Problem? Suggest any paid or free tool!
My forum backlinks and social share backlinks are not being indexed. Please suggest any free or paid tool that indexes such backlinks; otherwise my ranking will drop. Please help.
Moz Pro | | AsifSeotools7770 -
WooCommerce filter URLs showing in crawl results, but not indexed?
I'm getting hundreds of Duplicate Content warnings for a WooCommerce store I have. The URLs are … etc. These don't seem to be indexed in Google, and the canonical is the shop base URL. They seem to be simply URLs generated by WooCommerce filters. Is this simply a false alarm from the Moz crawl?
Moz Pro | | JustinMurray
Block Moz (or any other robot) from crawling pages with specific URLs
Hello! Moz reports that my site has around 380 duplicate page content issues. Most of them come from dynamically generated URLs that have specific parameters. I have sorted this out for Google in Webmaster Tools (the new Google Search Console) by blocking the pages with these parameters. However, Moz is still reporting the same number of duplicate content pages and, to stop it, I know I must use robots.txt. The trick is that I don't want to block every page, just the pages with specific parameters. I want to do this because among these 380 pages there are some other pages with no parameters (or different parameters) that I need to take care of. Basically, I need to clean this list to be able to use the feature properly in the future. I have read through the Moz forums and found a few topics related to this, but there is no clear answer on how to block only pages with specific URLs. Therefore, I have done my research and come up with these lines for robots.txt:

    User-agent: dotbot
    Disallow: /*numberOfStars=0
    User-agent: rogerbot
    Disallow: /*numberOfStars=0

My questions: 1. Are the above lines correct, and would they block Moz (dotbot and rogerbot) from crawling only pages that have the numberOfStars=0 parameter in their URLs, leaving other pages intact? 2. Do I need to have an empty line between the two groups (I mean between "Disallow: /*numberOfStars=0" and "User-agent: rogerbot"), or does it even matter? I think this would help many people, as there is no clear answer on how to block crawling of only pages with specific URLs. Moreover, this should be valid for any robot out there. Thank you for your help!
Moz Pro | | Blacktie
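For reference, a minimal robots.txt sketch of the layout the question describes (assuming dotbot and rogerbot honor the * wildcard, as the major crawlers do, and that numberOfStars=0 is the only pattern you want kept out of the crawl; the blank line is simply the conventional separator between records):

    User-agent: dotbot
    Disallow: /*numberOfStars=0

    User-agent: rogerbot
    Disallow: /*numberOfStars=0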
Does SEOmoz realize that duplicated URLs are blocked in robots.txt?
Hi there, just a newbie question... I found some duplicated URLs in the "SEOmoz Crawl diagnostic reports" that should not be there; they are intended to be blocked by the site's robots.txt file. Here is an example URL (Joomla + VirtueMart structure): http://www.domain.com/component/users/?view=registration and here is the blocking content in the robots.txt file:

    User-agent: *
    Disallow: /components/

My question is: will this kind of duplicated URL error be removed from the error list automatically in the future? Should I keep track of which errors should not really be in the error list? What is the best way to handle this kind of error? Thanks and best regards, Franky
Moz Pro | | Viada
How to remove /index.html that causes duplicated content
Hi, how do I remove the /index.html URLs that cause duplicated content?
My website navigation links do not show /index.html. However, when I run the SEOmoz crawl, it reports duplicate content errors. Can anyone tell me how to do it?
Moz Pro | | whitelies
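One common fix, sketched here on the assumption of an Apache server with mod_rewrite (the question doesn't say what the site runs on), is to 301-redirect any explicitly requested index.html back to its folder URL via .htaccess:

    RewriteEngine On
    # Only act when the client actually requested .../index.html (avoids loops with DirectoryIndex)
    RewriteCond %{THE_REQUEST} \s/+(.*/)?index\.html[\s?] [NC]
    RewriteRule ^ /%1 [R=301,L]

If redirects aren't an option, a rel="canonical" tag pointing at the folder URL is a softer alternative.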
Getting your site fully indexed by SEOmoz
Hi guys! I just started using the SEOmoz software and wondered how it can be that my site has over 10,000 pages but the Pro dashboard has only indexed about 1,500 of them. I've been waiting a few weeks now, but the number has been stable ever since. Is there a way to get the whole site indexed by the SEOmoz software? Thanks for your answers!
Moz Pro | | ssiebn70 -
Why does the crawl report say I should have meta description and title tags in my xml files?
I just had my first crawl report today, which has been very useful in finding missing and duplicated title tags and meta descriptions, but it has also flagged my XML files as missing these. Surely non-HTML documents shouldn't have them (or need them), so why are they showing up in the report?
Moz Pro | | PandyLegend0 -
Why is blocking the SEOmoz crawler considered a red "error?"
Why is blocking the SEOmoz crawler considered a red "error?" Please see attached image... Y3Vay.png
Moz Pro | | vkernel0