Should I block .ashx files from being indexed?
-
I got a crawl issue reporting that 82% of my site's pages have missing title tags.
All of these pages are .ashx files (4,400 pages).
Would it be better to remove all of these files from Google? -
Thanks!
As simple as that
-
Are the pages useful to the user? Do you expect users to actively use these pages on your site? Do you want users to be able to find these pages when they search for their issues through Google?
If you've answered 'yes' to any of these questions, I wouldn't suggest removing them from Google. Instead, take your time and set a schedule to optimize each of these pages.
If these pages are not valuable to the user, unnecessary for Google to index, locked behind a membership gate, duplicates, or thin content, then those are all good reasons to noindex them for all search engines.
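Since .ashx files are ASP.NET handlers that typically serve non-HTML responses, a meta robots tag often isn't an option; the `X-Robots-Tag` HTTP header does the same job. Below is a hypothetical web.config sketch, assuming the site runs on IIS with the URL Rewrite module installed (the rule name and the exact server-variable spelling are assumptions to verify against your setup):

```xml
<!-- Sketch: send "X-Robots-Tag: noindex" on every .ashx response. -->
<configuration>
  <system.webServer>
    <rewrite>
      <outboundRules>
        <rule name="NoindexAshx">
          <!-- The RESPONSE_ prefix lets the rule create a response header;
               underscores in the variable name map to dashes in the header. -->
          <match serverVariable="RESPONSE_X_Robots_Tag" pattern=".*" />
          <conditions>
            <add input="{REQUEST_URI}" pattern="\.ashx" />
          </conditions>
          <action type="Rewrite" value="noindex" />
        </rule>
      </outboundRules>
    </rewrite>
  </system.webServer>
</configuration>
```

Note that the pages must stay crawlable (not blocked in robots.txt) for search engines to see the header and drop them from the index.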
Related Questions
-
Duplicate content issues with file download links (diff. versions of a downloadable application)
I'm a little unsure how canonicalisation works in this case. 🙂 We push very regular updates to an application that is available as a download on our site. With every update, the version number of the file being downloaded changes, and along with it the URL parameter included when people click the 'Download' button on our site, e.g. mysite.com/download/download.php?f=myapp.1.0.1.exe, mysite.com/download/download.php?f=myapp.1.0.2.exe, mysite.com/download/download.php?f=myapp.1.0.3.exe, etc. In the Moz Site Crawl report all of these links are registering as duplicate content. There's no content per se on these pages; all they do is trigger a download of the specified file from our servers. Two questions: Are these links actually hurting our ranking/authority? Would adding a canonical tag to the head of mysite.com/download/download.php solve the crawl issues, and would it catch all of the download.php URLs? Thanks! Jon
(Not super up on PHP, btw, so if I'm saying something completely bogus here... be kind 😉)
Moz Pro | jonmc
-
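A `<head>` canonical tag wouldn't help here, because these URLs serve a file rather than an HTML page; the equivalent mechanism is a `Link: rel="canonical"` HTTP response header, which Google does honor. A hypothetical .htaccess sketch (assumes Apache with mod_headers enabled; the target URL is just the thread's example):

```apache
# Sketch: send one canonical Link header for every download.php request,
# regardless of the ?f= version parameter.
<Files "download.php">
  Header set Link '<https://mysite.com/download/download.php>; rel="canonical"'
</Files>
```

This consolidates all the versioned parameter URLs onto the single parameter-free URL.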
Is there a report I can run to get a list of all pages indexed by Google for my website?
I want to get a CSV file of all the pages indexed by Google and other search engines so I can create an .htaccess file of 301 redirects.
Moz Pro | etraction
-
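Once you have such a CSV, turning it into .htaccess rules is mechanical. A minimal sketch, assuming a hypothetical two-column export (`old_path,new_url`) that you would adjust to match the real file:

```python
import csv
import io

def csv_to_htaccess(csv_text: str) -> str:
    """Turn a two-column CSV (old_path,new_url) into Redirect 301 lines.

    The column names are an assumption; rename them to match your export.
    """
    reader = csv.DictReader(io.StringIO(csv_text))
    return "\n".join(
        f"Redirect 301 {row['old_path']} {row['new_url']}" for row in reader
    )

sample = "old_path,new_url\n/old-page,https://example.com/new-page\n"
print(csv_to_htaccess(sample))
# Redirect 301 /old-page https://example.com/new-page
```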
Duplicate page content on / and index.php
Hi, I am new to SEOmoz, and in the crawl diagnostics for one of my clients it came back with duplicate content on the homepage www.myclient.co.uk and on www.myclient.co.uk/index.php, which is obviously the same page. I understand that the key is to do a 301 redirect from index.php to /, but how will I know that this will not just create a never-ending redirect loop on the server? From your experience, what is the best way to tackle this crawl error? Also, is there a specific question I need to ask about the server?
Moz Pro | search_shop
-
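The loop worry is legitimate when the server internally maps `/` back to `index.php`: a naive redirect rule then fires on every request. The usual fix is to match `THE_REQUEST` (the literal request line the browser sent), so the rule ignores internal rewrites. A hypothetical .htaccess sketch, assuming Apache with mod_rewrite:

```apache
# Sketch: 301 index.php to the root without looping.
# THE_REQUEST only reflects what the client actually asked for, so the
# rule never fires when Apache internally serves / via index.php.
RewriteEngine On
RewriteCond %{THE_REQUEST} ^[A-Z]+\s/index\.php[\s?]
RewriteRule ^index\.php$ / [R=301,L]
```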
Getting your site totally indexed by SEOMOZ
Hi guys! I just started using the SEOmoz software and wondered how it can be that my site has over 10,000 pages but the Pro dashboard has only indexed about 1,500 of them. I've been waiting a few weeks now, but the number has been stable ever since. Is there a way to get the whole site indexed by the SEOmoz software? Thanks for your answers!
Moz Pro | ssiebn7
-
What software can I use on my Mac to open and read a SEOMoz CSV exported file?
I do not want to buy Excel or Pages just to read the CSV from SEOmoz. So I bought an app on the App Store... and this app is unable to read the CSV from SEOmoz. Since I already wasted $2, I'd rather avoid wasting more (and spare others the same!). What software is recommended for opening these CSV files? Also, I tried Google Docs, but I bumped into their 400K-cell limit 😞
Moz Pro | jgenesto
-
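A free alternative worth noting: Python's standard-library `csv` module reads exports of any size with no cell limit. A minimal sketch (the sample data here is made up, not a real Moz export):

```python
import csv
import io

def summarize_csv(text: str) -> tuple[list[str], int]:
    """Return the header row and the number of data rows in CSV text."""
    reader = csv.reader(io.StringIO(text))
    header = next(reader)
    return header, sum(1 for _ in reader)

sample = "URL,Title\nhttps://example.com/,Home\n"
print(summarize_csv(sample))
# (['URL', 'Title'], 1)
```

For a real file, replace the `io.StringIO` wrapper with `open("export.csv", newline="", encoding="utf-8")`.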
Too many pages indexed in SEOMoz
I am running a campaign for a client that has 86 pages indexed in Google, but SEOmoz is up to almost 10K pages. I am really confused. Any ideas?
Moz Pro | LaurieK13
-
SEOmoz Bot indexing JSON as content
Hello, we have a bunch of pages that contain local JSON we use to display a slideshow. This JSON has a bunch of links in it, and for some reason these JSON links are being indexed and recognized by the SEOmoz bot, showing up as legitimate links for the page. One example page where this is happening is: http://www.trendhunter.com/trends/a2591-simplifies-product-logos. Searching for the string '<a' yields 1,100+ results (all of which are recognized as links for that page in SEOmoz); however, ~980 of these are JSON code and not actual links on the page. This leads to a lot of invalid links on our site and a super-inflated on-page link count for the page. Is this a bug in the SEOmoz bot? And if not, does Google work the same way?
Moz Pro | trendhunter-159837
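One common remedy for this class of problem (an assumption on my part, not something the thread confirms works for this crawler) is to embed the slideshow data in a `<script type="application/json">` tag: the browser and DOM-parsing crawlers treat its contents as inert data rather than markup, so the URLs inside are never seen as anchor elements. A hypothetical sketch with placeholder URLs:

```html
<!-- JSON inside a script tag is data, not markup, so its URLs are not
     parsed as <a> links by DOM-based crawlers. -->
<script type="application/json" id="slideshow-data">
  {"slides": [
    {"href": "/trends/example-slide-1", "img": "/img/slide1.jpg"},
    {"href": "/trends/example-slide-2", "img": "/img/slide2.jpg"}
  ]}
</script>
<script>
  // Read the data back out at runtime to build the slideshow.
  var data = JSON.parse(
    document.getElementById("slideshow-data").textContent
  );
</script>
```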