How to Hide Directories in Search?
-
I noticed bad 404 error links in Google Webmaster Tools pointing to directories that don't have an actual page but do hold files.
For example, there are links pointing to our PDF folder, which holds all of our PDF documents. If I type in example.com/pdf/, it brings up an unformatted webpage that lists all of our PDF links.
How do I prevent this from happening? Right now I am blocking these directories in my robots.txt file, but if I type the URLs in, the pages still appear.
Or should I not worry about this?
-
Yes, a visit to example.com/dir should now return a 404 error (if you haven't done any redirecting/canonicalizing). This will increase your 404 count in Webmaster Tools, but it's far preferable to the alternative. If you're not redirecting, the robots.txt will eventually work and hopefully the links will just fall out of WMT.
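One thing worth keeping in mind about the robots.txt approach: it only asks well-behaved crawlers not to fetch the URL; it does nothing for visitors who type the URL in, which is why the listing still appears in a browser. A quick sketch using Python's standard urllib.robotparser (the domain and rules are placeholders):

```python
from urllib import robotparser

# Hypothetical robots.txt rules, assuming a /pdf/ directory you want
# crawlers to skip; example.com is a placeholder domain.
rules = """\
User-agent: *
Disallow: /pdf/
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# Crawlers that honor robots.txt will skip the blocked directory...
print(rp.can_fetch("*", "http://example.com/pdf/"))          # False
# ...but other URLs stay crawlable, and robots.txt never stops
# a human visitor from loading the listing page directly.
print(rp.can_fetch("*", "http://example.com/contact.html"))  # True
```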
-
My hosting company turned off directory browsing and now everything is how it should be. So to my understanding, if the server sees a folder that does not have an index file, it should not be viewable and should return a Forbidden error. This shouldn't affect us from an SEO standpoint, should it? My hosting company said they disabled browsing for all directories on our site, and everything still works except for the now-forbidden directory listings.
-
Basically it shouldn't really have an effect; those unformatted file listings are literally the web server automatically saying "here are the files that are in this folder" — there are no meta tags, description, on-page elements, etc.
If you have these pages and they're ranking well, you generally don't want them to be. The automatic file browsing pages don't have your name, your company, etc. in them, and they're generally pretty ugly. They also theoretically could be 'stealing' juice from your 'real' pages, if your internal structure isn't flowing relevance properly.
Basically what I'm saying is that if these pages are having some kind of SEO effect, you probably don't want them to be since they're so basic.
Also I can't overstate the security concerns that directory browsing might be introducing. If someone can directory browse to where your code lives (.php, .aspx.vb, whatever) they may be able to read it. Code sometimes has important things like logins, passwords, merchant account ids, etc. in it that you definitely don't want people reading.
-
Agreed with Valerie that step 1 is to turn off those directory listing pages - that can be a security issue and you don't necessarily want people to see/access the whole list. Also, make doubly sure you don't have any internal links to that directory (Google crawled it somehow).
Generally, robots.txt should prevent crawling, but it's not foolproof, and it's pretty bad at removing pages once they're indexed. If you can block the page from browsing and return a 404 for the root page, that should be fine. The other option would be to have the page removed in Google Webmaster Tools. You could request removal for the entire folder, but I'm guessing that you may want the actual PDFs indexed.
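If you do want the PDFs themselves to stay indexable while blocking only the bare listing URL, note that Google supports `Allow` and the `$` end-of-URL anchor as robots.txt extensions (they're not part of the original robots.txt standard, so other crawlers may ignore them). A sketch, assuming a /pdf/ folder:

```text
User-agent: Googlebot
# "$" anchors the end of the URL, so this blocks only the bare
# directory-listing URL (example.com/pdf/)...
Disallow: /pdf/$
# ...while the PDF files inside the folder remain crawlable.
Allow: /pdf/
```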
-
Will turning off directory browsing affect search for all directories?
-
I really don't want to 301 redirect them, as they are just holding files. This is happening with my includes folder too, which holds our header, footer, navigation, etc. I can check with our hosting company to find out.
-
I'd create an index.html for the directory and then redirect it somewhere. That way, you're capturing the inbound links and recovering some of the inbound link juice.
Otherwise, you can also check out this post for more info on other solutions and modifying your htaccess file to prevent the directory view - http://perishablepress.com/better-default-directory-views-with-htaccess/
-
Blocking it in robots.txt will work to hide it from search engines.
If you want to hide it from users who type in the URL directly, you can simply drop a blank "index.html" into the /pdf folder.
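The blank-index trick works because the server serves the index file instead of generating a listing. A minimal sketch, assuming your document root is ./site and the PDFs live in ./site/pdf (both paths are placeholders):

```python
from pathlib import Path

# Placeholder paths: adjust to your actual document root / PDF folder.
pdf_dir = Path("site/pdf")
pdf_dir.mkdir(parents=True, exist_ok=True)

# Create an empty index.html; the server will serve this blank page
# for /pdf/ instead of auto-generating a file listing.
(pdf_dir / "index.html").touch()

print((pdf_dir / "index.html").exists())  # True
```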
-
I would suggest 301'ing them to their /index.htm or /pdf.htm equivalents. If you don't know, a 301 is a signal to a web browser (or search crawler) saying "this page has permanently moved, please go to (otherpage.htm) instead".
Here's a good SEOMoz article explaining it a bit more:
http://www.seomoz.org/learn-seo/redirection
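On Apache, that kind of 301 can be set up in .htaccess with mod_alias. A sketch, assuming /pdf/ should land on a hypothetical /pdf.htm page (both are placeholders for your actual URLs):

```apache
# Permanently redirect only the bare directory URL to a real landing
# page; the regex anchor means the PDFs inside /pdf/ are untouched.
# /pdf.htm is a placeholder — point this at whichever page should
# inherit the inbound links.
RedirectMatch 301 ^/pdf/$ http://www.example.com/pdf.htm
```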
What might be more of a concern is that it sounds like your web server has directory browsing enabled. This could be a security issue (depending on your web server setup). Generally you don't want to expose directories if you don't have to, because it gives a potential attacker insight into your system setup. Here's an example of how to do it in Apache:
www.camelrichard.org/topics/Apache/Turn_OffDirectoryBrowsing
And IIS:
technet.microsoft.com/en-us/library/cc731109(v=ws.10).aspx
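For reference, the relevant switches behind those links are short (exact details vary by server version, so treat these as sketches). On Apache, in .htaccess or the vhost config:

```apache
# Apache: disable auto-generated directory listings; a folder with
# no index file will return 403 Forbidden instead of a file list.
Options -Indexes
```

And on IIS 7+, in web.config:

```xml
<!-- IIS: the equivalent switch in web.config -->
<configuration>
  <system.webServer>
    <directoryBrowse enabled="false" />
  </system.webServer>
</configuration>
```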
If you like I can confirm if you have open directories if you give me the link, either here or through private message.