Search Engine blocked by robots.txt
-
I do that because I am using Joomla. Is that bad? Thanks
-
Yeah, the structure of the site can get confusing.
-
Oh yes, this is good for query strings: the crawler will not crawl non-SEF URLs.
So you are good.
-
Yes, I have that.
Then put this to block all query strings:

Disallow: /*?
If I don't put that, the crawler indexes every one of my web pages twice. Example: right now I have 400 pages indexed; if I take that rule off, it will index around 800.
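A quick sanity check of what that rule catches: Google's pattern Disallow: /*? matches any URL that carries a query string. Here is a minimal Python sketch of that matching logic (the URLs are made-up Joomla-style examples; note that Python's built-in robots.txt parser does not understand Google's * wildcard, so this checks the query string directly):

from urllib.parse import urlsplit

def blocked_by_query_rule(url: str) -> bool:
    # Google-style "Disallow: /*?" means: any path followed by a "?",
    # i.e. effectively any URL with a query string.
    return urlsplit(url).query != ""

# Hypothetical Joomla URLs for illustration:
print(blocked_by_query_rule("https://example.com/my-article"))  # False - SEF URL, stays crawlable
print(blocked_by_query_rule("https://example.com/index.php?option=com_content&id=42"))  # True - blocked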
-
This is a sample robots.txt from a Joomla site that I have:
User-agent: *
Disallow: /administrator/
Disallow: /cache/
Disallow: /includes/
Disallow: /installation/
Disallow: /language/
Disallow: /libraries/
Disallow: /media/
Disallow: /plugins/
Disallow: /templates/
Disallow: /tmp/
Disallow: /xmlrpc/
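If you want to verify what rules like these block, Python's standard library ships a robots.txt parser; the sketch below is illustrative (the domain and URLs are placeholders, and the stdlib parser handles plain path prefixes like these, though not Google's * wildcard extension):

from urllib.robotparser import RobotFileParser

rules = [
    "User-agent: *",
    "Disallow: /administrator/",
    "Disallow: /cache/",
    "Disallow: /templates/",
]

rp = RobotFileParser()
rp.parse(rules)

# Hypothetical URLs for illustration:
print(rp.can_fetch("*", "https://example.com/administrator/index.php"))  # False - blocked
print(rp.can_fetch("*", "https://example.com/some-article"))             # True - crawlable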
-
Just put this in robots.txt to block all query strings:

Disallow: /*?

It tells crawlers not to index pages with a query string, so it doesn't generate duplicated pages. Is that bad too?
Thanks
Regards
Gabo
-
If you need your website indexed and ranking, then yes, this is bad for your website.
It means you don't want any search engine to index your website; hence, people won't find you in the search engines.
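To make the difference concrete: a site-wide block is a single Disallow: / rule and shuts every compliant crawler out of every page, while targeted rules like the Joomla sample above only fence off system folders. A small sketch with Python's stdlib parser (example.com is a placeholder):

from urllib.robotparser import RobotFileParser

full_block = ["User-agent: *", "Disallow: /"]              # blocks the whole site
targeted = ["User-agent: *", "Disallow: /administrator/"]  # blocks one folder

for name, rules in [("full block", full_block), ("targeted", targeted)]:
    rp = RobotFileParser()
    rp.parse(rules)
    print(name, rp.can_fetch("*", "https://example.com/my-article"))

# full block -> False: nothing gets crawled, so nothing gets indexed
# targeted   -> True:  content pages stay crawlable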
Related Questions
-
Robots.txt allows wp-admin/admin-ajax.php
Hello, Mozzers!
Technical SEO | AndyKubrin
I noticed something peculiar in the robots.txt used by one of my clients:

Allow: /wp-admin/admin-ajax.php

What would be the purpose of allowing a search engine to crawl this file?
Is it OK? Should I do something about it?
Everything else on /wp-admin/ is disallowed.
Thanks in advance for your help.
-AK
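For anyone curious how that Allow/Disallow pair plays out, here is a rough sketch with Python's stdlib parser. One caveat, stated as an assumption: the stdlib parser applies rules in file order (first match wins), while Google uses longest-path precedence, so the Allow line is listed first here to make both readings agree:

from urllib.robotparser import RobotFileParser

rules = [
    "User-agent: *",
    "Allow: /wp-admin/admin-ajax.php",  # listed first: the stdlib parser is first-match
    "Disallow: /wp-admin/",
]

rp = RobotFileParser()
rp.parse(rules)

print(rp.can_fetch("*", "https://example.com/wp-admin/admin-ajax.php"))  # True - explicitly allowed
print(rp.can_fetch("*", "https://example.com/wp-admin/options.php"))     # False - still blocked

The usual rationale for the Allow line is that WordPress themes and plugins call admin-ajax.php from front-end pages, so keeping it crawlable lets Googlebot render those pages fully.
-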
How can I make it so that robots.txt is not ignored due to a URL redirect?
Technical SEO | rodelmo4
Recently a site moved from blog.site.com to site.com/blog with an instruction like this one:

/etc/httpd/conf.d/site_com.conf:94: ProxyPass /blog http://blog.site.com
/etc/httpd/conf.d/site_com.conf:95: ProxyPassReverse /blog http://blog.site.com

It's a WordPress.org blog that was set up as a subdomain and is now being redirected to look like a directory. That said, the robots.txt file seems to be ignored by Googlebot. There is a Disallow: /tag/ rule in that file to avoid "duplicate content" on the site. I have tried this before with other WordPress subdomains and it works like a charm, except for this time, in which the blog is rendered as a subdirectory. Any ideas why? Thanks!
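One likely piece of the puzzle, offered as an assumption rather than a diagnosis: robots.txt is scoped per host, so once the blog URLs live under site.com/blog, Googlebot consults site.com/robots.txt for them, and the old rule would also need the /blog prefix to match. A sketch illustrating the path scoping (domains and paths are placeholders):

from urllib.robotparser import RobotFileParser

# Rule as it lived in the old subdomain's robots.txt:
old_rules = RobotFileParser()
old_rules.parse(["User-agent: *", "Disallow: /tag/"])

# Rule as it would need to read in site.com/robots.txt after the move:
new_rules = RobotFileParser()
new_rules.parse(["User-agent: *", "Disallow: /blog/tag/"])

url = "https://site.com/blog/tag/foo/"
print(old_rules.can_fetch("*", url))  # True  - "/tag/" no longer matches the new path
print(new_rules.can_fetch("*", url))  # False - the prefixed rule matches
-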
How to block text on a page from being indexed?
I would like to block the spider from indexing a block of text inside a page; however, I do not want to block the whole page with, for example, a noindex tag. I have already tried with a tag like this: chocolate pudding chocolate pudding However, this is not working for my case, a travel-related website. Thanks in advance for your support. Best regards, Gianluca
Technical SEO | CharmingGuy
-
Empty Meta Robots Directive - Harmful?
Hi, We had a coding update, and a side effect of that was that our meta robots directive was emptied; in other words, it now reads as an empty directive on all of the site. I've since noticed that Google's cache date on all of the pages (at least, the ones I tested) is no later than 17 December '12, which is the Monday after the directive was emptied en masse.

So, A: does anyone have solid evidence of an empty directive causing problems? Past experience, a Matt Cutts or Fishkin quote, etc.

And then B: it seems fairly well correlated, but does my entire site's homogeneous cached date point to this tag removal? Or is it fairly normal to have a uniform cache date across a large site (we're a large ecommerce site)? Our site: http://www.zando.co.za/ I'm having the directive reinstated as soon as Dev permits.

And then, for extra credit, is there a way, with Google's API or perhaps some other tool, to run an arbitrary list of URLs and retrieve cached dates? I'd want to do this for diagnosis purposes, and preferably in a way that's OK with Google. I'd rather avoid cURLing for the cached URLs and scraping out the dates with Bash, or anything of that kind. Cheers,
Technical SEO | RocketZando
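On the diagnosis side, auditing the tag itself is easy to script. Here is a sketch that fetches a page and reports its meta robots directive, if any (the usual assumption, worth stating, is that a missing or empty directive falls back to the defaults of index, follow):

from html.parser import HTMLParser
import urllib.request

class RobotsMetaFinder(HTMLParser):
    # Collects the content of any <meta name="robots"> tag.
    def __init__(self):
        super().__init__()
        self.directives = None
    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and (a.get("name") or "").lower() == "robots":
            self.directives = a.get("content", "")

def meta_robots(url: str) -> str:
    html = urllib.request.urlopen(url).read().decode("utf-8", "replace")
    finder = RobotsMetaFinder()
    finder.feed(html)
    if finder.directives is None:
        return "no robots meta tag (defaults apply)"
    return finder.directives or "EMPTY directive"

# e.g. print(meta_robots("http://www.zando.co.za/"))
-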
Matching C Block
Hi guys, We have two sites that are in the same niche and competing for the same keywords. The sites are on separate domains; one is a UK domain and one is a .com. They have their own IPs; however, both have the same C block. We have noticed that when the rankings for one site improve, the other's drop. Could the C block be causing this?
Technical SEO | EwanFisher
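For anyone unfamiliar with the term: an IPv4 address's "C block" is, informally, its first three octets, i.e. the /24 network. A short sketch computing it (the addresses are made-up documentation IPs):

import ipaddress

def c_block(ip: str) -> str:
    # Return the /24 network (the "C block") an IPv4 address belongs to.
    return str(ipaddress.ip_network(f"{ip}/24", strict=False))

print(c_block("203.0.113.10"))  # 203.0.113.0/24
print(c_block("203.0.113.57"))  # 203.0.113.0/24 - same C block as above
print(c_block("198.51.100.7"))  # 198.51.100.0/24 - a different C block
-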
Google Search Parameters
A couple of quick questions. Is using the parameter pws=0 still useful for turning off personalization? Is there a way to set my location as a URL parameter as well? For instance, I want to set my location to the United States; can this be done with a URL param the same way as pws=0?
Technical SEO | nbyloff
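Mechanically, both switches are just query-string parameters appended to the results URL. A sketch building such a URL with the standard library (pws=0 is the depersonalization parameter from the question; gl is Google's country parameter, and whether Google still honors either is an assumption that can change at any time):

from urllib.parse import urlencode

params = {
    "q": "coffee shops",  # the search query itself
    "pws": "0",           # ask for non-personalized results
    "gl": "us",           # country hint: United States
}
print("https://www.google.com/search?" + urlencode(params))
# https://www.google.com/search?q=coffee+shops&pws=0&gl=us
-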
Shopping engine optimization
I am looking for some help to learn more about shopping engine optimization for an online retailer. I would like to know about the best SEO tools for this, and also any companies that you can recommend. Thank you, Mozzers! Vijay
Technical SEO | vijayvasu
-
My URLs changed with a new CMS and now search engines see pages as 302s. What do I do?
We recently changed our CMS from PHP to .NET. The old CMS did not allow for folder structure in URLs, so every URL was www.mydomain/name-of-page. In the new CMS we either have to have .aspx at the end of the URL or a /. We opted for the /, but now my PageRank is dead, and Google Webmaster Tools says my existing links are now going through an intermediary page. Everything resolves to the right place, but it looks like spiders see our new pages as being 302-redirected.

Example of what's happening:
Old page: www.mydomain/name-of-page
New page: www.mydomain/name-of-page/

What should I do? Should I go in and 301-redirect the old pages? Will this get cleared up by itself in time?
Technical SEO | rasiadmin1
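A quick way to confirm what the old URLs actually return (a 302 versus the 301 you want) is to request them without following redirects. A stdlib sketch, with the domain as a placeholder:

import urllib.error
import urllib.request

class NoRedirect(urllib.request.HTTPRedirectHandler):
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None  # don't follow; let the redirect surface as an HTTPError

opener = urllib.request.build_opener(NoRedirect)
try:
    resp = opener.open("http://www.mydomain.com/name-of-page")  # placeholder domain
    print(resp.status)  # no redirect at all
except urllib.error.HTTPError as e:
    # A 302 here is the problem; a 301 tells engines the move is permanent.
    print(e.code, "->", e.headers.get("Location"))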