Search Engine blocked by robots.txt
-
I do that because I am using Joomla. Is that bad? Thanks.
-
Yeah, the structure of the site can get confusing.
-
Oh yes, this is good for query strings; it means the non-SEF URLs will not be crawled.
So you are good.
-
Yes, I have that.
Then put:
Block all query strings:
Disallow: /*?
If I don't put that, the crawler indexes every page on the site twice.
For example: right now I have about 400 pages indexed; if I take that rule off, it will index around 800.
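To see why that one line catches the duplicates, here is a rough sketch in Python of how a wildcard-aware crawler such as Googlebot reads Disallow: /*? (purely illustrative and simplified; this is not Google's actual parser, and the example URLs are made up rather than taken from the poster's site):

import re

def is_blocked(path_and_query, pattern):
    # Simplified Google-style robots.txt matching: '*' matches any run of
    # characters and the pattern is anchored to the start of the URL path.
    regex = "^" + re.escape(pattern).replace(r"\*", ".*")
    return re.search(regex, path_and_query) is not None

print(is_blocked("/index.php?option=com_content&id=5", "/*?"))  # True  - query-string URL, blocked
print(is_blocked("/hotels/bangkok", "/*?"))                     # False - clean SEF URL, still crawled

Every SEF page in Joomla tends to have a query-string twin, so keeping the crawler away from anything containing a "?" roughly halves the number of URLs it picks up while leaving the clean URLs untouched.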
-
This is a sample robots.txt from a Joomla site that I have:
User-agent: *
Disallow: /administrator/
Disallow: /cache/
Disallow: /includes/
Disallow: /installation/
Disallow: /language/
Disallow: /libraries/
Disallow: /media/
Disallow: /plugins/
Disallow: /templates/
Disallow: /tmp/
Disallow: /xmlrpc/
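If the goal from earlier in the thread is also to keep the query-string duplicates out, the wildcard rule can simply be appended to the same group. A hedged combination of the two snippets already posted in this thread (not a file taken from either site) would look like:

User-agent: *
Disallow: /administrator/
(...the other Joomla system folders as listed above...)
Disallow: /xmlrpc/
Disallow: /*?

Because everything sits under the single User-agent: * group, the rules apply to all compliant crawlers; older crawlers that do not understand the * wildcard treat that last line as a literal path and effectively ignore it.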
-
Just put this in robots.txt:
Block all query strings:
Disallow: /*?
It says not to index pages with that query string, so it doesn't generate duplicated pages.
Is that bad too?
Thanks.
Regards
Gabo
-
If you need your website to be indexed and to get rankings, then yes, this is bad for your website.
It means that you don't want any search engine to index your website, so people won't find you in the search results.
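For anyone reading along, the difference is visible in the file itself. A robots.txt that blocks every crawler from the whole site, which is what the question's title describes, generally looks like this (a generic illustration, not necessarily the exact file on the poster's site):

User-agent: *
Disallow: /

By contrast, the Joomla samples quoted elsewhere in this thread only disallow system folders (/administrator/, /cache/, and so on) and, optionally, query-string URLs, so search engines can still crawl and rank the real content pages. Removing the sitewide block is the first step to getting the site back into the index.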
Related Questions
-
Updating Old Content - Should I update In Search Console?
Hey Mozzers, if I'm updating old content on a site (for example, adding some copy and adding some new links to the page), is it important to get Google to recrawl it using the feature in Webmaster Tools? If you didn't do this, could you be waiting a long time for Google to recrawl the URL? Cheers!
Technical SEO | wearehappymedia
-
Why has my search traffic suddenly tanked?
On 6 June, Google search traffic to my Wordpress travel blog http://www.travelnasia.com tanked completely. There are no warnings or indicators in Webmaster Tools that suggest why this happened. Traffic from search has remained at zero since 6 June and shows no sign of recovering.

Two things happened on or around 6 June. (1) I dropped my premium theme, which was proving to be not mobile friendly, and replaced it with the ColorMag theme, which is responsive. (2) I relocated off my previous hosting service, which was showing long server lag times, to a faster host. Both of these should have improved my search performance, not tanked it.

There were some problems with the relocation to the new web host which resulted in a lot of "out of memory" errors on the website for 3-4 days. The allowed memory was simply not enough for the complexity of the site and the volume of traffic. After a few days of trying to resolve these problems, I moved the site to another web host which allows more PHP memory, and the site now appears reliably accessible for both desktop and mobile. But my search traffic has not recovered. I am wondering if in all of this I've done something that Google considers to be a cardinal sin and I can't see it.

The clues I'm seeing include:

Moz Pro was unable to crawl my site last Friday. It seems like every URL it tried to crawl was of the form http://www.travelnasia.com/wp-login.php?action=jetpack-sso&redirect_to=http://www.travelnasia.com/blog/bangkok-skytrain-bts-mrt-lines, which resulted in a 500 status error. I don't know why this happened, but I have disabled the Jetpack login function completely, just in case it's the problem.

GWT tells me that some of my resource files are not accessible by GoogleBot due to my robots.txt file denying access to /wp-content/plugins/. I have removed this restriction after reading the latest advice from Yoast, but I still can't get GWT to fetch and render my posts without some resource errors.

On 6 June I see in Structured Data in GWT that "items" went from 319 to 1478 and "items with errors" went from 5 to 214. There seems to be a problem with both the hatom and hcard microformats, but when I look at the source code they seem to be OK. What I can see in GWT is that each hcard has a node called "n [n]" which is empty, and Google is generating a warning about this. I see that this is because the author vcard URL class now says "url fn n", but I don't see why it says this or how to fix it. I also don't see that this would cause my search traffic to tank completely.

I wonder if anyone can see something I'm missing on the site. Why would Google completely deny search traffic to my site all of a sudden without notifying any kind of penalty? Note that I have NOT changed the content of the site in any significant way. And even if I did, it's unlikely to result in a complete denial of traffic without some kind of warning.
Technical SEO | Gavin.Atkinson
-
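One hedged side note on the robots.txt clue in the question above: if the intention were to keep /wp-content/plugins/ disallowed but still let Googlebot fetch the CSS and JavaScript it needs to render pages, Google's support for the Allow directive plus wildcards makes rules like these possible (generic paths for illustration, not taken from the poster's actual file):

User-agent: Googlebot
Disallow: /wp-content/plugins/
Allow: /wp-content/plugins/*.css
Allow: /wp-content/plugins/*.js

Because the Allow patterns are more specific than the Disallow, Googlebot lets them win for stylesheets and scripts. Dropping the Disallow entirely, as the poster already did, is the blunter but simpler fix.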
Big Increase in 404 Errors after Google Custom Search Engine Install on Website
My URL is: http://www.furniturefashion.com

Hi forum. I recently installed a Google Custom Search Engine (https://www.google.com/cse/) on my blog about ten days ago. Since then, my 404 errors in Webmaster Tools have skyrocketed by several thousand. I had not had an issue before; once it was installed, the 404 errors started appearing. What's interesting is that all the errors have the URL and then the word "undefined" at the end. I have attached a screen shot from my Webmaster Tools dashboard. Also, there are a few examples below of the URLs that have the 404 errors:

wood_closet_organizer_to_improve_space_utilization/undefined
small-sweet-10-inspiring-small-kitchen-designs/undefined

Has anyone had this issue? I very much want the search engine on my site, but not at the expense of several thousand 404 errors. My site queries have been going down since the installation of the custom search engine. Here is some of the code from my site, taken from a "view source". Any help would be greatly appreciated.

href='http://cdn.furniturefashion.com/wp-content/plugins/google-custom-search/css/smoothness/jquery-ui-1.7.3.custom.css?ver=3.9.2' type='text/css' media='all' />
rel='stylesheet' id='gsc_style_search_bar-css' href='http://www.google.com/cse/style/look/minimalist.css?ver=3.9.2' type='text/css' media='all' />
rel='stylesheet' id='gsc_style_search_bar_more-css' href='http://cdn.furniturefashion.com/wp-content/plugins/google-custom-search/css/gsc.css?ver=3.9.2' type='text/css' media='all' />
Technical SEO | will2112
-
Best use of robots.txt for "garbage" links from Joomla!
I recently started out on SEOmoz and am trying to do some cleanup according to the campaign report I received. One of my biggest gripes is the point of "Duplicate Page Content". Right now I have over 200 pages with duplicate page content.

Now, this is triggered because SEOmoz has snagged up auto-generated links from my site. My site has a "send to friend" feature, and every time someone wants to send an article or a product to a friend via email, a pop-up appears. It seems the pop-up pages have been snagged by the SEOmoz spider; however, these pages are something I would never want indexed in Google, so I just want to get rid of them.

Now to my question: I guess the best solution is to make a general rule via robots.txt so that these pages are not indexed or considered by Google at all. But how do I do this? What should my syntax be? A lot of the links look like this, but have different id numbers according to the product that is being sent:

http://mywebshop.dk/index.php?option=com_redshop&view=send_friend&pid=39&tmpl=component&Itemid=167

I guess I need a rule that grabs the following and makes Google ignore links that contain this: view=send_friend
Technical SEO | teleman
-
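For the Joomla "send to friend" question above, a hedged sketch of one possible rule, assuming all of those pop-up URLs carry the view=send_friend parameter as in the example:

User-agent: *
Disallow: /*view=send_friend

Crawlers that support wildcard matching, such as Googlebot and Bingbot, read this as "skip any URL containing view=send_friend", regardless of the other query-string values. Keep in mind that robots.txt stops crawling rather than guaranteeing removal from the index; already-indexed URLs can take a while to drop out, and a meta robots noindex on the pop-up template would be the stricter option if the component allows it.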
Robots.txt and Joomla
Hello, I use Joomla for my website, and all of those files are blocked automatically. Is that good or bad? Should I remove anything, and if so, why?

User-agent: *
Disallow: /administrator/
Disallow: /cache/
Disallow: /components/
Disallow: /images/
Disallow: /includes/
Disallow: /installation/
Disallow: /language/
Disallow: /libraries/
Disallow: /media/
Disallow: /modules/
Disallow: /plugins/
Disallow: /templates/
Disallow: /tmp/
Disallow: /xmlrpc/

I also added my email address to my robots.txt file (is that useful? I am afraid Google passes PR to the email address), and a javascript: void (0) because I have tabs on my webpage (is that useful?), as well as a .pdf (is it also useful?). Any comments? Does anything need to be changed, or is it OK? Thank you.
Technical SEO | seoanalytics
-
Do I need robots.txt and meta robots?
If I can manage to tell crawlers what I do and don't want them to crawl for my whole site via my robots.txt file, do I still need meta robots instructions?
Technical SEO | Nola504
-
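A generic illustration of the difference the question above is asking about (not specific to any particular site): robots.txt controls what gets crawled, while a meta robots tag controls how crawled pages are indexed.

# robots.txt - crawlers never fetch anything under /private/
User-agent: *
Disallow: /private/

<!-- meta robots, placed in the head of an individual page: the page can be
     crawled, but is kept out of the index while its links are still followed -->
<meta name="robots" content="noindex, follow">

The practical consequence is that a URL blocked only in robots.txt can still appear in results as a bare, description-less listing if other sites link to it, because the crawler never gets to read a noindex tag on it; the two mechanisms complement each other rather than replace each other.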
Matching C Block
Hi guys, we have 2 sites that are in the same niche and competing for the same keywords. The sites are on separate domains: one is a UK domain and one is a .com. They have their own IPs; however, they both share the same C block. We have noticed that when the rankings for one site improve, the other drops. Could the C block be causing this?
Technical SEO | EwanFisher
-
Google Custom Site Search
I am an admin on a Google Custom Site Search account. I am also the owner of a verified Webmaster Tools account for the same site. The Custom Search control panel will not let me add URLs or a sitemap for on-demand indexing, but says "you must submit a sitemap of your own verified sites". Has anyone else had this issue? Does the owner of the custom search account have to be the owner of the Webmaster Tools account, or can the logged-in admin be? Thanks
Technical SEO | SEMPassion