Blocking URLs with specific parameters from Googlebot
-
Hi,
I've discovered that Googlebot is voting on products listed on our website and, as a result, is creating negative ratings by placing votes from 1 to 5 on every product. The voting function is handled with JavaScript, as shown below, and the script prevents multiple votes, so most products end up locked at a vote of 1, which translates to "poor".
How do I go about using robots.txt to block only URLs with specific parameters? I'm worried that I might end up blocking the whole product listing, which would result in de-listing from Google and the loss of many highly ranked pages.
DON'T want to block:
http://www.mysite.com/product.php?productid=1234
WANT to block:
http://www.mysite.com/product.php?mode=vote&productid=1234&vote=2
JavaScript button code:
onclick="javascript: document.voteform.submit();"
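The underlying form looks roughly like this (simplified, illustrative markup only):

<form name="voteform" method="get" action="product.php">
  <input type="hidden" name="mode" value="vote">
  <input type="hidden" name="productid" value="1234">
  <input type="hidden" name="vote" value="2">
</form>

Because it submits via GET, each vote is an ordinary crawlable URL, which is presumably how Googlebot ends up casting votes despite the JavaScript.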
Thanks in advance for any advice given.
Regards,
Asim -
Good to hear, I am glad you persevered.
-
Tried them all now and they all come back with "Success"... Maybe I'll post in the WMT Forum and see if anyone can shed light on this problem. Thanks for your help, Alan, it's much appreciated.
-
Yes, correct. Did you try the other formats?
-
Tried "Fetch as Googlebot" in Diagnostics and it came back as "Success" so I guess the robots.txt directive is not working. I'm assuming it should have reported a failure message when attempting to fetch a URL containing "?mode=vote".
-
Wrong place. Go to Diagnostics, then look for Fetch as Googlebot.
-
I added "Disallow: /mode=vote" to the robots.txt file and also manually entered it on Crawler Access page, then clicked "Test" and no errors were reported. The WMT page states that robots.txt was last downloaded 16 hours ago so I'll wait until it picks the file up again and then check for any errors. Hopefully that will do trick
-
Try this in robots.txt. I did not think that Google allowed wildcards, but I just read that they do:
Disallow: /*mode=vote*
or
Disallow: /*mode=vote
or
Disallow: /*mode
Then try "Fetch as Googlebot" in Google WMT to see if it works.
The first one in the list seems right to me, but I have seen others do it the other ways.
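One detail worth noting: a Disallow line only takes effect inside a User-agent group, so a minimal complete file would look something like this sketch:

User-agent: *
Disallow: /*mode=vote

User-agent: * applies the rule to all compliant crawlers; using User-agent: Googlebot instead would scope it to Googlebot alone.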
-
Thanks for the reply. The site was developed using PHP, MySQL and JavaScript. I was hoping there was a way to do it without getting the programmers involved...
-
I don't think you are going to do it in robots.txt; rather, do a 301 from the mode=vote URL to the non-vote URL.
If you don't know how to put this into practice, tell me what your site is built with. If it is ASP.NET, I will show you how to implement it; if not, someone else should be able to help.
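If the site turns out to be PHP, a rough sketch of that redirect might look like the code below. This is illustrative only: the parameter names are taken from the URLs above, and the user-agent check is a simplifying assumption, not an exhaustive bot list.

<?php
// Near the top of product.php (sketch): 301-redirect crawler requests
// that carry mode=vote back to the plain product URL before any vote
// is recorded, so bots never trigger the voting code.
$ua = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';
$isBot = preg_match('/googlebot|bingbot|slurp/i', $ua) === 1;

if ($isBot && isset($_GET['mode']) && $_GET['mode'] === 'vote') {
    $productId = isset($_GET['productid']) ? (int) $_GET['productid'] : 0;
    header('Location: /product.php?productid=' . $productId, true, 301);
    exit;
}
?>

Checking the user agent keeps voting working for real visitors; redirecting every mode=vote request unconditionally would break the vote button for them too.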