Using Robots.txt
-
I want to block pages from being accessed or indexed by Googlebot. Can you tell me whether Googlebot will NOT access any URL that begins with my domain name, followed by a question mark, followed by any string, if I use the robots.txt below?

Sample URL: http://mydomain.com/?example

User-agent: Googlebot
Disallow: /?
-
I'm not sure that would work, but you can test it by changing your robots.txt and running a test in GWT > Health > Blocked URLs.
You might also be interested in blocking specific URL parameters (e.g. for /?sort=name&order=asc you can block the sort and order parameters) from within GWT (Configuration > URL Parameters).
Learn more about parameters: https://support.google.com/webmasters/bin/answer.py?hl=en&answer=1235687
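Per Google's robots.txt matching rules, a Disallow value is prefix-matched against the URL's path plus query string, so Disallow: /? should block http://mydomain.com/?example but not http://mydomain.com/page?sort=name. A minimal sketch of that prefix matching (ignoring the * and $ wildcard support that real robots.txt matching also has):

```python
def is_blocked(path_and_query, disallow_rules):
    """Simple prefix match of Disallow rules against a URL's path
    plus query string. Real robots.txt matching also supports
    * and $ wildcards, which are not modeled here."""
    return any(rule and path_and_query.startswith(rule)
               for rule in disallow_rules)

rules = ["/?"]  # from: Disallow: /?
print(is_blocked("/?example", rules))        # True  - "/?example" starts with "/?"
print(is_blocked("/page?sort=name", rules))  # False - the "?" is not at the start
print(is_blocked("/", rules))                # False - the bare homepage is not matched
```

This is only a model for reasoning about the rule; verifying with the Blocked URLs tester in GWT, as suggested above, is still the reliable check.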
Related Questions
-
Use Internal Search pages as Landing Pages?
Hi all. Just a general discussion question about internal search pages and using them for SEO. I've been looking at "noindex, follow" for them, but a lot of the search pages are actually driving significant traffic and revenue. I have over 9,000 search pages indexed that I was going to remove, but after reading this article (https://www.oncrawl.com/technical-seo/seo-internal-search-results/) I was wondering if any of you have had success using these pages for SEO, for example with auto-generated content. Or any success stories about using "noindex, follow" too. Thanks!
Technical SEO | Frankie-BTDublin
-
Using 410 To Remove URLs Starting With Same Word
We had a spam injection a few months ago. We successfully cleaned up the site and resubmitted to Google. I recently received a notification showing a spike in 404 errors. All of the URLs have a common word at the beginning, injected via the spam:

sitename.com/mono
sitename.com/mono.php?buy-good-essays
sitename.com/mono.php?professional-paper-writer

There are about 100 URLs in total with the same syntax, all containing the word "mono". Based on my research, it seems it would be best to serve a 410. I wanted to know what the line of .htaccess code would be to do that in bulk for any URL that has the word "mono" after sitename.com/.
Technical SEO | vikasnwu
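A hedged sketch of the .htaccess approach, assuming Apache with mod_alias available. RedirectMatch matches a regular expression against the URL path only, so the query strings in the example URLs above don't need special handling:

```apache
# Return 410 Gone for any path beginning with /mono
# (covers /mono, /mono.php, and /mono.php?anything,
# since the query string is not part of the matched path).
RedirectMatch 410 ^/mono
```

An equivalent mod_rewrite form would be RewriteRule ^mono - [G]; either way, test on a staging copy before deploying.
-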
Can you force Google to use meta description?
Is it possible to force Google to use only the Meta description put in place for a page and not gather additional text from the page?
Technical SEO | A_Q
-
Cross-Domain Canonical - Should I use it under the following circumstances?
I have a number of hyperlocal directories, where businesses get a page dedicated to them. They can add images and text, plus contact info, etc. Some businesses list on more than one of these directory sites, but use exactly the same description. I've tried asking businesses to use unique text when listing on more than one site to avoid duplication issues, but this is proving to be too much work for the business owners! Can I use a cross-domain canonical and point Google towards the strongest domain in the group of directories? What effects would this have? And is there an alternative way to deal with the duplicate content? Thanks - I look forward to hearing your ideas!
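For reference, a cross-domain canonical is the standard rel=canonical link element with the href pointing at the preferred copy on the strongest domain. A minimal sketch, with hypothetical directory domains and listing URL:

```html
<!-- In the <head> of the duplicate listing on directory-b.example,
     pointing at the preferred copy on the strongest domain -->
<link rel="canonical" href="https://directory-a.example/listings/acme-plumbing" />
```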
Technical SEO | cmaddison
-
What can I do if Google Webmaster Tools doesn't recognize the robots.txt file?
I'm working on a recently hacked site for a client, and in trying to identify exactly how the hack is running I need to use the Fetch as Googlebot feature in GWT. I'd love to use this, but it thinks the robots.txt is blocking its access, when the only thing in the robots.txt file is a link to the sitemap. Under the Blocked URLs section of GWT it shows that the robots.txt was last downloaded yesterday, but the information it shows is incorrect. Is there a way to force Google to look again?
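For comparison, a robots.txt that only references a sitemap and blocks nothing looks something like this (the sitemap URL is hypothetical); a file like this should not block Fetch as Googlebot:

```
User-agent: *
Disallow:

Sitemap: http://www.example.com/sitemap.xml
```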
Technical SEO | DotCar
-
Does anyone know how to set up and use Google+ for business?
Hi, I am trying to work out how to use Google+ to increase brand awareness and drive traffic to my site, but I am not sure how to do this. Can anyone please give me step-by-step instructions on setting it up and using it to generate traffic?
Technical SEO | ClaireH-184886
-
LinkedIn: how to use it to promote your business
Hi, I have been told about LinkedIn and how good it is, but every time I look at it, it puzzles me, and I am not sure whether it is worth joining. I am looking to promote in the UK, not abroad, and would like to know how effective it is. Even my accountant uses it, but each time I look at it I cannot get my head around how to use it to promote my business. Can anyone please let me know how much it costs to join and whether it will have any benefits for promoting my business and my sites? I look forward to hearing from you.
Technical SEO | ClaireH-184886
-
Duplicate Content and Canonical use
We have a pagination issue, which the developers seem reluctant (or unable) to fix, whereby we have three copies of the same page (at slightly different URLs) appearing on different pages of the archived article index. The indexing convention was poorly thought out by the developers and has left us with the same article on, for example, pages 1, 2 and 3 of the article index, hence the duplication. Is this a clear-cut case for using a canonical tag? I'm quite concerned this is going to have a negative impact on ranking, of course. Cheers, Martin
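This is a standard use of rel=canonical: each duplicate copy of the article declares the one URL you want indexed. A minimal sketch, with hypothetical URLs:

```html
<!-- In the <head> of each duplicate copy of the article
     (e.g. the versions linked from pages 2 and 3 of the index),
     all pointing at the single preferred URL -->
<link rel="canonical" href="https://example.com/articles/example-article" />
```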
Technical SEO | Martin_S