What's the best way to stop search results from being indexed?
-
I have a WordPress site, and I just realized that the search result pages are being indexed by Google, creating duplicate content.
What's the best way for me to stop these search result pages from being indexed without also stopping the regular, important pages and posts from being indexed?
**The typical search query looks like this:**
http://xxx.com/?s=Milnerton&search=search&srch_type
This also includes results that are linked to the "view more" option, such as:
http://xxx.com/index.php?s=viewmore
Your help would be much appreciated.
Regards,
Stef
-
You are welcome. I'd appreciate it if you mark it as answered.
-
Thank you so much, Sebastian. Will do.
-
Either get the Yoast SEO plugin, which adds a <meta name='robots' content='noindex,follow'/> tag to search result pages automatically, or, if you want to code it yourself, add a conditional check to your theme's header.php file, something like the sketch below.
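A minimal sketch of that header.php check, assuming a standard WordPress theme; it relies on WordPress's built-in is_search() conditional (the exact snippet from the original answer was lost in formatting):

```php
<?php
// Place inside the <head> section of your theme's header.php.
// is_search() is true only when WordPress is rendering a search results page,
// so the noindex tag is printed only on /?s=... result URLs.
if ( is_search() ) {
    echo "<meta name='robots' content='noindex,follow' />";
}
?>
```

Regular pages and posts are unaffected, because the tag is only output when a search results template is being rendered.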
Hope this helps.
Sebastian
Related Questions
-
Pages not indexable?
Hello, I've been trying to find out why Google Search Console finds these pages non-indexable: https://www.visitflorida.com/en-us/eat-drink.html and https://www.visitflorida.com/en-us/florida-beaches/beach-finder.html. Moz and SEMrush both crawl the pages and show no errors, but GSC comes back with "blocked by robots.txt", even though I've confirmed it is not. Anyone have any thoughts?
Technical SEO | KenSchaefer
-
Best way to handle 301 redirects on a business directory
We work with quite a few sites that promote retail traders and feature a traders' directory with pages for each of the shops (around 500 listings in most cases). On retail strips, shops come and go all the time, so a lot of pages get removed because the business is no longer present. Currently I've been doing 301 redirects to the home page of the directory if you try to access a deleted trader page, but this means an ever-growing .htaccess file with thousands of 301 redirects. Are we handling this the best way, or is there a better way to tackle this situation?
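For context, each entry in that .htaccess file would look something like this (hypothetical trader slug and directory paths, assuming an Apache server):

```apache
# Hypothetical entry: 301-redirect a removed trader page to the directory home
Redirect 301 /traders/closed-shoe-shop/ /traders/
```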
Technical SEO | Assemblo
-
Best way to create robots.txt for my website
How can I create a robots.txt file for my website, guitarcontrol.com? It has a login area and guitar lessons.
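For reference, a minimal robots.txt takes this general shape; the paths below are hypothetical placeholders, and the right rules depend entirely on the site's actual URL structure:

```
# Hypothetical example: let crawlers access everything except the login area
User-agent: *
Disallow: /login/

Sitemap: https://www.example.com/sitemap.xml
```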
Technical SEO | zoe.wilson17
-
Does the Google index have an expiration?
Hi, I have this in mind and I think you can help me. Suppose I have a page something like this: www.mysite.com/politics, where I have a list of the current month's news. Great: every time the bot checks this URL, it indexes the links that are there. But what happens next month? All those links are no longer visible to the user unless he searches in a search box or on Google. Does Google keep those links? In the current month Google sees that those links are there, but next month it does not, even though they are still live. So, my question is: does Google keep these links forever if they are live but appear nowhere on the site (the bot can't find them anymore, but they still work)? Thanks
Technical SEO | informatica810
-
No Search Results Found - Should this return status code 404?
A question came up today on how to correctly serve the right status code on pages where no search results are found. I did a couple of searches on some major ecommerce and news sites and they were ALL serving status code 200 for "No Search Results Found":
http://www.zappos.com/dsfasdgasdgadsg
http://www.amazon.com/s/ref=nb_sb_noss?url=search-alias%3Daps&field-keywords=sdafasdklgjasdklgjsjdjkl
http://www.ebay.com/sch/i.html?_trksid=p5197.m570.l1313&_nkw=dfjakljgdkslagklasd&_sacat=0
http://www.cnn.com/search/?query=sdgadgdsagas&x=0&y=0&primaryType=mixed&sortBy=date&intl=false
http://www.seomoz.org/pages/search_results?q=sdagasdgasdgasg
I thought I read somewhere that it was recommended to serve a status code 404 on these types of pages. Based on what I found above, all sites were serving a 200, so it appears this may not be the best practice. Any thoughts?
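If a site did decide to return a 404 for empty result sets, a minimal PHP sketch might look like this (run_search() and the q parameter are hypothetical stand-ins; as noted above, the large sites listed all return 200 instead):

```php
<?php
// Hypothetical sketch: send a 404 status when a search yields no results,
// while still rendering a friendly "no results found" page for the visitor.
$results = run_search($_GET['q'] ?? ''); // run_search() is a hypothetical helper returning an array
if (count($results) === 0) {
    http_response_code(404); // signal to crawlers that there is nothing to index here
}
// ...render the search results (or "no results") template as usual...
```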
Technical SEO | WEB-IRS
-
Search Result Page, Index or Not?
I believe Google doesn't want to index and show other sites' search result pages in its SERPs. So instead of adding a "noindex, follow" tag, I have changed the URL of my search result page like this:
Original: http://www.mysite.com/kb-search.aspx?=travelguide&type=wiki&s=3
To: http://www.mysite.com/travelguide/attraction-guide.html
The search result page contains the titles of the articles, short descriptions (300 chars.), and a link to each article. Does it help? Or should I add the "noindex, follow" tag instead? Help please?
Technical SEO | DigitalJungle
-
What is the most effective way of indexing a localised website?
Hi all, I have a website, www.acrylicimage.com, which provides products in three different currencies: $, £ and Euro. Currently a user can click on a flag to indicate which region they are in, or, if the user has not manually selected one, the website looks at the user's locale setting and sets the region for them. The website also has a very simple content management system which provides ever so slightly different content depending on which region the user is in. The difference in content might literally be a few words per page, like contact details or measurements (i.e. imperial vs. metric). I don't believe that Googlebot, or any other bot for that matter, sets a locale, and therefore it will only ever index the content for our default region, the UK. So, my question really is: if I need to be able to index different versions of content on the same page, is the best route to provide alternate URLs, i.e.:
/en/about-us
/us/about-us
/eu/about-us
The only potential downside I see to this is that there are currently a couple of pages with exactly the same content regardless of whether you have selected the UK or USA region; could this be considered content duplication? Thanks for your help. Al
Technical SEO | dotcentric
-
What is considered best practice today for blocking admin pages from potentially getting indexed?
What is considered best practice today for blocking pages, for instance xyz.com/admin pages, from getting indexed by the search engines or easily found? Do you recommend still disallowing them in the robots.txt file, or is robots.txt not the best place to note your /admin location because of hackers and such? Is it better to hide /admin behind an obscure name, use the noindex tag on the page, and not list it in the robots.txt file?
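For reference, the noindex approach mentioned in the question would place a tag like this in the <head> of each admin page (hypothetical markup; the /admin path comes from the question), rather than advertising the path in robots.txt:

```html
<!-- Tells crawlers not to index the admin page without listing its path
     in a publicly readable robots.txt file -->
<meta name="robots" content="noindex" />
```

Note that for a noindex tag to be seen at all, the page must not also be blocked in robots.txt, since a blocked URL is never fetched.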
Technical SEO | david-217997