How to handle (internal) search result pages?
-
Hi Mozers,
I'm not quite sure what the best way is to handle internal search pages. In this case it's for an ecommerce website with 8,000+ products, and search pages currently look like example.com/search.php?search=QUERY+HERE.
I'm leaning towards making them noindex, follow, since pages like this can easily be abused to create duplicate content and because I'd rather have the category pages rank.
How would you handle this?
-
If none of these pages are indexed, you can block them via robots.txt. But if someone else links to a search page from somewhere on the web, Google might include the URL in the index anyway, and then it'll just be a blank entry, because Google can't crawl the page and see the instruction not to index it while it's blocked via robots.txt.
-
Thanks for the quick response.
If the pages are presently not indexed, is there any advantage to noindex, follow over blocking via robots.txt?
I guess my question is whether it's better or worse to have those pages spidered (by definition, any content that appears on these pages exists somewhere else on the site, since they are search pages)... what do you think?
-
Blocking the pages via robots.txt prevents the spiders from reaching those pages. It doesn't remove those pages from the index if they are already there; it just prevents the bots from getting to them.
If you want these pages removed from the index, so they don't inflate the number of your pages the search engines hold, ideally you remove them with a noindex tag.
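To be clear on what that looks like: assuming your search results are rendered by a single template such as search.php, the usual approach would be to output a robots meta tag in the head of that template, along these lines:
<meta name="robots" content="noindex, follow">
Once the bots can crawl those pages again and see the tag, they should drop them from the index over time while still following the links to your products.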
-
Hi Mark,
Can you explain why this is better than excluding the pages via robots.txt?
-
How did it turn out? And Mark, have you done much with internal search?
-
As long as you're sure that no organic search traffic is coming in via ranked search results pages from your site, there's no harm in simply blocking those pages with the robots.txt directive I mentioned in my other reply, and then focusing all your attention on the other pages of your site.
With regards to the unique content, always try to find the time to produce unique content on the category pages, since these are the ones you mentioned you wanted to rank. Normally this is feasible provided you haven't got over 1,000 categories.
Feel free to PM me a link to your ecommerce website if you would like me to take a look at the situation in greater detail.
-
Thanks for the reply. Yes, there is some chance of duplicate content. And to be honest, the search function is not really great.
There are no visitors coming in from the search pages, since we haven't built links specifically for those pages. As for the unique content, it's hard; since we have so many products it's not really possible. We are working on optimizing our top 100 products, though.
-
I'd do exactly what you're saying: make the pages noindex, follow. If they're already indexed, you can remove the search.php pages from the engines through Webmaster Tools.
Let me know how it turns out.
-
How I would handle this depends on the performance of the ecommerce website and which entrance paths into the site convert better.
You could easily instruct search engines not to crawl the search results pages by adding the following to your robots.txt:
Disallow: /search.php?search=*
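For completeness, a Disallow rule only takes effect inside a user-agent group, so the full entry would be something along these lines (a sketch; the trailing wildcard is optional, since robots.txt rules match by URL prefix, and wildcards are only honoured by the major crawlers such as Googlebot and Bingbot):
User-agent: *
Disallow: /search.php?search=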
But is there a real likelihood of content that duplicates your actual category pages? It's unlikely in all honesty, but depending on your website content and product range, I suppose it's possible.
If many visits to your website arrive via indexed search result pages, however, I would be inclined to leave them indexed and implement measures to ensure they won't be flagged as duplicate content.
Ways to handle this sometimes depend on your ecommerce provider and its capabilities, but more often than not it's just a case of ensuring there is plenty of unique content on your category pages (as there should be), so that no other pages of your website can hinder their ranking potential.