On-site Search - Revisited (again, *zZz*)
-
Howdy Moz fans!
Okay, so there's a mountain of information out there on the webernet about internal search results... but I'm finding some contradiction and a lot of pre-2014 stuff. I'd like to hear some 2016 opinion, specifically around a couple of thoughts of my own, as well as some I've deduced from other sources. For clarity: I work on a large retail site with over 4 million products (product pages), and my predicament is this - I want Google to be able to find and rank my product pages. Yes, I can link to a number of the best ones by creating well-planned links via categorisation, silos, efficient menus etc. (done), but can I utilise site search for this purpose?
-
It was my understanding that Google bots don't/can't/won't use a search function... how could they? It's like expecting them to find your members-only area - they can't log in! How could they find and index the millions of combinations of search results without typing in "XXXXL underpants" and every other search combination? Do I really need to robots.txt my search query parameter? How/why/when would Googlebot generate that query parameter?
-
Site search is B.A.D. - I read this everywhere I go, but is it really? I've read: "It eats up all your crawl quota", "search results have no content and are classed as spam", "results pages have no value".
I want to find a positive SEO outcome from having a search function on my website, not just try to stifle Mr Googlebot. What I'm trying to learn here is what the options are, and what their outcomes are. So far I have:
**Robots.txt** - Remove the search pages from Google.
**Noindex** - Allow the crawl but don't index the search pages.
**Nofollow** - I'm not sure this is even a valid idea, but I picked it up somewhere out there.
**Just leave it alone** - Some of your search results might get ranked and bring traffic in.
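For reference, the first two options look something like this in practice (assuming your search results URLs live under `/search` with a `?q=` parameter - substitute whatever path and parameter your own site actually uses):

```text
# robots.txt - stop bots from crawling internal search results
User-agent: *
Disallow: /search
Disallow: /*?q=
```

```html
<!-- meta robots noindex - placed in the <head> of each search results page -->
<meta name="robots" content="noindex, follow">
```

One caveat worth knowing: the two don't combine. If a URL is blocked in robots.txt, Googlebot never fetches the page, so it never sees the noindex tag on it.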
It appears that each and every option has its positive and negative connotations. It'd be great to hear from this community about their experiences in this area.
-
-
Hopefully that helps you some; I know we ran into a similar situation with a client. Good luck!
-
Great idea! This has triggered a few other thoughts too... cheers, Jordan.
-
I would recommend using Screaming Frog to crawl only product-level pages and export them to a CSV or Excel doc, then copy and paste your XML sitemap into an Excel sheet. From there, clean up the sitemap, sort it down to product-level pages, and compare the two side by side to see what is missing.
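If the lists are too big to eyeball in Excel (4 million products!), the same comparison can be scripted. A rough sketch below, assuming a standard sitemap namespace and that the crawl export's URL column is named "Address" (Screaming Frog's usual header - adjust if yours differs):

```python
import csv
import xml.etree.ElementTree as ET

# Standard sitemap protocol namespace
SITEMAP_NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def sitemap_urls(path):
    """Extract every <loc> URL from an XML sitemap file."""
    tree = ET.parse(path)
    return {loc.text.strip() for loc in tree.getroot().findall("sm:url/sm:loc", SITEMAP_NS)}

def crawled_urls(path, column="Address"):
    """Read the URL column from a crawler CSV export (e.g. Screaming Frog)."""
    with open(path, newline="", encoding="utf-8") as f:
        return {row[column] for row in csv.DictReader(f)}

def missing_from_crawl(sitemap_path, crawl_path):
    """URLs listed in the sitemap that the crawl never reached."""
    return sitemap_urls(sitemap_path) - crawled_urls(crawl_path)
```

Anything `missing_from_crawl` returns is a product page you're telling Google about in the sitemap but not linking to internally - exactly the gap you'd want internal linking (rather than site search) to close.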
The other option would be to go into Google Webmaster Tools (now Search Console), look at Google Index -> Index Status, then click the Advanced tab and see what is indexed and what is being blocked by the robots.txt.
-
@jordan & @matt,
I had done this; it was my initial go-to idea and implementation, and I completely agree this is a solution.
I guess I was hoping to answer the question "can Google even use site search?", as this would settle whether the parameter even needs excluding in robots.txt (I suspect it somehow does, as there wouldn't be this much noise about it otherwise).
That leaves the current situation: does restricting Google from crawling my internal search results hinder its ability to find and index my product pages? I'd argue it does, as since implementing this six months ago, the site's index status has gone from 5.5m to 120k pages.
However, this could even be a good thing, as it lowers the Googlebot activity requirement and should focus crawling on the stronger pages... but the holy grail I'm trying to achieve here is to get all my products indexed so I can get a few hits a month from each; I'm not trying to get the search results indexed.
-
Agree with Jordan - block the search parameter in robots.txt and forget it. It won't bring search traffic in; it shouldn't get crawled, and if it does, it's always a negative.
-
I can't speak for everyone, but generally we like to robots.txt the search pages. Since you're working on a large retail site, you'll want to ensure your other pages get indexed properly, so blocking the search pages with robots.txt should suffice. I would also look for common recurring searches through the site search to possibly build content around as well.
I hope that helps some.