On-site Search - Revisited (again, *zZz*)
-
Howdy Moz fans!
Okay, so there's a mountain of information out there on the webernet about internal search results... but I'm finding some contradiction and a lot of pre-2014 stuff. I'd like to hear some 2016 opinion, specifically around a couple of thoughts of my own, as well as some I've deduced from other sources. For clarity, I work on a large retail site with over 4 million products (product pages), and my predicament is this - I want Google to be able to find and rank my product pages. Yes, I can link to a number of the best ones by creating well-planned links via categorisation, silos, efficient menus etc. (done), but can I utilise site search for this purpose?
-
It was my understanding that Google's bots don't/can't/won't use a search function... how could they? It's like expecting them to find your members-only area - they can't log in! How could Google find and index the millions of combinations of search results without typing in "XXXXL underpants" and every other search combination? Do I really need to robots.txt my search query parameter? How/why/when would Googlebot generate that query parameter?
-
Site search is B.A.D. - I read this everywhere I go, but is it really? I've read: "It eats up all your crawl quota", "search results have no content and are classed as spam", and "results pages have no value".
I want to find a positive SEO outcome from having a search function on my website, not just try to stifle Mr. Googlebot. What I'm trying to learn here is what the options are and what their outcomes are. So far I have:
- **Robots.txt** - remove the search pages from Google's crawl entirely.
- **Noindex** - allow the crawl, but don't index the search pages.
- **Nofollow** - I'm not sure this is even a valid idea, but I picked it up somewhere out there.
- **Just leave it alone** - some of your search results might get ranked and bring traffic in.
It appears that each and every option has its positive and negative connotations. It'd be great to hear from the community on their experiences with this.
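For reference, the first two options might look roughly like this, assuming the search results live under a `/search` path with a `q=` parameter (adjust for your actual URL pattern):

```
# robots.txt - block crawling of internal search results
User-agent: *
Disallow: /search

# If search runs off a query parameter on any path, block the parameter instead:
# Disallow: /*?q=
```

The noindex option is instead a meta tag in the `<head>` of each results page, e.g. `<meta name="robots" content="noindex, follow">`. One caveat worth knowing: Google has to be able to crawl a page to see its noindex tag, so blocking a URL in robots.txt and noindexing it are mutually exclusive on the same URLs.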
-
-
Hopefully that helps you some. I know we ran into a similar situation for a client. Good luck!
-
Great idea! This has triggered a few other thoughts too... cheers Jordan.
-
I would recommend using Screaming Frog to crawl only product-level pages and export them to a CSV or Excel doc, then copy and paste your XML sitemap into an Excel sheet. From there I would clean up the XML sitemap, sort it by product-level pages, and compare the two side by side to see what is missing.
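On a 4-million-product site that side-by-side comparison could also be scripted rather than done by eye. A minimal sketch, assuming a Screaming Frog export saved as a CSV with the URLs in an `Address` column, and a standard XML sitemap file (both filenames here are hypothetical):

```python
import csv
import xml.etree.ElementTree as ET

# Standard sitemap protocol namespace
SITEMAP_NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def sitemap_urls(path):
    """Collect every <loc> URL from an XML sitemap file."""
    tree = ET.parse(path)
    return {loc.text.strip() for loc in tree.iterfind(".//sm:loc", SITEMAP_NS)}

def crawled_urls(path, column="Address"):
    """Collect URLs from a crawler CSV export (Screaming Frog uses 'Address')."""
    with open(path, newline="") as f:
        return {row[column] for row in csv.DictReader(f)}

def compare(crawl_csv, sitemap_xml):
    """Return (crawled but not in sitemap, in sitemap but not crawled)."""
    crawled = crawled_urls(crawl_csv)
    listed = sitemap_urls(sitemap_xml)
    return crawled - listed, listed - crawled
```

Calling `compare("crawl.csv", "sitemap.xml")` gives you the two "missing" sets directly, which scales better than eyeballing spreadsheet columns.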
The other option would be to go into Google Webmaster Tools (Search Console), look at Google Index -> Index Status, click the Advanced tab, and see what is indexed and what is being blocked by the robots.txt.
-
@jordan & @matt,
I had done this - it was my initial go-to idea and implementation - and I completely agree it's a solution.
I guess I was hoping to answer the question "can Google even use site search?", as this would answer whether the parameter even needs excluding from robots.txt (I suspect it somehow does, as there wouldn't be this much noise about it otherwise).
That leaves the current situation - does restricting Google from crawling my internal search results hinder its ability to find and index my product pages? I'd argue it does: since implementing this 6 months ago, the site's index status has gone from 5.5m to 120k.
However, this could even be a good thing, as it lowers the Googlebot activity requirement and should focus crawling on the stronger pages... but the holy grail I'm trying to achieve here is to get all my products indexed so I can get a few hits a month from each; I'm not trying to get the search results indexed.
-
Agree with Jordan - block the search parameter in robots.txt and forget it. It won't bring search traffic in; it shouldn't get crawled, and if it does, it's always a negative.
-
I can't speak for everyone, but generally we like to robots.txt the search pages. Since you are working on a large retail site, you'll want to ensure your other pages get indexed properly, so blocking the search pages with robots.txt should suffice. I would also look for common recurring searches through the site search to possibly build content around as well.
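That last idea is easy to act on: if you can export the internal search queries one per line (from your analytics, say), finding the recurring ones is just a frequency count. A small sketch, with the normalisation rule (lowercase, trimmed) as an assumption you may want to adjust:

```python
from collections import Counter

def top_queries(lines, n=20):
    """Normalise search queries (lowercase, trimmed) and return the n most frequent."""
    counts = Counter(q.strip().lower() for q in lines if q.strip())
    return counts.most_common(n)
```

Feed it the lines of the exported log, e.g. `top_queries(open("queries.txt"))`, and the top results are your candidate topics for new category or content pages.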
I hope that helps some.