On-site Search - Revisited (again, *zZz*)
-
Howdy Moz fans!
Okay, so there's a mountain of information out there on the webernet about internal search results... but I'm finding some contradictions and a lot of pre-2014 stuff. I'd like to hear some 2016 opinions, specifically around a couple of thoughts of my own, as well as some I've deduced from other sources. For clarity, I work on a large retail site with over 4 million products (product pages), and my predicament is thus - I want Google to be able to find and rank my product pages. Yes, I can link to a number of the best ones by creating well-planned links via categorisation, silos, efficient menus etc. (done), but can I utilise site search for this purpose?
-
It was my understanding that Googlebot doesn't/can't/won't use a search function... how could it? It's like expecting it to find your members-only area: it can't log in! How can it find and index the millions of combinations of search results without typing in "XXXXL underpants" and all the other search combinations? Do I really need to robots.txt my search query parameter? How/why/when would Googlebot generate that query parameter?
-
Site search is B.A.D - I read this everywhere I go, but is it really? I've read: "It eats up all your crawl quota", "search results have no content and are classed as spam", "results pages have no value".
I want to find a positive SEO output to having a search function on my website, not just try and stifle Mr Googlebot. What I am trying to learn here is what the options are, and what are their outcomes? So far I have -
- **Robots.txt** - block Googlebot from crawling the search pages altogether.
- **Noindex** - allow the crawl, but keep the search pages out of the index.
- **Nofollow** - I'm not sure this is even a valid idea, but I picked it up somewhere out there.
- **Just leave it alone** - some of your search results might get ranked and bring traffic in.
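For reference, a minimal sketch of the robots.txt option, assuming the search lives at /search with a "q" query parameter (substitute your actual path and parameter name):

```text
# robots.txt - keep crawlers out of internal search results
# (assumes the search parameter is "q"; adjust to your site)
User-agent: *
Disallow: /search
Disallow: /*?q=
```

The noindex option is instead a `<meta name="robots" content="noindex">` tag in the head of each results page. One caveat worth knowing when weighing these up: the two don't combine - if robots.txt blocks a page, Googlebot never fetches it, so it never sees that page's noindex tag.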
It appears that each and every option has its positives and negatives. It'd be great to hear from this here community about their experiences in this area.
-
-
Hopefully that helps you some; I know we ran into a similar situation with a client. Good luck!
-
Great idea! This has triggered a few other thoughts too... cheers Jordan.
-
I would recommend using Screaming Frog to crawl only product-level pages and export them to a CSV or Excel doc, then copy and paste your XML sitemap into an Excel sheet. From there I would clean up the sitemap, sort it by product-level pages, and compare the two side by side to see what is missing.
The other option would be to go into Google Webmaster Tools / Search Console and look at Google Index -> Index Status, then click the Advanced tab and see what is indexed and what is being blocked by robots.txt.
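With 4 million products, the side-by-side comparison is easier to script than to eyeball in Excel. A rough sketch - the file names and the "Address" column header are assumptions based on a typical Screaming Frog export:

```python
# Sketch: find sitemap URLs the crawler never reached.
# Assumes a Screaming Frog-style CSV with an "Address" column
# and a standard XML sitemap; adjust names to your exports.
import csv
import xml.etree.ElementTree as ET

def urls_from_crawl_csv(path):
    """Read the 'Address' column from a crawl export CSV."""
    with open(path, newline="") as f:
        return {row["Address"] for row in csv.DictReader(f)}

def urls_from_sitemap(path):
    """Pull every <loc> out of an XML sitemap file."""
    ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
    tree = ET.parse(path)
    return {loc.text.strip() for loc in tree.getroot().findall(".//sm:loc", ns)}

def missing_from_crawl(crawl_urls, sitemap_urls):
    """Product URLs listed in the sitemap but never crawled."""
    return sorted(sitemap_urls - crawl_urls)
```

Anything `missing_from_crawl` returns is a product page your internal linking isn't reaching - exactly the gap the original question is worried about.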
-
@jordan & @matt,
I had already done this; it was my initial go-to idea and implementation, and I completely agree it's a solution.
I guess I was hoping to answer the question "can Google even use site search?", as that would answer whether the parameter even needs excluding in robots.txt (I suspect it somehow does, as there wouldn't be this much noise about it otherwise).
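On the "can Google even use site search?" point: Googlebot doesn't type queries, but if the search form submits via GET, every query is just a URL - so any results URL that gets linked to anywhere (forum posts, shared links, your own "related searches" links) is crawlable like any other page, and Google has also said in the past that it may experimentally submit simple GET forms. A hypothetical example:

```html
<!-- A GET search form: submitting it only builds a URL.
     Googlebot never needs to "type" a query - any link to
     /search?q=socks that exists on the web is as crawlable
     as a normal page. -->
<form action="/search" method="get">
  <input type="text" name="q">
  <button type="submit">Search</button>
</form>
```

That's how the query parameter ends up in the crawl path even though no bot is sitting there typing "XXXXL underpants".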
That leaves the current situation: does restricting Google from crawling my internal search results hinder its ability to find and index my product pages? I'd argue it does, as since implementing this 6 months ago the site's index status has gone from 5.5m to 120k.
However, this could even be a good thing, as it lowers the Googlebot activity requirement and should focus crawling on the stronger pages... but the holy grail I'm trying to achieve here is to get all my products indexed so I can get a few hits a month from each; I'm not trying to get the search results indexed.
-
Agree with Jordan - block the search parameter in robots.txt and forget it. It won't bring search traffic in; it shouldn't get crawled, and if it does, it's always a negative.
-
I can't speak for everyone, but generally we like to robots.txt the search pages. Since you are working on a large retail site, you'll want to ensure your other pages get indexed properly, so blocking the search pages with robots.txt should suffice. I would also look for common recurring searches in the site search data to possibly build content around as well.
I hope that helps some.