How do internal search results get indexed by Google?
-
Hi all,
Most of the URLs created by the internal search function of a website/web shop shouldn't be indexed, since they create duplicate content or waste crawl budget.
The standard way to go is to add a 'noindex, follow' robots meta tag to these pages, or sometimes to use robots.txt to disallow crawling of them.
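For illustration, here's a minimal sketch of both options, assuming the internal search lives under a /search path (the path and parameter are just examples - your platform may differ):

    Option 1 - robots meta tag in the head of each search results page:
    <meta name="robots" content="noindex, follow">

    Option 2 - robots.txt rule blocking crawling of the whole search area:
    User-agent: *
    Disallow: /search

Note that these do different things: the meta tag lets Google crawl the page but asks it not to index it, while the robots.txt rule stops crawling altogether.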
The first question I have is how these pages would actually get indexed in the first place if you didn't use one of the options above. Crawlers follow links to discover and index a website's pages. If a random visitor comes to your site and uses the search function, this creates a URL. There are no links leading to this URL, it is not in a sitemap, it can't be reached by navigating the website... so how can search engines index these URLs that were generated by using an internal search function?
Second question: let's say somebody embeds a link on their website pointing to a URL from your website that was created by an internal search. Now let's assume you used robots.txt to make sure these URLs weren't indexed, which means Google won't even crawl those pages. Is it possible then that the link used on the other website will lead to an empty page after a while, since Google doesn't even crawl this page?
Thanks for your thoughts guys.
-
Firstly (and I think you understand this, but for the benefit of others who find this page later): any user landing on the actual page will see its full content - robots.txt has no effect on their experience.
What I think you're asking about here is: if Google has previously indexed a page properly, by crawling it and discovering its content, and then you block it in robots.txt, what will it look like in the SERPs?
My expectation is that:
- It will appear in the SERPs as it used to - with meta information / title etc. - at least until Google would have recrawled it anyway, and possibly for a bit longer if Google is slow to recrawl it after the robots.txt is updated
- Eventually, it will either drop out of the index, or it may remain but with the "no information is available" message that shows up when a page is blocked in robots.txt from the outset yet gets indexed anyway
-
Hi Will,
Thanks for the clear answer. Both solutions do have pros and cons.
The only question left is whether somebody could get an empty page (so, without any content on it) after a while when following an external link to one of your internal search URLs, if this URL were blocked by robots.txt. Search engines wouldn't crawl these pages but would still be able to index them, because they follow the link. Or do a URL and its content stay available and visible once generated, no matter whether it is crawlable or indexable? This is maybe a bit out there and it would surprise me, but in a short article I came across, John Mueller says:
"One thing maybe to keep in mind here is that if these pages are blocked by robots.txt, then it could theoretically happen that someone randomly links to one of these pages. And if they do that then it could happen that we index this URL without any content because its blocked by robots.txt. So we wouldn’t know that you don’t want to have these pages actually indexed."
In theory, this could then be the case for all URLs that are blocked by robots.txt but get external links.
What's your view on this?
-
I think you could legitimately take either approach, to be honest. There isn't a perfect solution that avoids all possible problems, so I guess it's a combination of picking which risk you are more worried about (pages getting indexed when you don't want them to, or crawl budget - which probably depends on the size of your site) and possibly considering difficulty of implementation etc.
In light of the fact that we've heard noindex,follow will eventually be treated as equivalent to noindex,nofollow, that does dampen the benefits of that approach, but it doesn't entirely negate them.
I'm not totally sold on the phrasing in the Yoast article - I wouldn't call it Google "ignoring" robots.txt; it just serves a different purpose. Google is respecting the "do not crawl" directive, but that has never guaranteed that they wouldn't index a page if it got external links.
I personally might lean towards the robots.txt solution on larger sites where crawl budget is the primary concern - just because it wouldn't be the end of the world if (some of) these pages got indexed through external links. The only reason we were trying to keep them out was for Google's benefit, so if Google wants to index them despite the robots block, it wouldn't keep me awake at night.
Whatever route you go down, good luck!
-
Thanks for the good answers guys, really helpful! It's very clear now how these internal search URLs end up being indexed.
So is 'noindex, follow' always the best solution for URLs generated by internal searches, even though it uses crawl budget and blocking via robots.txt doesn't?
You could say that the biggest advantage of 'noindex, follow' is the preservation of link juice, but John Mueller states that Google treats 'noindex, follow' the same as 'noindex, nofollow' after a while (see this article).
According to this article from Yoast, the most important reason to use 'noindex, follow' is that Google mostly takes it into account, whereas it sometimes ignores the robots.txt.
Maybe this interesting article gives the real reason. If I understand it correctly, somebody could get an empty page after a while when following a link on another website to one of these internal search URLs, if this URL were blocked by robots.txt. Search engines wouldn't crawl these pages but would still be able to index them, because they follow the link. Or do a URL and its content stay available and visible once generated, no matter whether it is crawlable or indexable?
And an additional remark: I came across some big web shops that add a canonical tag on a search results page, pointing to the category URL to which the specific search is related. So if you search, for example, for 'black laptops', the canonical version of the search results page would be example.com/laptops. If you don't index the search results pages and the links will eventually be treated as 'nofollow', then these pages don't create any value, so what is the point of using canonical tags? On top of that, using canonicals and 'noindex' together should be avoided, according to John Mueller: Google will mostly pick rel=canonical over 'noindex', so this could be an extra reason for internal search URLs being indexed, even when they carry the 'noindex' robots tag.
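To make that conflict concrete, the head of such a search results page would contain something like this (hypothetical URLs, modelled on the example above):

    <!-- on example.com/search?q=black+laptops -->
    <meta name="robots" content="noindex, follow">
    <link rel="canonical" href="https://example.com/laptops">

The noindex says "keep this page out of the index" while the canonical says "this page is equivalent to /laptops" - two contradictory signals on one URL.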
Thanks!
-
These are great additions. I am particularly interested in point #1 - I had always suspected Google might try to predict, visit or penetrate URLs in other ways, but I didn't know any of the specifics.
-
This is a good answer. I'd add two small additional notes:
- Google is voracious in URL discovery. Even without any links to a page or any of the other mechanisms described here, we have seen instances of URLs being discovered from other sources (think: Chrome usage data, crawling of common path patterns, etc.)
- On the description at the end of the answer about robots.txt: I wouldn't describe it as Google "ignoring" the no-crawl directive - they will still obey that and won't crawl the page - it's just that they can index pages they haven't crawled. Note that this is why you shouldn't combine a robots.txt block with noindex tags: Google won't be able to crawl the page to discover the tags, and so may still index it.
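To illustrate that second note, here's a sketch of the broken combination (the /search path is a made-up example):

    # robots.txt - blocks Googlebot from ever fetching the page...
    User-agent: *
    Disallow: /search

    <!-- ...so this tag on /search?q=whatever is never seen -->
    <meta name="robots" content="noindex">

If such a URL picks up external links, Google can still index the bare URL (the "no information is available" listing) precisely because the noindex was never read. If you want the noindex respected, the pages have to stay crawlable.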
-
Actually, quite often there are links to pages of search results. Sometimes webmasters link to them when there's no decent, official page available for a series of products they wish to promote internally (so they just write a query that captures what they want and link to that instead, from CTA buttons, promotional pop-outs and the like).
Even when that's not the case, users often share search results with each other on forums and similar places. Quite often, even when you think there are 'no links' (internal or external) to a search results page, you can end up being wrong.
Sometimes you also have things like related search results hidden in the code of a web page, which don't 'activate' until a user begins typing (instant search facilities and the like). If coded badly, then even when the user has entered nothing, a cloaked default list of related searches can appear in the source code or modified source code (after scripts have run), and occasionally Google can get caught up there too.
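As a hypothetical sketch of that kind of leak: an instant-search widget that ships its default suggestions in the HTML, hidden from users until they start typing but fully visible to anything parsing the source:

    <!-- hidden from users, but present in the rendered source -->
    <div id="search-suggestions" style="display: none;">
      <a href="/search?q=black+laptops">black laptops</a>
      <a href="/search?q=gaming+laptops">gaming laptops</a>
    </div>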
Another problem that can occur is certain search results pages accidentally ending up in the XML sitemap, but that's another kettle of fish entirely.
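For example (hypothetical), an auto-generated sitemap that sweeps up every known URL can end up with entries like:

    <url>
      <loc>https://example.com/search?q=black+laptops</loc>
    </url>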
Sometimes you can have lateral indexation tags (canonical tags, hreflangs) going rogue too. If a page exists in one language but not another, the site may be programmed to 'do something clever' to find relevant content, and in some cases these tags end up re-pointed at search result URLs to 'mask' the error of non-uniform multilingual deployment. Custom 404 pages can sometimes try to 'be helpful' by attempting to find similar content for end users and, in some cases, end up linking to search results (which means that if Google follows a broken link and lands on the custom 404 page, Googlebot can end up entering the /search area of a website).
You'd be surprised at the number of search results URLs which are linked to on the web, internally or externally.
Remember: robots.txt doesn't control indexation; it only controls crawl accessibility. If Google believes a URL is popular (link signals), they may index the URL anyway, without ever crawling it. Robots.txt isn't really the type of defense you can '100% rely upon'.