Block search engines from URLs created by internal search engine?
-
Hey guys,
I've got a question for you all that I've been pondering for a few days now. I'm currently doing a technical SEO audit for a large-scale directory.
One major issue they're having is that their internal search system (Directory Search) creates a new URL every time a user enters a search query. This creates huge amounts of duplication on the website.
I'm wondering if it would be best to block search engines from crawling these URLs entirely with robots.txt.
What do you guys think, bearing in mind there are probably thousands of these pages already in the Google index?
Thanks
Kim
-
That sounds perfect - if any of the user-generated URLs are getting enough traffic, make them permanent pages and 301-redirect or canonical-tag their duplicates to them. If not, weed them out of the index.
-
Thanks for your reply, Dr. Meyers. I think you're probably right.
Yes, I'm recommending they define a canonical set of pages covering the most popular searches, categories, and locations, all reachable via internal links, and then 301-redirect all of the duplicates back to that canonical set.
For pages that fall outside those categories and locations, I'll recommend a meta noindex tag.
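For reference, the tag Kim describes would sit in the head of every search-result page outside the canonical set. A generic sketch (not the site's actual markup):

```html
<!-- In the <head> of each non-canonical search-result page -->
<meta name="robots" content="noindex, follow">
```

Using "noindex, follow" rather than "noindex, nofollow" asks Google to drop the page from the index while still following its links through to the canonical listings.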
-
It can be a complicated question on a very large site, but in most cases I'd META NOINDEX those pages. Robots.txt isn't great at removing content that's already been indexed. Admittedly, NOINDEX will take a while to work (virtually any solution will), as Google probably doesn't crawl these pages very often.
Generally, though, the risk of having your index explode with custom search pages is too high for a site like yours (especially post-Panda). I do think blocking those pages somehow is a good bet.
The only exception I would add is if some of the more popular custom searches are getting traffic and/or links. I assume you have a solid internal link structure and other paths to these listings, but if it looks like a few searches (or a few dozen) have attracted traffic and back-links, you'll want to preserve those somehow.
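To illustrate the robots.txt point above: a disallow rule only prevents future crawling, so URLs that are already indexed can linger in results. The path here is hypothetical, standing in for whatever prefix the directory's search URLs share:

```text
# robots.txt - stops crawling, but does NOT remove URLs already in the index
User-agent: *
Disallow: /search
```

That's why a meta noindex (which requires the page to remain crawlable so the tag can be seen) is usually the better removal mechanism in this situation.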
-
Sure, check below for some examples of the duplication I mean:
Capitalization Duplication
http://yellow.co.nz/yellow+pages/Car+dealer/Auckland+Region
http://yellow.co.nz/yellow+pages/Car+Dealer/Auckland+Region
With a few URL parameters
And with location duplication
http://yellow.co.nz/yellow+pages/Car+Dealer/Auckland
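For illustration, capitalization and parameter variants like these could be collapsed server-side into a single canonical URL before issuing the 301. A minimal sketch using only the Python standard library (illustrative normalization rules, not the site's actual ones):

```python
from urllib.parse import urlsplit, urlunsplit

def canonicalize(url: str) -> str:
    """Collapse capitalization and query-parameter variants of a
    directory URL into one canonical form, suitable as the target
    of a 301 redirect. Illustrative sketch only."""
    parts = urlsplit(url)
    # Lowercase host and path so /Car+Dealer/ and /Car+dealer/ converge,
    # and drop query parameters entirely.
    return urlunsplit((parts.scheme, parts.netloc.lower(),
                       parts.path.lower(), "", ""))

# Example: both capitalization variants map to the same canonical URL
canonicalize("http://yellow.co.nz/yellow+pages/Car+Dealer/Auckland+Region")
# -> "http://yellow.co.nz/yellow+pages/car+dealer/auckland+region"
```

In practice the same normalization would live in the web server or application layer, emitting a 301 whenever the requested URL differs from its canonical form.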
Let me know if you need any more info!
Cheers
Kim
-
What's the content look like on the new URL? Can you give us an example?