Wordtracker vs Google Keyword Tool
-
When I find keyword opportunities in Wordtracker, I'll sometimes run them through the AdWords Keyword Tool, only to find that Google says these keywords have 0 search volume. Would you use these keywords even though Google says users aren't searching for them?
-
To answer your question about the differences between Wordtracker and the AdWords Keyword Tool specifically, I examined the Wordtracker site and performed a keyword search for the phrase "depression and bipolar link". It showed 34 searches. To better understand what that result meant, I searched the site and located the following explanation:
"For the Wordtracker data, the Search count is the number of times each keyword appears in our database of searches over the past 365 days. This constitutes just under 1% of all US search, and the data is gathered from metacrawler.com and dogpile.com."
There are two key differences between AdWords and Wordtracker data. First, AdWords clearly has a much larger data source, so it should be more accurate. Second, AdWords presents monthly searches, whereas Wordtracker reports yearly searches. Converted to a monthly figure, Wordtracker's 34 yearly searches for "depression and bipolar link" work out to roughly 3 searches per month. Since the result is less than 100, Google rounds it to 0.
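As a quick sketch of that arithmetic (the 100-search rounding floor below is inferred from the behavior described above, not a documented AdWords constant):

```python
# Sketch of the yearly-to-monthly conversion described above.
# GOOGLE_ROUNDING_FLOOR is an assumption inferred from the observed
# behavior, not a documented AdWords value.

GOOGLE_ROUNDING_FLOOR = 100  # below this, AdWords appears to display 0

def monthly_from_yearly(yearly_searches):
    """Convert a Wordtracker yearly count into a monthly figure."""
    return yearly_searches / 12

def adwords_displayed(monthly_searches):
    """Mimic the rounding behavior described above."""
    return 0 if monthly_searches < GOOGLE_ROUNDING_FLOOR else round(monthly_searches)

monthly = monthly_from_yearly(34)  # "depression and bipolar link"
print(f"{monthly:.1f} searches/month -> AdWords displays {adwords_displayed(monthly)}")
# Output: 2.8 searches/month -> AdWords displays 0
```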
You are reaching for very long-tail phrases, and you will capture other keywords and shorter phrases in the process.
For example, while AdWords shows no traffic for "depression and bipolar link", the phrase "depression and bipolar" shows 165k monthly searches with medium competition. If I were to create a page, I would focus the article on "depression and bipolar". If you really wish to keep the focus on "depression and bipolar link", you can do so knowing you will capture traffic from other versions of the phrase.
-
Here are a couple that show fairly decent search volume on Wordtracker and 0 in the AdWords Keyword Tool:
multiple sclerosis links with bipolar disorder
ank3 and bipolar disorder
depression and bipolar link
Thanks!
-
Can you share an example?
Related Questions
-
Keyword cannibalization
Hi, I have two questions regarding keyword cannibalization.

1. I am doing the SEO for a website that sells do-it-yourself packages for heating, bathrooms, ventilation and so on, for new houses or for renovations. The most important pages are the product pages (e.g. example.com/products/bathrooms), but there is also a blog divided into categories per product (e.g. example.com/category/bathrooms). The difference is clear: the product page focuses on the product itself, and the blog category page contains all blog posts relating to bathrooms (tips, new materials, new innovations,...). My question is whether the product page and the blog category page can compete with each other for the term bathrooms (although they have different content). Does it help, or is it enough, to direct internal links from separate blog posts to the most important page (being the product page) and back, to keep my blog category page from competing with my product page? Another possibility would be to use a canonical tag on the category page pointing to the product page, but this actually isn't good practice because it isn't really duplicate content. A third possibility would be to noindex the category page. So which of the three is the best solution?

2. A second example of keyword cannibalization can be category archive pages for webshops. If you have a category page example.com/jeans and a subcategory page example.com/jeans/women, is it useful to optimize the two pages for different terms, jeans for the first and jeans for women for the second, or will Google not make this distinction because the keywords are too closely related? In other words, is it useful to write content specifically for jeans for women and make a landing page for this keyword, or will this page compete with the category page that has been optimized for just the keyword jeans? In large clothing webshops, you can see for example that there is an optimized page for Nike (content, headings,...) but not for Nike for women or Nike for men. Is this just laziness, or is it done exactly to avoid keyword cannibalization?

Looking forward to your comments!
Intermediate & Advanced SEO | Mat_C0
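A small diagnostic sketch related to the options weighed in the question above: it fetches a page and reports whether it currently sends a rel=canonical link or a robots noindex meta tag. The URLs are the question's hypothetical examples, and it assumes the requests and beautifulsoup4 packages are installed:

```python
# Diagnostic sketch: report which indexing signals a page currently sends.
# URLs are hypothetical placeholders taken from the question above.
# Requires: pip install requests beautifulsoup4

import requests
from bs4 import BeautifulSoup

def indexing_signals(url):
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    canonical = soup.find("link", rel="canonical")
    robots = soup.find("meta", attrs={"name": "robots"})
    return {
        "canonical": canonical.get("href") if canonical else None,
        "robots": robots.get("content") if robots else None,
    }

for page in ("https://example.com/products/bathrooms",
             "https://example.com/category/bathrooms"):
    print(page, indexing_signals(page))
```
-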
Can I find out which keywords lost their top rankings on Google a year ago if the client didn't track keyword rankings for his website?
Hi, Can I find out which keywords lost their top rankings on Google a year ago if the client didn't track keyword rankings for his website? Thanks, Roy
Intermediate & Advanced SEO | kadut1
-
Homepage disappeared from Google
Hello, For the past two weeks, our website has been losing positions in Google. After years on the first page, we dropped to the 3rd page for our main keyword. It seems that all the positions we lost were ranking with the homepage. Now we are on the 3rd page, but with a less important page. How is it possible that only the homepage disappeared? Is there any explanation for that? I hope there is, so we can fix the trouble. Kind regards, Tine
Intermediate & Advanced SEO | TineDL0
-
Subdomains vs. Subfolders vs. New Site
Hello geniuses!!! Here's my Friday puzzle: We have a plastic surgery client who already has a website that's performing fairly well and is driving in leads. She is going to be offering a highly specialized skincare program for cancer patients, and wants a new logo, new website and new promo materials all for this new skincare program.

So here's the thing - my gut reaction says NO NEW WEBSITE! NO SUBDOMAIN! because of everything I've read about moving things on and off subdomains, etc. (I just studied this: http://moz.com/blog/subdomains-vs-subfolders-rel-canonical-vs-301-how-to-structure-links-optimally-for-seo-whiteboard-friday). And, why wouldn't we want to use the authority of her current site, right? While she doesn't necessarily have a high authority domain - we're not talking WebMD, here - she does have some authority that we've built over time. But, because this is a pretty separate product from her general plastic surgery practice, what would you guys do? Since we'll be creating a logo and skincare "look and feel" for this product, and there will likely be a lot of information involved with it, I don't think we'll be able to just create one page. Is it smart to:

a) build a separate site in a subfolder of her current site? (plasticsurgerypractice.com/skincare)
b) build a subdomain? (skincare.plasticsurgerypractice.com)
c) build her a new site? (plasticsurgeryskincare.com)
Intermediate & Advanced SEO | RachelEm0
-
Multiple Google Webmaster Tools Configurations
Hello everyone, I just inherited a website and 2 different users created GWT accounts on the same site and have configured different settings. Do you know how Google behaves when this happens? Thanks
Intermediate & Advanced SEO | Carla_Dawson0
-
Avoiding Duplicate Content with Used Car Listings Database: Robots.txt vs Noindex vs Hash URLs (Help!)
Hi Guys, We have developed a plugin that allows us to display used vehicle listings from a centralized, third-party database. The functionality works similar to autotrader.com or cargurus.com, and there are two primary components:

1. Vehicle Listings Pages: this is the page where the user can use various filters to narrow the vehicle listings to find the vehicle they want.
2. Vehicle Details Pages: this is the page where the user actually views the details about said vehicle. It is served up via Ajax, in a dialog box on the Vehicle Listings Pages. Example functionality: http://screencast.com/t/kArKm4tBo

The Vehicle Listings pages (#1) we do want indexed and to rank. These pages have additional content besides the vehicle listings themselves, and those results are randomized or sliced/diced in different and unique ways. They're also updated twice per day. We do not want to index #2, the Vehicle Details pages, as these pages appear and disappear all of the time based on dealer inventory, and don't have much value in the SERPs. Additionally, other sites such as autotrader.com, Yahoo Autos, and others draw from this same database, so we're worried about duplicate content. For instance, entering a snippet of dealer-provided content for one specific listing that Google indexed yielded 8,200+ results: Example Google query. We did not originally think that Google would even be able to index these pages, as they are served up via Ajax. However, it seems we were wrong, as Google has already begun indexing them. Not only is duplicate content an issue, but these pages are not meant for visitors to navigate to directly! If a user were to navigate to the URL directly from the SERPs, they would see a page that isn't styled right. Now we have to determine the right solution to keep these pages out of the index: robots.txt, noindex meta tags, or hash (#) internal links.

Robots.txt Advantages:
- Super easy to implement
- Conserves crawl budget for large sites
- Ensures the crawler doesn't get stuck. After all, if our website only has 500 pages that we really want indexed and ranked, and vehicle details pages constitute another 1,000,000,000 pages, it doesn't seem to make sense to make Googlebot crawl all of those pages.

Robots.txt Disadvantages:
- Doesn't prevent pages from being indexed, as we've seen, probably because there are internal links to these pages. We could nofollow these internal links, thereby minimizing indexation, but this would leave 10-25 nofollowed internal links on each Vehicle Listings page (will Google think we're PageRank sculpting?)

Noindex Advantages:
- Does prevent vehicle details pages from being indexed
- Allows ALL pages to be crawled (advantage?)

Noindex Disadvantages:
- Difficult to implement: vehicle details pages are served using Ajax, so they have no <head> to put a meta tag in. The solution would have to involve the X-Robots-Tag HTTP header and Apache, sending a noindex tag based on querystring variables, similar to this stackoverflow solution. This means the plugin functionality is no longer self-contained, and some hosts may not allow these types of Apache rewrites (as I understand it).
- Forces (or rather allows) Googlebot to crawl hundreds of thousands of noindex pages. I say "force" because of the crawl budget required. The crawler could get stuck/lost in so many pages, and may not like crawling a site with 1,000,000,000 pages, 99.9% of which are noindexed.
- Cannot be used in conjunction with robots.txt. After all, the crawler never reads the noindex meta tag if it is blocked by robots.txt.

Hash (#) URL Advantages:
- By using hash (#) URLs for the links from Vehicle Listings pages to Vehicle Details pages (such as "Contact Seller" buttons), coupled with JavaScript, the crawler won't be able to follow/crawl these links. Best of both worlds: crawl budget isn't overtaxed by thousands of noindex pages, and the internal links used to index robots.txt-disallowed pages are gone.
- Accomplishes the same thing as "nofollowing" these links, but without looking like PageRank sculpting (?)
- Does not require complex Apache stuff

Hash (#) URL Disadvantages:
- Is Google suspicious of sites with (some) internal links structured like this, since they can't crawl/follow them?

Initially, we implemented robots.txt--the "sledgehammer solution." We figured that we'd have a happier crawler this way, as it wouldn't have to crawl zillions of partially duplicate vehicle details pages, and we wanted it to be like these pages didn't even exist. However, Google seems to be indexing many of these pages anyway, probably based on internal links pointing to them. We could nofollow the links pointing to these pages, but we don't want it to look like we're PageRank sculpting or something like that. If we implement noindex on these pages (and doing so is a difficult task itself), then we will be certain these pages aren't indexed. However, to do so we will have to remove the robots.txt disallowal, in order to let the crawler read the noindex tag on these pages. Intuitively, it doesn't make sense to me to make Googlebot crawl zillions of vehicle details pages, all of which are noindexed, and it could easily get stuck/lost/etc. It seems like a waste of resources, and in some shadowy way bad for SEO. My developers are pushing for the third solution: using the hash URLs. This works on all hosts and keeps all functionality in the plugin self-contained (unlike noindex), and it conserves crawl budget while keeping vehicle details pages out of the index (unlike robots.txt). But I don't want Google to slap us 6-12 months from now because it doesn't like links like these (). Any thoughts or advice you guys have would be hugely appreciated, as I've been going in circles, circles, circles on this for a couple of days now. Also, I can provide a test site URL if you'd like to see the functionality in action.
Intermediate & Advanced SEO | browndoginteractive0
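For reference, a minimal sketch of the noindex-via-HTTP-header approach weighed in the question above. The question describes an Apache rewrite keyed on querystring variables; the sketch below uses Flask with hypothetical routes purely to illustrate the header itself, which works even for Ajax-served fragments that have no <head> of their own:

```python
# Minimal sketch of noindex via the X-Robots-Tag response header
# (hypothetical routes; the question's real setup is Apache-based).

from flask import Flask, make_response

app = Flask(__name__)

@app.route("/listings")
def listings():
    # Vehicle Listings page: indexable, so no robots header is set.
    return "vehicle listings page"

@app.route("/vehicle-detail/<int:vehicle_id>")
def vehicle_detail(vehicle_id):
    # Ajax-served Vehicle Details fragment: excluded from the index
    # via the X-Robots-Tag header instead of a meta tag.
    resp = make_response(f"details for vehicle {vehicle_id}")
    resp.headers["X-Robots-Tag"] = "noindex"
    return resp

if __name__ == "__main__":
    app.run()
```
-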
Experience with Google Disavow Tool and discovering bad backlinks
Hi Community, does anyone have experience to share with the Disavow Tool from Google? Any reviews? Has it helped recover sites beaten by Penguin or penalized after a WMT unnatural link building message? Which tools and methods do you use to find bad backlinks to submit to the Disavow Tool? Thanks for your feedback,
Intermediate & Advanced SEO | Braumueller0
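For context on the tool itself: the file the Disavow Tool accepts is plain text, one bare URL or one domain: entry per line, with # comments. A minimal sketch that generates such a file; the domains and URLs below are hypothetical placeholders:

```python
# Sketch: write a disavow file in the plain-text format Google's
# Disavow Tool accepts ("#" comments, "domain:" entries, bare URLs).
# All domains and URLs here are hypothetical placeholders.

bad_domains = ["spammy-directory.example", "link-farm.example"]
bad_urls = ["http://blog.example/comment-spam-page.html"]

lines = ["# Links identified as unnatural after manual review"]
lines += [f"domain:{d}" for d in bad_domains]  # disavow entire domains
lines += bad_urls                              # disavow individual URLs

with open("disavow.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(lines) + "\n")
```
-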
301 vs. 404
If a listing on a website is no longer available to display, is it better to serve a 301 redirect or a 404? I know from an SEO point of view a 301 will pass on the link value, but is that as valuable as saying to the user, "hey, that page is no longer available, try something else"?
Intermediate & Advanced SEO | AU-SEO0
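A minimal sketch contrasting the two responses discussed in the question above, using Flask with hypothetical paths; a real site would decide per listing whether a close substitute exists:

```python
# Sketch of the two options for a retired listing (hypothetical paths).

from flask import Flask, redirect, abort

app = Flask(__name__)

@app.route("/listing/retired-with-substitute")
def retired_with_substitute():
    # A close substitute exists: a 301 preserves most of the link value
    # and lands the user somewhere useful.
    return redirect("/listing/similar-live-listing", code=301)

@app.route("/listing/retired-no-substitute")
def retired_no_substitute():
    # Nothing comparable to offer: a 404 is the honest answer.
    abort(404)

if __name__ == "__main__":
    app.run()
```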