On Page Reports for Long-Tail Keywords?
-
First Q&A here, so take it easy on me. Hopefully this is not a dumb question.
99% of the on-page reports I run are for local keywords. I'm finding that some pages get an F where they should be receiving an A. I noticed that if I manage a long-tail keyword like "books in houston tx", the report grades based on the exact phrase rather than a combination of the individual terms. So when my body copy says "books in the Houston area" or "Houston books", for example, instead of the exact managed keyword, the report tells me the keyword is not mentioned in the body anywhere. It is, just not in that exact order. I'm trying to write my pages for users, and I don't think users want to read "books in houston tx" several times.
If I run a report on the same page for the keyword "books", I get an A. This is where it gets a little grey for me: I need a solid on-page report for local long-tail search terms. Does anyone have advice? Is there a way I could be using this tool to better suit my needs?
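The mismatch described here can be reproduced with two toy matching functions. This is a minimal sketch, not Moz's actual grading logic (which isn't public); it simply contrasts a literal substring check with a per-term check, which appears to be the difference the report exhibits:

```python
import re

def exact_phrase_found(keyword: str, body: str) -> bool:
    # Literal substring check -- how a strict exact-match grader behaves
    return keyword.lower() in body.lower()

def all_terms_found(keyword: str, body: str) -> bool:
    # Token-based check -- every term in the keyword appears somewhere in the body
    body_terms = set(re.findall(r"[a-z]+", body.lower()))
    return all(term in body_terms for term in keyword.lower().split())

body = "We stock books in the greater Houston, TX area."
print(exact_phrase_found("books in houston tx", body))  # False: phrase never appears verbatim
print(all_terms_found("books in houston tx", body))     # True: every individual term is present
```

A page written for users will often pass the second check but fail the first, which would explain an F grade on a page that actually covers every term.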
-
Highly agree!
Golden Words!
...Ultimately, SEO is not about following a strict set of guidelines--it's about having a well-founded knowledge level and then leveraging that knowledge in unique ways in order to gain an advantage over your competitors....
-
I think that with the understanding you have, you're already beyond the report's capabilities to instruct you--fundamentally, it's a fairly simple tool. At this point, you don't need a report to help you further, you need to interpret your real-world results to make decisions that will let you tweak your pages in the right direction.
Ultimately, SEO is not about following a strict set of guidelines--it's about having a well-founded knowledge level and then leveraging that knowledge in unique ways in order to gain an advantage over your competitors.
In other words, it may be time to use the force, Luke. That's what we all used to do before such reports were available.
Related Questions
-
Getting keywords to rank on new landing pages
I've built new landing pages for a website and loaded them with researched keywords in the content, image alt attributes, metas, etc. But after a number of crawls, the keywords are currently being matched to other existing pages on the website. Does anyone have advice on 'unlatching' these keywords from those pages and instead getting them to match the pages that have been optimised for them? Many thanks!
Keyword cannibalization
Hello - We have a blog that ranks for 124 different keywords (URL: https://www.clickboarding.com/18-jaw-dropping-onboarding-stats-you-need-to-know/), and it's cannibalizing other keywords that I'm writing blogs specifically for. For example, the blog mentioned above ranks for "employee onboarding process," and I have a different blog I wrote with "employee onboarding process" as the focus keyword that's not ranking at all. Any suggestions or insight is hugely appreciated! Julie K.
Filter Pages
Howdy Moz Forum! I have a headache of a job over here in the UK and I'd welcome any advice. (It's sunny today, only 1 of 5 days in a year, and I'm stuck on this!) I have a client whose site currently has 22,000 pages indexed in Google, with almost 4,000 showing as duplicate content. The site has "jobs" and "candidates" lists, which generate all sorts of variations by job title, language, location, etc. The filter pages all seem to be indexed, plus the static pages. For example, if there were 100 jobs at Moz being advertised, the jobs are displayed on the following URL structure:
/moz
/moz/moz-jobs
/moz/moz-jobs/page/2
/moz/moz-jobs/page/3
/moz/moz-jobs/page/4
/moz/moz-jobs/page/5
...and so on, with some going up to /page/250. I've checked GA data and can see that although there are tons of pages indexed this way, none of them past the "/moz/moz-jobs" URL get any organic traffic. So, my first question: should I use rel=canonical tags on all the /page/2, /page/3, etc. results and point them at the /moz/moz-jobs parent page? These pages have the same title and content and fall very close to "duplicate" even though each does pull in different jobs. I hope I'm making sense! There are also pages indexed such as: https://www.examplesite.co.uk/moz-jobs/search/page/9/?candidate_search_type=seo-consulant&candidate_search_language=blank-language These are filter pages, and as far as I'm concerned they shouldn't really be indexed. Second question: should I "nofollow" everything after /page in this instance, to keep things tidy? I don't want all the variations indexed! Any help or general thoughts would be much appreciated. Thanks.
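Not a definitive answer, but the triage proposed in the question can be sketched in a few lines. Everything here is hypothetical (the `suggest_directive` helper and the example URLs are illustrations, not real site code); it just separates query-string filter pages (candidates for noindex) from /page/N pagination (candidates for a rel=canonical pointing at the parent listing):

```python
from urllib.parse import urlparse, parse_qs

def suggest_directive(url: str) -> str:
    # Hypothetical helper: classify listing URLs like the ones described above.
    parsed = urlparse(url)
    if parse_qs(parsed.query):
        # Query-string filter pages -> keep them out of the index entirely
        return "noindex"
    parts = [p for p in parsed.path.split("/") if p]
    if "page" in parts:
        # Paginated pages -> canonicalise to the parent listing page
        return "canonical -> /" + "/".join(parts[:parts.index("page")])
    return "index"

print(suggest_directive("https://example.co.uk/moz/moz-jobs/page/3"))
# canonical -> /moz/moz-jobs
print(suggest_directive("https://example.co.uk/moz-jobs/search/page/9/?candidate_search_type=seo"))
# noindex
```

One caveat: Google treats rel=canonical as a hint and may ignore it when paginated pages list different items, so any change along these lines is worth verifying in Search Console afterwards.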
Duplicate page report
We exported a CSV spreadsheet of our crawl diagnostics related to duplicate URLs after waiting 5 days with no response on how Rogerbot can be made to filter. My IT lead tells me the spreadsheet is, literally, showing "duplicate URLs": it treats a database ID number as the only valid part of a URL. To replicate, just filter the spreadsheet for any number that you see on the page. For example, filtering for 1793 gives the following result:
http://truthbook.com/faq/dsp_viewFAQ.cfm?faqID=1793
http://truthbook.com/index.cfm?linkID=1793
http://truthbook.com/index.cfm?linkID=1793&pf=true
http://www.truthbook.com/blogs/dsp_viewBlogEntry.cfm?blogentryID=1793
http://www.truthbook.com/index.cfm?linkID=1793
There are a couple of problems with the above: 1. It gives the www result as well as the non-www result. 2. It sees the print version (&pf=true) as a duplicate, but these are blocked from Google via the noindex header tag. 3. It treats different sections of the website that share the same ID number (faq / blogs / pages) as the same thing. In short, this particular report tells us nothing at all. I am trying to get a perspective from someone at SEOMoz to determine whether he is reading the result correctly or there is something he is missing. Please help. Jim
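The lumping behaviour Jim describes is easy to replicate. Assuming the report really does key on the trailing numeric ID (an inference from the example, not a confirmed detail of how Rogerbot works), grouping the listed URLs by that ID reproduces the five-way "duplicate" cluster:

```python
import re
from collections import defaultdict

# The five URLs from the report that were flagged as duplicates of one another
urls = [
    "http://truthbook.com/faq/dsp_viewFAQ.cfm?faqID=1793",
    "http://truthbook.com/index.cfm?linkID=1793",
    "http://truthbook.com/index.cfm?linkID=1793&pf=true",
    "http://www.truthbook.com/blogs/dsp_viewBlogEntry.cfm?blogentryID=1793",
    "http://www.truthbook.com/index.cfm?linkID=1793",
]

# Group by the numeric ID in the query string, ignoring host, path, and parameter name
groups = defaultdict(list)
for url in urls:
    m = re.search(r"ID=(\d+)", url)
    if m:
        groups[m.group(1)].append(url)

print(len(groups["1793"]))  # 5 -- five distinct pages lumped under one ID
```

If the report's grouping matches this output, it confirms the IT lead's reading: the tool is clustering on the ID alone and ignoring the www/non-www split, the print variant, and the site section.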
Will pages marked noindex with robots still be crawled and marked as duplicate page content in SEOmoz?
When we mark a page as noindex with a robots meta tag like <meta name="robots" content="noindex" />, will it still be crawled and marked as duplicate page content? It's already duplicate content within the site, i.e. two links pointing to the same page, so we marked both pages as not needing to be indexed by search engines. But after we made this change and re-crawled, the reports show no difference: the duplicates are still listed, including the pages marked noindex. Please help us solve this problem.
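For what it's worth, the check the poster expects the crawler to apply can be sketched as follows. This is an illustrative snippet, not how Rogerbot actually parses pages:

```python
import re

def has_noindex(html: str) -> bool:
    # Look for a robots meta tag whose content attribute includes "noindex"
    m = re.search(r'<meta\s+name="robots"\s+content="([^"]*)"', html, re.IGNORECASE)
    return bool(m) and "noindex" in m.group(1).lower()

page = '<head><meta name="robots" content="noindex" /></head>'
print(has_noindex(page))  # True -- a report honouring the tag would exclude this page
```

Note that noindex only tells crawlers not to index a page; crawlers may still fetch it, which is consistent with the page continuing to appear in crawl-based reports.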
SEOTools Reporting
Are there any other services or software that provide keyword ranking and traffic data besides SEOMoz or Raven Tools?
How long has the keyword difficulty tool had these limits in place?
While working against a tight deadline, I was surprised to see the following message: "We're sorry. Currently we are only able to offer results for 300 keywords per user per day. Please come back tomorrow." How long has this limit been in place, and is it listed anywhere during the signup process? I rarely use this tool for more than 10-20 keywords at a time, so I have not run into this issue before.
Why are these pages considered duplicate page content?
A recent crawl diagnostic for a client's website had several new duplicate page content errors. The problem is, I'm not sure where the errors come from, since the content of each page is different from the others. Here are the pages that SEOMOZ reported as having duplicate page content errors: http://www.imaginet.com.ph/wireless-internet-service-providers-term http://www.imaginet.com.ph/antivirus-term http://www.imaginet.com.ph/berkeley-internet-name-domain http://www.imaginet.com.ph/customer-premises-equipment-term The only similarity I see is the headline, which says "Glossary Terms Used in this Site"; I hope that one sentence is the reason for the error. Any input is appreciated, as I want to find the best solution for my client's website errors. Thanks!