User Intent - Office Chairs & Content Writing
-
Hi
I'm trying to find ideas for content about office chairs. Any ideas on where to start with user intent for this type of search query? I'm using Answer the Public to gather some ideas.
Some are good ideas, but I can't actually find any search volume for the phrases, so I'm unsure whether to devote time to writing anything. Surely most people just want to find a supplier and buy the chair; they don't want a huge informational piece on how to buy one.
Our competitors are the likes of Amazon and a load of other huge companies with high DA, so I'm looking for types of content we can write that people are interested in reading about chairs and that is less competitive... I'm not sure that exists.
Any help is appreciated
-
Yup, you can do the same approach with Screaming Frog. You can run it in List Mode, which lets you upload the list of URLs to crawl, and you can set up an Extraction to separate out the h3s (Configuration > Custom > Extraction).
Paul
-
Not sure how to do it with Frog, but I'm pretty sure you can. You can always start a chat with the guys from Netpeak Spider's support; they'll show you how to set everything up. Just send them a link to this thread.
-
Can you do the same thing with Screaming Frog? I'm not sure how you'd do it.
-
Wow thanks, I'll give this a go
-
Here's what I would do if I needed keywords about chairs with commercial intent.
Step 1: Get as many keywords related to this topic as possible.
Step 2: Paste your list of keywords into Notepad++.
Step 3: Use the replace feature (Ctrl+H) to prepend "google.com/search?q=" to each line: https://i.imgur.com/sJ1g854.png
This turns your list of keywords into a list of search queries. The ^ symbol stands for the beginning of the line in Notepad++, so set N++ to replace ^ with that Google prefix.
Replace all spaces with a + sign for keywords that have more than one word, to get google.com/search?q=best+chairs
Step 4: Use a crawler tool (Netpeak Spider has a 2-week free trial), set it up to crawl your list of "google" URLs from Notepad++, and fetch the h3's. This is where page titles are nested in SERPs; there will be 10 h3's for every keyword you used.
Example: https://i.imgur.com/x5UMGSU.png
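If you'd rather script the find-and-replace and the h3 scraping than do them by hand, steps 2–4 can be sketched in Python. The function names and the sample HTML below are illustrative, not from the thread:

```python
from urllib.parse import quote_plus
from html.parser import HTMLParser

def keywords_to_search_urls(keywords):
    # Mirrors the Notepad++ steps: prefix each keyword with the
    # Google search URL and join multi-word phrases with "+".
    return ["https://google.com/search?q=" + quote_plus(kw) for kw in keywords]

class H3Collector(HTMLParser):
    """Collects the text inside <h3> tags, which is where
    result titles sit in a saved SERP page."""
    def __init__(self):
        super().__init__()
        self._in_h3 = False
        self.titles = []

    def handle_starttag(self, tag, attrs):
        if tag == "h3":
            self._in_h3 = True
            self.titles.append("")

    def handle_endtag(self, tag):
        if tag == "h3":
            self._in_h3 = False

    def handle_data(self, data):
        if self._in_h3:
            self.titles[-1] += data

urls = keywords_to_search_urls(["best chairs", "office chair price"])
# urls[0] is "https://google.com/search?q=best+chairs"
```

Note that crawling Google directly at any scale will hit CAPTCHAs quickly, which is why a dedicated crawler (Netpeak Spider or Screaming Frog, as suggested above) is the safer route; the parser here is only for HTML you already have saved.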
Step 5: Paste the page titles with their Google URLs into a spreadsheet and use conditional formatting to highlight titles that use words like buy, for sale, price, delivery, etc. Use words that indicate commercial intent.
Step 6: Count the number of commercial titles per SERP for every keyword to see whether the SERP is commercial or informational.
Step 7: Separate the commercial queries from the informational ones.
Step 8: Analyze the informational keywords only and build your content strategy around them; they will be easier to rank for with articles and similar content.
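The counting in steps 6–8 is easy to automate as well. A minimal sketch, assuming an illustrative set of commercial-intent markers and a 5-out-of-10 threshold (both are assumptions to tune for your niche, not part of the original method):

```python
# Words whose presence in a result title suggests commercial intent.
# This list is illustrative; extend it with whatever fits your market.
COMMERCIAL_MARKERS = {"buy", "sale", "price", "delivery", "cheap", "shop"}

def commercial_score(titles):
    """Count how many of a SERP's h3 titles contain a commercial-intent word."""
    return sum(
        any(marker in title.lower() for marker in COMMERCIAL_MARKERS)
        for title in titles
    )

def classify_serp(titles, threshold=5):
    """Label a SERP commercial if at least `threshold` of its titles look transactional."""
    return "commercial" if commercial_score(titles) >= threshold else "informational"
```

The logic behind step 8 falls out of this: a keyword whose SERP scores 8/10 commercial is one Google treats as transactional, so an informational article there is an uphill battle, while a mostly informational SERP is the opening for content.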