How do I find "# of RSS Subscribers", "Most Popular Post URL", and "What is the most popular post about" for a list of blogs?
-
Hi, I am a new user at Moz.com, and I am trying to find the following information for a list of blog URLs:
"# of RSS subscribers", "most popular post URL", and "what is the most popular post about".
Which tools should I use, and how do I use them? -Muhammad
-
Hi Muhammad,
I apologize if I've misunderstood your question. Let me see if I can answer it correctly for you.
One method would be to use Alexa: simply search for "most popular blogs" without checking off any categories or countries. It will then give you a list of the most popular blogs, and the way it decides which is more popular than another is by how much traffic each blog receives. Here is the URL with the query in it:
http://www.alexa.com/search?q=most+popular+blogs&p=gkey&r=site_screener
If you use Google Blog Search, it will target only blogs, giving you the ability to restrict your queries to blogs when you want to research them:
http://www.google.com/blogsearch
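If you end up checking many blogs, part of this can be automated. The sketch below (Python, standard library only; the sample feed XML is made up purely for illustration) parses a blog's RSS feed and lists each post's title and link, which is a starting point for working through a list of blog URLs:

```python
import xml.etree.ElementTree as ET

def list_posts(rss_xml):
    """Return (title, link) pairs for each <item> in an RSS 2.0 feed."""
    root = ET.fromstring(rss_xml)
    posts = []
    for item in root.iter("item"):
        title = item.findtext("title", default="")
        link = item.findtext("link", default="")
        posts.append((title, link))
    return posts

# Made-up sample feed for illustration; in practice you would fetch the
# blog's real feed, e.g. urllib.request.urlopen(feed_url).read().
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <item><title>Post A</title><link>http://example.com/a</link></item>
    <item><title>Post B</title><link>http://example.com/b</link></item>
  </channel>
</rss>"""

for title, link in list_posts(SAMPLE_FEED):
    print(title, link)
```

Note this only enumerates posts; deciding which post is "most popular" still requires an external signal such as comment counts or social shares.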
This website below uses Alexa data and gives you exactly what you are asking for as far as the most popular blogs; it also lets you filter by month and look back at past rankings:
http://www.ebizmba.com/articles/blogs
Another method of finding the most popular posts would be to use this popular-posts widget for Blogger:
http://www.blogger.webaholic.co.in/2011/05/popular-posts-widget-for-blogger.html
I would also consider signing up for Disqus and checking which blogs receive the most comments.
Here is an article on how to find the most popular blogs for your topic:
http://labnol.blogspot.com/2006/06/how-to-quickly-find-good-blogs-that.html
Here is an interesting article from Lifehacker on how to get only the blog posts you want via RSS:
http://lifehacker.com/344188/get-only-the-posts-you-want-from-lifehackers-site-feeds
Here is About.com's list of the 10 most popular blogs in the world:
http://webtrends.about.com/od/profile1/tp/Top-10-Most-Popular-Blogs.htm
One more method is to use a tool like Alexa, which indexes websites and already knows the popularity of the sites it gives you access to. For instance, I can go into "Top Sites" > "News", then select "Weblogs", and get this:
http://www.alexa.com/topsites/category/Weblogs
I believe this last link might be the most useful to you:
http://labnol.blogspot.com/2006/06/how-to-quickly-find-good-blogs-that.html
The question about RSS feeds and the number of subscribers can be answered using this tool; however, the blog must be using FeedBurner. That shouldn't hurt you much in your search, as FeedBurner is by far the most popular RSS service:
http://thinktraffic.net/how-to-find-an-unpublished-rss-subscriber-count
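For what it's worth, when a feed owner has enabled stats sharing, FeedBurner exposes subscriber counts through its Awareness API (a GET to feedburner.google.com/api/awareness/1.0/GetFeedData?uri=FEEDNAME returns XML whose entry carries a circulation attribute). The sketch below (Python, standard library; the sample response is made up for illustration) shows one way you might parse that count:

```python
import xml.etree.ElementTree as ET

def subscriber_count(awareness_xml):
    """Pull the 'circulation' (subscriber) figure out of a FeedBurner
    Awareness API response. Returns None if no entry is present."""
    root = ET.fromstring(awareness_xml)
    entry = root.find(".//entry")
    if entry is None:
        return None
    return int(entry.get("circulation", 0))

# Made-up sample of the Awareness API's response shape; a real call would
# fetch the XML with urllib.request.urlopen(...) using the feed's name.
SAMPLE_RESPONSE = """<?xml version="1.0"?>
<rsp stat="ok">
  <feed id="abc123" uri="examplefeed">
    <entry date="2013-07-01" circulation="218432" hits="305122"/>
  </feed>
</rsp>"""

print(subscriber_count(SAMPLE_RESPONSE))
```

Keep in mind the count only appears if the publisher has turned on the Awareness API for their feed; otherwise the response reports an error instead of an entry.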
I am sorry for the long list of links.
Finally, regarding subscribers and most popular posts, here is an eHow article that I hope will help to better answer your question:
http://www.ehow.com/how_2009187_most-popular-blogs.html
If you wouldn't mind elaborating on exactly what you're looking to do, I'm certain I could be of more help. I hope I've found the best solution for your question; if I haven't, please let me know and I'll do the best I can to answer it.
Sincerely,
Thomas
-
Thomas, thanks for your detailed insight into the Moz community! I do think that this poster is looking for something slightly different though. I think they have a list of blog URLs and they want to know for each blog how many RSS subscribers it has, the most popular post in that list, etc.
-
Hi Muhammad,
First off let me say welcome to Moz!
The popular post URLs in Moz are a list of blog posts currently featured on the Moz Blog. A URL in that category could be something written by a Moz staff member or affiliate, a Whiteboard Friday (done mostly by Rand, but sometimes by others), or a YouMoz post (YouMoz is a collection of members' own posts submitted to Moz; if a YouMoz post is very popular, it will be featured on the main blog).
The way Moz knows what is popular is through its analytics: it tracks the number of comments, whether the community marked them helpful (thumbs up) or not helpful (thumbs down), and Facebook, Twitter, and Google+ shares, to name a few of the metrics. Here is a current blog post that is considered popular, along with the statistics I believe are used to decide what is most popular:
http://moz.com/blog/the-web-developers-seo-cheat-sheet-2013-edition/stats
If enough people respond with a thumbs-up (meaning the post is useful), and Moz's built-in analytics show that many people have read it, then it is placed in the most popular URLs.
As for the RSS data showing how many people are subscribed: the "Moz Top Ten" is a newsletter that sends you the top 10 sources of inbound-marketing information found on Moz, monthly via email.
Currently there are 218,432 subscribers shown to be taking advantage of this RSS feed/newsletter.
I really hope you're enjoying all the wonderful tools and people here at Moz; I know you will.
I hope this has answered your question.
Sincerely,
Thomas