Track a client's Facebook page with Social
-
I use SEOMoz for a client and want to track the client's progress on Facebook, but I can't get connected because the Facebook page is not my own.
How can I solve this?
Patrick
-
Thank you all for the responses. Let me be clearer about this. I can get admin access to my client's Facebook page, but when I try to "Add the Facebook page you want to track for this campaign" (under the "Social" button), SEOMoz asks for rights to my own Facebook pages because I am the owner of this SEOMoz license. When I answer "skip", SEOMoz tells me it can't keep track of those pages.
Patrick
-
Most business owners are concerned about handing over their Facebook admin rights to their agencies. There are tools like Sprout Social or Social Defender where the Facebook page owner can grant enough permission for you to do your job without handing the page over. They can sign up for one of these services and add you as a staff member. Facebook also has multiple permission levels now: if you want to post and manage the page, they only need to add you as an editor, not an admin.
-
In my opinion it is always easier to get access to clients' platforms.
Just be honest and tell them that without admin rights you will be looking at their statistics blind, and you won't be able to fully analyse their page. Use an analogy: "You can't expect an accountant to do a tax return if they can't see your accounts."
Hope that helps
-
The other solution is, of course, asking your client to grant you access as an admin of their FB page. It's a bit of a workaround, but it will work.
-
Hi Patrick
I would just log in to SEOMoz the next time you are with your client and ask them to go through the process of granting permission to their Facebook page, explaining what you are doing and why. I don't see this being an issue with your client...
If there is an issue, maybe demonstrate what connecting their account will give you access to - show them another campaign with Facebook connected, if you have one?
Related Questions
-
403s: Are There Instances Where 403s Are Common & Acceptable?
Hey All, Both Moz & Webmaster Tools have identified 403 errors on an editorial site I work with (using the Drupal CMS). I looked into the errors, and the pages triggering the 403 are all articles in draft status that are not being indexed. If I am not logged into our Drupal and I try to access an article in draft status, I get the 403 Forbidden error. Are these 403s typical for an editorial site where editors may be trying to access an article in draft status while they are not logged in? Webmaster Tools is showing roughly 350 pages with the 'Access Denied' 403 status. Are these harmful to rankings? Thanks!
-
Duplicate pages coming from links from the login page - what should we do about them?
This is a follow-on to an earlier question, which was well answered by Dirk Ceuppens, regarding abnormal crawl issues. We are seeing that the Duplicate Pages issues are coming from links to the login page, whose URL records where the user was redirected from. For example, if a visitor who is not logged in wishes to wish-list an item, they will be redirected to the login page with the item code and intended action in the URL, so they can continue on to the desired page once logged in. The MOZ crawler is flagging these pages as Duplicate Content, although they are all identical apart from that piece of information in the URL. Should we be blocking these duplicates? Are they a risk to us? What should we be doing? Many thanks, Sarah
-
Rogerbot's crawl behaviour vs. Google's spiders and other crawlers - disparate results have me confused.
I'm curious as to how accurately rogerbot replicates Google's crawler. I currently have a site which is reporting over 200 pages of duplicate titles/content in the Moz tools. The pages in question all contain session IDs and were blocked in robots.txt about 3 weeks ago, but the errors are still appearing. I've also crawled the site using the Screaming Frog SEO Spider. According to Screaming Frog, the offending pages are blocked and are not being crawled. Webmaster Tools is also reporting no crawl errors. Is there something I'm missing here? Why would I receive such different results? Which ones should I trust? Does rogerbot ignore robots.txt? Any suggestions would be appreciated.
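As an aside, robots.txt rules in a case like this can be sanity-checked with Python's built-in parser before blaming any one crawler. This is a minimal sketch; the `/sessions/` path is a hypothetical stand-in for whatever URL pattern the session-ID pages actually use on the site in question:

```python
from urllib import robotparser

# Hypothetical rules blocking a session-ID URL prefix for all crawlers
# (rogerbot, like Googlebot, honours robots.txt).
rules = [
    "User-agent: *",
    "Disallow: /sessions/",
]

rp = robotparser.RobotFileParser()
rp.parse(rules)

# A URL under the disallowed prefix is blocked; everything else is allowed.
print(rp.can_fetch("rogerbot", "https://example.com/sessions/abc123"))  # False
print(rp.can_fetch("rogerbot", "https://example.com/contact"))          # True
```

Note that a blocked page can still linger in a tool's reports for a crawl cycle or two: robots.txt stops future fetches, it does not retroactively remove pages that were crawled before the rule was added.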
-
Change a label's name
Hello, I defined various labels and linked them to groups of keywords. Nevertheless, I'd like to change the name of one label, but it seems to be impossible. How can I do this? Thanks,
-
"Issue: Duplicate Page Content " in Crawl Diagnostics - but sample pages are not related to page indicated with duplicate content
In the Crawl Diagnostics for my campaign, the duplicate content warnings have been increasing, but when I look at the sample pages that SEOMoz says have duplicate content, they are completely different pages from the page identified. They have different titles, meta descriptions and HTML content, and are often different types of pages, e.g. a product page flagged as having duplicate content vs. a category page. Anyone know what could be causing this?
-
Truncate page URLs
We have some pages (for example a contact-us form) for which the URL is modified by the CMS depending on the referring page (this helps to put the form submission in context for the sales reps who receive it). The SEOmoz crawler considers each URL a new page, so numbers in diagnostics are all inflated because the same page is listed multiple times (e.g. for too many links). Is there a setting to change what the crawler considers to be the same page? Here are two URLs for the same page that the reports treat as separate pages: http://www.spirent.com/About-Us/Contact_us.aspx?referurl=0F528F4D703D8BB3523738D6373AA8AD http://www.spirent.com/About-Us/Contact_us.aspx?referurl=10ACDA6055244E369395223437FDCF30 The page is actually: http://www.spirent.com/About-Us/Contact_us.aspx Thanks Ken
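A common remedy for parameterised duplicates like these is a rel="canonical" tag on each variant pointing at the parameter-free URL. As a sketch of how the canonical target can be derived, the snippet below strips the tracking parameter; the `canonicalize` helper is illustrative only, not part of any Moz or CMS tooling:

```python
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

def canonicalize(url, drop_params=("referurl",)):
    """Return the URL with the named tracking parameters removed."""
    parts = urlparse(url)
    # Keep every query parameter except the ones we want to drop.
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k not in drop_params]
    return urlunparse(parts._replace(query=urlencode(kept)))

url = ("http://www.spirent.com/About-Us/Contact_us.aspx"
       "?referurl=0F528F4D703D8BB3523738D6373AA8AD")
print(canonicalize(url))
# http://www.spirent.com/About-Us/Contact_us.aspx
```

Emitting `<link rel="canonical" href="...">` with that stripped URL tells crawlers that the parameterised variants are the same page.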
-
SEOMoz Crawling Only 1 Page
I entered a new site into my dashboard 2 days ago - everything looked kosher, there were a few hundred pages crawled and a whole bunch of errors. I came back this morning to start work on the site and SEOMoz has crawled the site again, this time returning only 1 page and 0 errors. I haven't even logged in to the site since the first crawl, so I couldn't have broken anything. Has anyone seen this before?
-
Fixing the Too Many On-Page Links
In our campaign I see that it reported that some of our pages have too many on-page links, but I think most of the links seen by MozBot are related to our images. There are a lot of images on our site, and at the same time we support 11 languages, which adds additional links. One of the pages that has a lot of links is www.florahospitality.com/dining.aspx What can you suggest to fix this? Thanks.