@moz staff Where does OSE get Facebook Share information?
-
When using OSE, where does it pull the Facebook data from? Open Graph, like this? https://graph.facebook.com/http://www.moz.com
I am trying to find out because my URLs are coming back with completely different information: https://graph.facebook.com/http://www.discoverhawaiitours.com/to/discovertheroadtohana_21a.html
We are using the ShareThis plugin and I think it's not reporting the right info.
-
Hey Francisco!
For Open Site Explorer we pull shares of the exact URL you are looking up from public Facebook pages. Likes are likewise pulled from public pages via Facebook's API, not from the Facebook Page itself or from embedded buttons on the URL.
Hope this helps and let me know if you have any questions!
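For anyone curious, the lookup style mentioned in the question can be sketched in Python. The tokenless endpoint form and the response fields below are assumptions based on how the public Graph API behaved around the time of this thread; current versions require an access token and a versioned path:

```python
import json
from urllib.request import urlopen

# Tokenless URL lookup as it worked around the time of this thread
# (assumption; current Graph API versions require an access token).
GRAPH_ENDPOINT = "https://graph.facebook.com/?id="

def share_count(graph_response):
    """Extract a share total from a Graph API URL-lookup response.
    Older responses exposed a top-level "shares" number; later versioned
    responses nested it under "engagement" -> "share_count"."""
    if "shares" in graph_response:
        return graph_response["shares"]
    return graph_response.get("engagement", {}).get("share_count", 0)

# Live lookup (requires network and an endpoint that still answers untokened):
#   with urlopen(GRAPH_ENDPOINT + "http://www.moz.com") as resp:
#       print(share_count(json.load(resp)))
```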
-
First, let me say I am not a staff member; however, I think I can help you.
I know that when you add Twitter, Facebook, etc., you must validate your ownership of the social media account. Upon doing that, you give Moz's API permission to extract information from your website and, in this instance, your Facebook page.
I hope this was of help,
Thomas
-
Related Questions
-
Who gets punished for duplicate content?
What happens if two domains have duplicate content? Do both domains get punished for it, or just one? And if just one, which?
Technical SEO | Tobii-Dynavox
-
Moz Crawl Diagnostic shows lots of duplicate content issues
Hi, my client's website resolves both with www and without www, and both versions show up in page/title reports. The www version has a page authority of 51 and the non-www version 45. Moz's diagnostics show over 200 duplicate-content issues that don't appear in, e.g., Google Webmaster Tools. When I check each page and add or remove the www, the same content loads either way; there is no redirect (the search tab shows www, and if you use no www it stays without www). Is the www issue to blame, or could it be something else? And since both the www and non-www URLs have high authority, should I just set up a redirect from the lower-authority URL to the higher-authority one?
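(Not staff, but for readers with the same issue: the usual fix is a sitewide 301 from the non-preferred host to the preferred one, e.g. an Apache RewriteRule or an nginx return 301 in your server config. A minimal Python sketch of the target-building logic, assuming the higher-authority www form is the one to keep:)

```python
from urllib.parse import urlsplit, urlunsplit

def canonical_redirect(url, prefer_www=True):
    """Return the 301 target for a www/non-www mismatch,
    or None if the URL already uses the preferred host form."""
    parts = urlsplit(url)
    has_www = parts.netloc.startswith("www.")
    if has_www == prefer_www:
        return None  # already on the preferred host
    host = parts.netloc[4:] if has_www else "www." + parts.netloc
    return urlunsplit((parts.scheme, host, parts.path, parts.query, parts.fragment))

# e.g. canonical_redirect("http://example.com/page")
#      -> "http://www.example.com/page"
```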
Technical SEO | GardenPet
-
Preserving Social Shares Through URL Changes
We are making significant URL changes to our website. Because the URLs are changing, the social sharing buttons are not showing the previous share counts. I have read several resources like the one below, which is linked to in several other similar questions. http://searchenginewatch.com/article/2172926/How-to-Maintain-Social-Shares-After-a-Site-Migration However, I would love insight from someone who has actually done this and their thoughts on the outcome. As an ecommerce site, the "social proof" of products that have received social shares is a big deal to us. In Mike King's example above, the counts are being attributed to the OLD URL, which is problematic over time. Our site has been up for over 12 years and has had several major changes, and I am certain there will be more in the future, so being able to preserve the count on the current URL is ideal. While I agree with him that over time social platforms will let this data pass through 301 redirects, until then I need to find the best way to do this. Also, with his example and others, I have seen people mention that new likes of the new URL can reset the counter. If you have gone through this and have ideas, please share them. I look forward to your thoughts. Thanks.
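One approach that comes up for migrations like this is to keep the count widgets asking the networks about the old URL, since that is the address the shares were recorded against. A minimal sketch, where the redirect map and URLs are hypothetical:

```python
# Hypothetical map from each page's new address to its pre-migration
# address, maintained alongside the server's 301 rules.
REDIRECTS = {
    "http://example.com/products/helmet": "http://example.com/store/item-123",
}

def share_target(page_url, redirects=REDIRECTS):
    """URL to hand the share-count widget: the old address if this page
    was migrated (so recorded shares still count), else the page itself."""
    return redirects.get(page_url, page_url)
```

The trade-off the question raises remains: counts stay attributed to the old URL, and new shares of the new URL accumulate separately.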
Technical SEO | RMATVMC
-
I noticed all my SEO'd sites are constantly being attacked by viruses. I build WordPress sites. Does anyone have a good recommendation for protecting my clients' sites? Thanks
We have tried all different kinds of security plugins, but none seem to work long term.
Technical SEO | Carla_Dawson
-
Will I still get Duplicate Meta Data Errors with the correct use of the rel="next" and rel="prev" tags?
Hi Guys, One of our sites has an extensive number of category page listings, so we implemented the rel="next" and rel="prev" tags for these pages (as suggested by Google below). However, we still see duplicate meta data errors in SEOmoz crawl reports and also in Google Webmaster Tools. Does the SEOmoz crawl tool check for correct use of the rel="next" and rel="prev" tags and suppress the meta data errors when the tags are correctly implemented? Or is it still necessary to use unique meta titles and meta descriptions on every page, even though we are using the rel="next" and rel="prev" tags as recommended by Google? Thanks, George
Implementing rel="next" and rel="prev": If you prefer option 3 (above) for your site, let's get started! Let's say you have content paginated into the URLs:
http://www.example.com/article?story=abc&page=1
http://www.example.com/article?story=abc&page=2
http://www.example.com/article?story=abc&page=3
http://www.example.com/article?story=abc&page=4
On the first page, http://www.example.com/article?story=abc&page=1, you'd include in the <head> section:
<link rel="next" href="http://www.example.com/article?story=abc&page=2">
On the second page, http://www.example.com/article?story=abc&page=2:
<link rel="prev" href="http://www.example.com/article?story=abc&page=1">
<link rel="next" href="http://www.example.com/article?story=abc&page=3">
On the third page, http://www.example.com/article?story=abc&page=3:
<link rel="prev" href="http://www.example.com/article?story=abc&page=2">
<link rel="next" href="http://www.example.com/article?story=abc&page=4">
And on the last page, http://www.example.com/article?story=abc&page=4:
<link rel="prev" href="http://www.example.com/article?story=abc&page=3">
A few points to mention: The first page only contains rel="next" and no rel="prev" markup. Pages two to the second-to-last page should be doubly-linked with both rel="next" and rel="prev" markup. The last page only contains markup for rel="prev", not rel="next". rel="next" and rel="prev" values can be either relative or absolute URLs (as allowed by the <link> tag), and if you include a <base> link in your document, relative paths will resolve according to the base URL. rel="next" and rel="prev" only need to be declared within the <head> section, not within the document <body>. We allow rel="previous" as a syntactic variant of rel="prev" links. rel="next"/rel="previous" on the one hand and rel="canonical" on the other constitute independent concepts, and both declarations can be included in the same page; for example, http://www.example.com/article?story=abc&page=2&sessionid=123 may contain:
<link rel="canonical" href="http://www.example.com/article?story=abc&page=2">
rel="prev" and rel="next" act as hints to Google, not absolute directives. When implemented incorrectly, such as omitting an expected rel="prev" or rel="next" designation in the series, we'll continue to index the page(s) and rely on our own heuristics to understand your content.
Technical SEO | gkgrant
-
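The pattern quoted from Google's documentation above can be generated mechanically. A minimal Python sketch, assuming query-string pagination like the example URLs:

```python
def pagination_link_tags(base_url, page, total_pages):
    """Head tags per Google's scheme: the first page gets only rel="next",
    the last only rel="prev", and middle pages get both."""
    tags = []
    if page > 1:
        tags.append(f'<link rel="prev" href="{base_url}&page={page - 1}">')
    if page < total_pages:
        tags.append(f'<link rel="next" href="{base_url}&page={page + 1}">')
    return tags
```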
SEOmoz crawl diagnosis
Hi SEOmozzers, We are creating a brand new website for a client and I would like to run an SEOmoz crawl to fix what has been done wrong. So my question is: is it OK to run an SEOmoz crawl against a dev URL? Will final URLs and dev URLs give me the same results or not? Basically, should I wait for the final URL, or is it OK to run a crawl under a dev URL such as www.dev2.example.com or http://183.2564.2864? Thank you 🙂
Technical SEO | Ideas-Money-Art
-
Why am I still getting duplicate page title warnings after implementing canonical URLs?
Hi there, I'm having some trouble understanding why I'm still getting duplicate page title warnings on pages that have the rel=canonical attribute. For example: http://www.resnet.us/directory/auditor/az/89/home-energy-raters-hers-raters/1 is the first page of this parsed list, and http://www.resnet.us/directory/auditor/az/89/home-energy-raters-hers-raters/2 is the second page, which links back to the first page using rel=canonical. I have over 300 pages like this! What should I do, SEOmoz gurus? How do I remedy this problem? Is it a problem?
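Whatever the canonical tags say, one common way to silence duplicate-title warnings on paginated lists is simply to make each page's title unique. A minimal sketch, with a hypothetical base title:

```python
def paginated_title(base_title, page):
    """Unique <title> per paginated listing; page 1 keeps the base title."""
    return base_title if page == 1 else f"{base_title} - Page {page}"
```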
Technical SEO | fourthdimensioninc
-
How can I get unimportant pages out of Google?
Hi Guys, I have a (newbie) question. Until recently I didn't have my robots.txt written properly, so Google indexed around 1,900 pages of my site, but only 380 of them are real pages; the rest are all /tag/ or /comment/ pages from my blog. I have now set up the sitemap and robots.txt properly, but how can I get the other pages out of Google? Is there a trick, or will it just take a little time for Google to drop the pages? Thanks! Ramon
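For readers in the same spot: blocking /tag/ and /comment/ in robots.txt stops future crawling, but already-indexed URLs usually drop out of Google only over time (a noindex meta tag on pages that remain crawlable, or a removal request in Webmaster Tools, can speed that up). The standard library can sanity-check the rules; the paths below assume the /tag/ and /comment/ patterns from the question:

```python
from urllib import robotparser

# Rules of the kind the question describes (paths assumed).
RULES = [
    "User-agent: *",
    "Disallow: /tag/",
    "Disallow: /comment/",
]

parser = robotparser.RobotFileParser()
parser.parse(RULES)

print(parser.can_fetch("*", "http://example.com/tag/seo/"))       # False
print(parser.can_fetch("*", "http://example.com/blog/my-post/"))  # True
```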
Technical SEO | DennisForte