Facebook Like button issue
-
In looking through my top pages in Google Analytics, my #2 page (oddly enough) looked like this: "/?fb_xd_fragment=". Apparently this is because we added the Facebook Like button to many of our pages. I'm worried these URLs are skewing our pageview data and dragging down time on page: the average time on this page is 5 seconds, whereas the sitewide average is much higher.
Further, it shows 9,000 pageviews coming from only 250 unique visitors. I'm sure this is messing with our SEO. Is there a fix for this? Should I even be worried about it?
I heard that I can remove it from my GA stat reporting, but I don't want it causing problems in the background. Please advise. My boss wants to keep the Facebook Like button on the pages, as it has brought us some good response.
The page that this is on is: www.accupos.com
Maybe there's an alternate version of the Facebook Like that we don't know about...
I would appreciate any help on this
DM
-
It looks like you've changed this to the regular Facebook Like button?
-
I had this problem too. Try visiting one of those URLs yourself: what I found was that pages loaded with the Facebook fragment in the URL came up blank for the reader, so readers kept refreshing, and that's why the pageviews shot up. I tried to find a solution, including editing .htaccess, but couldn't. The answer for me was to remove the Facebook fan page widget from those pages.
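For anyone who still wants to try the .htaccess route before removing the widget, here's a minimal sketch (untested here; assumes Apache with mod_rewrite enabled) that 301-redirects any request carrying the fb_xd_fragment parameter back to the clean URL:

```apache
# Sketch: if the query string contains fb_xd_fragment, redirect to the
# same path with the query string stripped (trailing "?" drops it).
# Assumes mod_rewrite is enabled; test on a staging copy first.
RewriteEngine On
RewriteCond %{QUERY_STRING} (^|&)fb_xd_fragment= [NC]
RewriteRule ^(.*)$ /$1? [R=301,L]
```

That way Google should consolidate the fragment URLs back onto the real pages over time, though it won't stop Facebook from generating them in the first place.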
-
Try using the original Facebook Like button. It's rendered with JavaScript, so it won't create indexable URLs. You can generate it here: http://developers.facebook.com/docs/reference/plugins/like/
You will only have to change the data-href attribute value on each page so it points to that page's own URL.
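For reference, the markup that generator produces looks roughly like this (a sketch from memory; the data-href value is a placeholder you'd replace per page):

```html
<!-- Sketch of the JavaScript SDK Like button; data-href is a placeholder -->
<div id="fb-root"></div>
<script async src="https://connect.facebook.net/en_US/all.js#xfbml=1"></script>
<div class="fb-like"
     data-href="http://www.example.com/your-page/"
     data-send="false"
     data-layout="button_count"
     data-width="450"
     data-show-faces="false"></div>
```

The script tag only needs to appear once per page, right after the opening body tag; you can then drop the fb-like div wherever you want the button.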
I hope this helps.
-
Page views are a waste of time. Focus on what is driving conversions.
Swap out ShareThis for AddThis and see if anything changes. I use AddThis and have no problems in GA.
-
I don't think you should worry about it as long as you don't see any warning messages in Google Webmaster Tools and you're playing on the good side.
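And since you mentioned removing it from your GA reporting: a rough sketch of the profile filter that would do it (settings from memory; double-check in your own profile, and remember filters only affect data collected after they're applied):

```text
Filter Type:    Custom filter > Exclude
Filter Field:   Request URI
Filter Pattern: fb_xd_fragment
```

That cleans up your reports without touching the site itself.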