What is the best way to eliminate ghost traffic from Google Analytics?
-
Hey Mozzers,
I just wanted to see how you all deal with eliminating ghost traffic sources from Google Analytics. I tried setting up a RegEx 'include' filter before, but it seemed to block legitimate traffic sources as well (I'm probably missing something here).
Anyway, I'm interested to read how you all have dealt with this issue in the past. Thanks for reading!
-
Hi J.P.,
I have personally found this article, and the solution it proposes, to be the most helpful: http://help.analyticsedge.com/spam-filter/definitive-guide-to-removing-google-analytics-spam/. Hope it helps.
-
I would recommend taking a look at the following if you are struggling with ghost referral traffic. It's a great post that made the rounds last year when referral spam spiked for a lot of people. Hopefully this is what you are after.
https://moz.com/blog/stop-ghost-spam-in-google-analytics-with-one-filter
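For reference, the approach both of those posts describe is a single "valid hostname" include filter: ghost hits never actually load your pages, so they report a fake or empty hostname and fail the test, while real visits pass. A minimal sketch of the matching logic, using made-up hostnames you would replace with your own:

```python
import re

# Hypothetical valid hostnames for the property -- yours will differ.
# Include every hostname that legitimately runs your tracking code
# (main domain, subdomains, payment gateways, translation proxies, etc.).
VALID_HOSTNAME_REGEX = re.compile(r"^(www\.)?(example\.com|shop\.example\.com)$")

def is_valid_hit(hostname: str) -> bool:
    """Mirror of a GA custom 'include' filter on the Hostname field."""
    return bool(VALID_HOSTNAME_REGEX.match(hostname))

# Ghost hits carry hostnames like these and get dropped; real visits pass.
hits = ["www.example.com", "shop.example.com", "free-traffic.xyz", "(not set)"]
print([h for h in hits if is_valid_hit(h)])  # only the real hostnames survive
```

In GA itself you would paste the expression into a custom include filter on the Hostname field, apply it only to a test view first, and keep one unfiltered view untouched as a backup, since include filters discard data permanently.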
Related Questions
-
Hacked Websites (Doorways) Ranking First Page of Google
Hello Moz community! I could really use your suggestions on some recent changes I've noticed in the Google SERPs for terms I'm currently working on. One of my current projects is for an online pharmacy, and I've noticed that the SERPs are now being taken up by hacked websites that act as doorways, 301 redirecting to whatever online pharmacy the hacker wants the traffic to go to. They seem to be hacked WordPress sites whose original content is unrelated to online pharmacies. We've submitted these as spam to Google and within Chrome as well, but haven't heard back. We see this issue when searching terms like "Canadian Pharmacy Viagra" and other similar terms. Any other recommendations on how we can fix this? Thanks for your time; attached is a screenshot of the results we are seeing for one of our searches.
White Hat / Black Hat SEO | monarkg
-
Forcing Google to Crawl a Backlink URL
I was surprised that I couldn't find much info on this topic, considering that Googlebot must crawl a backlink URL in order to process a disavow request (i.e. Penguin recovery and reconsideration requests). My trouble is that we recently received a great backlink from a buried page on a .gov domain, and the page has yet to be crawled after 4 months. What is the best way to nudge Googlebot into crawling the URL and discovering our link?
White Hat / Black Hat SEO | Choice
-
Bad keywords sending traffic to my site, but I can't find the source. Advice?
Hi! My site seems to be the target of negative SEO (or some ancient black-hat work that's just now coming out of the woodwork). We're getting traffic from keywords like "myanmar girls" and "myanmar celebrities" that started in late June and lands only on our homepage. I can't seem to find the source of the traffic, though (Analytics just shows it as "Google," "Bing," and "Yahoo," even though I can't find our site showing up for these terms in search results). Is there any way to ferret out the source besides combing through every single link that points to us in Webmaster Tools? I'm not even sure that GWT has picked up on it since this is fairly new, and I'd really love to nip this in the bud. Thoughts? Thanks in advance!
White Hat / Black Hat SEO | 19958
-
Pagination for Search Results Pages: Noindex/Follow, Rel=Canonical, Ajax Best Option?
I have a site with paginated search result pages. What I've done is noindex/follow them, and I've placed a rel=canonical tag on page 2, page 3, page 4, etc. pointing back to the main/first search result page. These paginated pages aren't visible to the user (since I'm not technically selling products, just providing different images), and I've added a text link at the bottom of the first/main result page that says "click here to load more"; once clicked, it automatically lists more images on the page (Ajax). Is this a proper strategy? Also, for a site that does sell products, would simply noindexing/following the paginated search result pages and placing the canonical tag on them pointing back to the main search result page suffice? I would love feedback on whether this is a proper method/strategy to keep Google happy. Side question: when robots go through a page that is noindexed/followed, do they take into consideration the text on those pages, page titles, meta tags, etc., or do they only look at the actual links within the page and pass link juice through them?
White Hat / Black Hat SEO | WebServiceConsulting.com
-
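Not an endorsement of canonicalising page 2+ back to page 1 (opinions differ on whether that is correct), but the setup described in that question can be sketched as follows, assuming hypothetical URLs of the form /search?page=N:

```python
# Sketch of the pagination head-tag logic described above. Page 1 is the
# indexable canonical page; deeper pages are noindex,follow and point
# their canonical back to page 1. The URL below is a made-up example.
BASE = "https://www.example.com/search"

def head_tags(page: int) -> list:
    """Return the robots/canonical tags to emit for a given results page."""
    tags = []
    if page > 1:
        # Keep the page out of the index but let crawlers follow its links.
        tags.append('<meta name="robots" content="noindex,follow">')
    tags.append(f'<link rel="canonical" href="{BASE}">')
    return tags

print(head_tags(1))
print(head_tags(3))
```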
How does Google decide what content is "similar" or "duplicate"?
Hello all, I have a massive duplicate content issue at the moment with a load of old employer detail pages on my site. We have 18,000 pages that look like this: http://www.eteach.com/Employer.aspx?EmpNo=26626 http://www.eteach.com/Employer.aspx?EmpNo=36986 and Google is classing all of these pages as similar content, which may result in a bunch of them being de-indexed. Now although they all look rubbish, some of them are ranking on search engines, and looking at the traffic on a couple of these, it's clear that people who find these pages want more information on the school (because everyone seems to click on the local information tab on the page). So I don't want to just get rid of all these pages; I want to add content to them. But my question is: if I were to make up, say, 5 templates of generic content with different fields being replaced with the school's name, location, and headteacher's name so that they vary between pages, would that be enough for Google to realise they are not similar pages and stop classing them as duplicates? e.g. [School name] is a busy and dynamic school led by [headteacher's name], achieving excellent Ofsted results every year. Located in [location], [school name] offers a wide range of experiences both in the classroom and through extra-curricular activities, and we encourage all of our pupils to "Aim Higher". We value all our teachers and support staff and work hard to keep [school name]'s reputation to the highest standards. Something like that... Anyone know if Google would slap me if I did that across 18,000 pages (with 4 other templates to choose from)?
White Hat / Black Hat SEO | Eteach_Marketing
-
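The template-rotation idea in that question can be sketched like this; whether a handful of templates is enough variation for Google is exactly the open question, since pages that differ only in substituted fields can still read as near-duplicates. A hypothetical sketch of the mechanics, with made-up school data:

```python
import zlib

# A few generic templates; real pages would ideally also pull in genuinely
# unique data (location info, vacancies) rather than boilerplate alone.
TEMPLATES = [
    "{name} is a busy and dynamic school led by {head}, located in {town}.",
    "Located in {town}, {name} offers a wide range of experiences under {head}.",
    "{head} leads {name}, a welcoming school in the heart of {town}.",
]

def employer_blurb(name: str, head: str, town: str) -> str:
    # Stable checksum of the school name, so the same school always
    # renders with the same template across rebuilds.
    idx = zlib.crc32(name.encode("utf-8")) % len(TEMPLATES)
    return TEMPLATES[idx].format(name=name, head=head, town=town)

print(employer_blurb("Hillcrest Primary", "Ms Patel", "Leeds"))
```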
Where can I see examples of disavow files to adapt mine before sending to Google?
Can I send a disavow file to Google as a CSV file? Where can I see examples of disavow files so I can adapt mine before sending it to Google?
White Hat / Black Hat SEO | maestrosonrisas
-
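For what it's worth, Google's disavow format is a plain .txt file, not CSV: one full URL or one whole domain (prefixed with "domain:") per line, with optional "#"-prefixed comment lines. A sketch that writes a small example file, using made-up domains:

```python
# Example disavow file contents -- the domains and URL are invented.
DISAVOW = """\
# Contacted site owner on 2016-03-01, no response
domain:spammy-links.example
# Single bad page rather than the whole site
http://blog.example.net/cheap-links-page.html
"""

with open("disavow.txt", "w", encoding="utf-8") as f:
    f.write(DISAVOW)

# Sanity check: every non-comment, non-blank line must be a URL
# or a domain: entry, or Google will reject the upload.
for line in DISAVOW.splitlines():
    line = line.strip()
    if line and not line.startswith("#"):
        assert line.startswith(("domain:", "http://", "https://")), line
```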
Will Google Penalize Content put in a Div with a Scrollbar?
I noticed Moosejaw was adding quite a bit of content to the bottom of category pages via a div tag that makes use of a scroll bar. Could a site be penalized by Google for this technique? Example: http://www.moosejaw.com/moosejaw/shop/search_Patagonia-Clothing____
White Hat / Black Hat SEO | BrandLabs