Who's still being outranked by spam?
-
Over the past few months, through Google Alerts, I've been watching one of our competitors kick out crap press releases, and links to their site have been popping up all over blog networks with exact match anchor text.
They now outrank us for that anchor text. Why is this still happening, three Penguin updates later? I'm trying so hard to do #RCS and acquire links that will ensure our site's long-term health in the SERPs.
Is anyone else still struggling with this crap?
-
This is nothing new. It's like playing Russian roulette: they are taking the risk and enjoying the benefits while the sun shines. I don't think we can do anything to change this situation. I hope Google is listening and will come up with something, but believe me, this tool will not make any difference:
https://www.google.com/webmasters/tools/spamreport?hl=en
I have tried it many times and, of course, no results so far.
-
Ah. Well, at least you got a reply!
-
I 100% agree with this, and I am working on long-term strategies in the hope that our competitors will face consequences soon enough -- two of our biggest competitors were completely destroyed by the first Penguin. It's a little hard when you have higher-ups demanding answers as to why we went from #1 to #4 over the past three months.
-
Yes, I did that yesterday. Unfortunately, a few weeks ago I got an email from Google saying they had received a spam report I sent them for a different site over eight months earlier, so I'm not expecting any kind of expedient action on Google's part.
-
I'm seeing it less and less in the travel space, but when it does happen, I agree that it's hugely frustrating. You simply have to keep on doing the right thing in the firm belief that Google will reward you in the end. Don't be tempted to copy spammy techniques, because (a) you'll probably suffer in the long run, and (b) it might not actually be those tactics that are causing your competitors to outrank you.
-
Have you tried a webspam report?
-
The recent algo updates to catch this sort of stuff seem to be a bit hit and miss at the moment. They do give us reason to believe that Google is actively working on removing the trash, but it's still just an algorithm, and it's not perfect. Hopefully they'll keep getting better at identifying and removing spam.
-
Every day I see spam websites ranking at the top, and it's frustrating to see them up there.
Example 1: we are trying to rank #1 for an educational course name for a client that is a training school. They are currently being outranked by a website whose link profile is roughly 90% exact-match anchors from Japanese blog comments, simply because those blogs have a high DA.
Example 2: we have a regional client trying to rank for [keyword] + [city name], and they are being outranked by an EMD that has one link, from a directory, whereas we keep pushing new content on the client's site and building their social networks. Sure, the number 1 website isn't using any shady spam techniques; they are ranking simply because of their domain name, which I thought Google's latest update would have fixed.
-
Yes - I am struggling with exactly the same issues. I actually sent a private question to SEOMoz last month asking for advice because I was so frustrated. One of our competitors has set up hundreds of fake blog networks with exact-match anchor text all over the place linking back to their ecommerce site (to the tune of 300,000+ links). I know their tactics because I used to work for them (and it's obvious when looking at their inbound links in ahrefs.com). I disagreed with their SEO approach and frequently voiced my opinions. Fortunately, I was hired away by one of their competitors.
Now I work for a great, honest, hard-working company that does great #RCS every day. Still, my former company, even after Penguin and Panda and more Penguin and Panda, consistently makes page 1, while my current company is lucky to make page 2 for some really important keywords. It is very, very frustrating!
Related Questions
-
Backlinks on forums we've never used, pointing to pages on our site that don't exist, with irrelevant product anchor text
Hi, I have a recurring issue that I can't find a reason for. I have a website with over 7k backlinks that I monitor quite closely. Each month, additional links appear on third-party forums that have no relevance to the site or its subject matter and are, as a result, toxic. Our client's site is a training site, yet these links are appearing on third-party sites like http://das-forum-der-musik.de/mineforum/ with anchor text such as "UGG boots for sale", pointing to pages on our URL like /mensuggboots.html that obviously don't exist. Each month I try to contact the site owners and then add them to Google using the disavow tool. Two months later they are gone, only to be replaced with new backlinks on a number of different forum websites. Quite random, but always relating to UGG boots. There are at least 100 extra links each month. Can anyone suggest why this is happening? Has anyone seen this kind of activity before? Is it possibly black-hat SEO being performed by a competitor? I just don't understand why our URL is listed. To be fair, other websites are linked to with the same terms that aren't ours and are of a different theme, so I don't understand what the "spammer" is trying to achieve. Any help would be appreciated. Kind regards, Steve
White Hat / Black Hat SEO | rufo
-
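Since the disavow tool comes up in this question, here is a minimal sketch of how a disavow file could be assembled each month. The script, file names, and domain list are hypothetical; only the file format itself (comment lines starting with # and one domain: entry per line) follows Google's documented disavow syntax.

```python
# build_disavow.py - hypothetical helper; a sketch, not an official tool.
# Reads a reviewed list of spammy referring domains (one per line) and writes
# a disavow.txt in the format the Search Console disavow tool accepts:
#   "#"                  -> comment line
#   "domain:example.com" -> disavow every link from that domain
#   a bare URL           -> disavow just that one page
from datetime import date

with open("spammy_domains.txt") as f:  # assumed export from a backlink tool
    domains = sorted({line.strip().lower() for line in f if line.strip()})

with open("disavow.txt", "w") as out:
    out.write(f"# Disavow file generated {date.today().isoformat()}\n")
    out.write("# Irrelevant forum links with 'UGG boots' anchor text\n")
    for domain in domains:
        out.write(f"domain:{domain}\n")

print(f"Wrote {len(domains)} domain entries to disavow.txt")
```

The resulting file still has to be uploaded manually through the Disavow Links tool; nothing here submits anything to Google.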
Infinite Scrolling on Publisher Sites - is VentureBeat's implementation really SEO-friendly?
I've just begun a new project auditing the site of a news publisher. In order to increase pageviews and thus advertising revenue, at some point in the past they implemented something so that as many as 5 different articles load per article page. All articles are loaded at the same time and, from looking at Google's cache and the errors flagged up in Search Console, Google treats it as one big mass of content, not separate pages. Another thing to note is that when a user scrolls down, the URL does in fact change when you get to the next article.
My initial thought was to remove this functionality and just load one article per page. However, I happened to notice that VentureBeat.com uses something similar. They use infinite scrolling so that the other articles on the page (in a 'feed' style) only load when a user scrolls to the bottom of the first article. I checked Google's cached versions of the pages and it seems that Google also only reads the first article, which seems like an ideal solution. This obviously has the added benefit of speeding up page load time too.
My question is: is VentureBeat's implementation actually that SEO-friendly or not? VentureBeat have 'sort of' followed Google's guidelines on how to implement infinite scrolling (https://webmasters.googleblog.com/2014/02/infinite-scroll-search-friendly.html) by using prev and next tags for pagination (https://support.google.com/webmasters/answer/1663744?hl=en). However, isn't the point of pagination to list multiple pages in a series (i.e. page 2, page 3, page 4, etc.) rather than just other related articles? Here's an example: http://venturebeat.com/2016/11/11/facebooks-cto-explains-social-networks-10-year-mission-global-connectivity-ai-vr/
Would be interesting to know if someone has dealt with this first-hand or just has an opinion. Thanks in advance! Daniel
White Hat / Black Hat SEO | Daniel_Morgan
-
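For anyone auditing a setup like this, one quick sanity check is to look at what the raw HTML contains before any JavaScript runs, since that is roughly what ends up in a basic crawl or cached snapshot. Below is an assumed sketch using requests and BeautifulSoup against the VentureBeat article from the question; it only inspects the initial HTML and says nothing about how Google actually renders the page.

```python
# check_pagination.py - rough sketch: inspect the initial HTML only.
import requests
from bs4 import BeautifulSoup

url = ("http://venturebeat.com/2016/11/11/facebooks-cto-explains-social-"
       "networks-10-year-mission-global-connectivity-ai-vr/")

html = requests.get(url, timeout=30).text
soup = BeautifulSoup(html, "html.parser")

# rel="prev"/rel="next" <link> elements are the pagination signals the
# question refers to.
for rel in ("prev", "next"):
    matches = soup.find_all("link", rel=rel)
    if matches:
        for link in matches:
            print(f'rel="{rel}" -> {link.get("href")}')
    else:
        print(f'no rel="{rel}" link element in the initial HTML')

# If many <article> elements ship in the first response, all of that content
# lives on a single URL as far as the raw HTML is concerned.
print("article elements in initial HTML:", len(soup.find_all("article")))
```

If the extra articles only appear after scrolling (client-side), the initial HTML should contain just one article element, which would match what the cached versions suggest.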
What is the difference between rel canonical and 301's?
Hi guys, I have been told a few times to add the rel canonical tag to my category pages - however, every category page is actually different from the others, apart from the listings I have for my staff on each page. Some of my staff specialise in areas that cross over into other categories - but really, if I'm redirecting, for example, Psychic Readings over to Love and Relationships because 5 of my staff members are in both categories, the actual content and depth of each category (where skills are provided at different levels) don't justify me creating a rel canonical from Psychic Readings over to Love and Relationships just because I have 5 staff members listed under both categories. Tell me, have I got this right or completely wrong? Here is an example Psychic Readings category: https://www.zenory.com/psychic-readings and the Love and Relationships category: https://www.zenory.com/love-relationships. Hope this makes sense - I really look forward to your feedback! Cheers
White Hat / Black Hat SEO | edward-may
-
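To make the mechanical difference concrete, here's a minimal sketch (not from the question) using Flask routes, with the zenory URLs as placeholders. A 301 physically sends both visitors and crawlers to the other URL, while rel=canonical leaves the page in place and only tells search engines which URL should get the credit. Canonical is normally added as an HTML link tag, but Google also accepts it as an HTTP Link header, which is what keeps this sketch in Python.

```python
# canonical_vs_301.py - illustrative sketch only; routes and URLs are placeholders.
from flask import Flask, redirect, make_response

app = Flask(__name__)

@app.route("/psychic-readings-old")
def old_category():
    # 301: users and crawlers both end up on the other URL; the old page
    # no longer exists as a destination in its own right.
    return redirect("https://www.zenory.com/psychic-readings", code=301)

@app.route("/love-relationships")
def love_relationships():
    # rel=canonical: the page stays visible to users, but search engines are
    # told which URL should receive the ranking credit.
    resp = make_response("Love and Relationships category page")
    resp.headers["Link"] = (
        '<https://www.zenory.com/love-relationships>; rel="canonical"'
    )
    return resp
```

Both mechanisms are meant for duplicate or consolidated content; two genuinely different category pages that merely share some staff listings wouldn't normally call for either.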
Controlling crawl speed/delay through dynamic server-code and 503's
Lately I'm experiencing performance trouble caused by bot traffic. Although Googlebot is not the worst (it's mainly bingbot and ahrefsbot), they cause heavy server load from time to time. We run a lot of sites on one server, so heavy traffic on one site impacts other sites' performance. The problem is that 1) I want a centrally managed solution for all sites (per-site administration takes too much time), which 2) takes into account total server load instead of only one site's traffic, and 3) controls overall bot traffic instead of controlling traffic for one bot. IMO user traffic should always be prioritized higher than bot traffic.
I tried "Crawl-delay:" in robots.txt, but Googlebot doesn't support that. Although my custom CMS has a solution to centrally manage robots.txt for all sites at once, it is read by bots per site and per bot, so it doesn't solve 2) and 3). I also tried controlling crawl speed through Google Webmaster Tools, which works, but again it only controls Googlebot (not other bots) and is administered per site. No solution to all three of my problems.
Now I've come up with a custom-coded solution to dynamically serve 503 HTTP status codes to a certain portion of the bot traffic. The traffic portion for each bot can be dynamically calculated at runtime from total server load at that moment. So if a bot makes too many requests within a certain period (or whatever other coded rule I invent), some requests will be answered with a 503 while others will get content and a 200. The remaining question is: will dynamically serving 503s have a negative impact on SEO? OK, it will delay indexing speed/latency, but slow server response times do in fact have a negative impact on ranking, which is even worse than indexing latency. I'm curious about your expert opinions...
White Hat / Black Hat SEO | internetwerkNU
-
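As a rough illustration of the approach described above - not the poster's actual implementation - here's a minimal Python/Flask sketch that refuses a portion of bot requests with a 503 and a Retry-After header whenever server load is high. The bot list, load threshold, and throttle ratio are made-up values.

```python
# bot_throttle.py - minimal sketch of load-based 503 throttling for bots.
# NOT the poster's implementation; thresholds and bot names are assumptions.
import os
import random

from flask import Flask, Response, request

app = Flask(__name__)

BOT_SUBSTRINGS = ("bingbot", "ahrefsbot", "googlebot")  # assumed list
LOAD_THRESHOLD = 4.0   # 1-minute load average above which throttling starts
THROTTLE_RATIO = 0.5   # fraction of bot requests refused while overloaded


def is_bot(user_agent: str) -> bool:
    ua = user_agent.lower()
    return any(bot in ua for bot in BOT_SUBSTRINGS)


@app.before_request
def throttle_bots():
    # Only ever throttle bots; human visitors are always served.
    ua = request.headers.get("User-Agent", "")
    if not is_bot(ua):
        return None

    load_1min = os.getloadavg()[0]  # total server load, not per-site traffic (Unix only)
    if load_1min > LOAD_THRESHOLD and random.random() < THROTTLE_RATIO:
        # 503 + Retry-After asks well-behaved crawlers to come back later.
        return Response("Service temporarily unavailable",
                        status=503,
                        headers={"Retry-After": "3600"})
    return None


@app.route("/")
def index():
    return "Normal page content"
```

In practice a rule like this would more likely live in a reverse proxy or shared middleware sitting in front of all the sites, which is what makes the "centrally managed" requirement easier to meet; Flask just keeps the sketch self-contained.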
Why are "outdated" or "frowned upon" tactics still dominating?
Hey, my first post here. I recently picked up a new client in real estate for a highly competitive market. One trend I'm noticing with all the top sites is that they are using old tactics such as:
- Paid directories
- Terrible/spam directories
- Overuse of exact-match keywords, for example: city name + real estate
- Blogroll/link exchanges
- Tons of meta keywords
- B.S. press releases and blog commenting with a keyword as the name
Out of all the competition there is only one guy who is following the rules of today. One thing I'm noticing is that nobody is doing legit guest blogging, has a great social presence, has awesome on-page, etc. It's pretty frustrating to be trying to follow the rules and seeing these guys kill it by doing "bad SEO". Anybody else find themselves in this situation? I know I'm probably beating a dead horse but I needed to vent about this 😉
White Hat / Black Hat SEO | Jay328
-
Is it still valuable to place content in subdirectories to represent hierarchy or is it better to have every URL off the root?
Is it still valuable to place content in subdirectories to represent hierarchy on the site or is it better to have every URL off the root? I have seen websites structured both ways. It seems having everything off the root would dilute the value associated with pages closest to the homepage. Also, from a user perspective, I see the value in a visual hierarchy in the URL.
White Hat / Black Hat SEO | belcaro1986
-
Have I created link spam.....
Howdy fellow Mozzers... Since Google's Penguin update I am overly cautious when reviewing our link profile. I spotted two domains linking to us yesterday, 80+ links from each domain to our homepage. This looked suspicious - effectively site-wide links. At first inspection I couldn't spot the links... they turned out to be two individual comments, but as the site had a plugin showing "most recent comments", 1 link became 80. The link is an exact match of the name of the individual who made the comment, and is the result of filling out the comment form (Name / Website / Comment). By filling out the name and website, the name becomes the anchor text for the link to the website. Long story short... do you think this is Penguin-esque link spam? Is it not? Or is it just not worth the risk and should we remove them anyway???
White Hat / Black Hat SEO | RobertChapman
-
Anchor text penalty doesn't work?!
How exactly do you think the anchor text penalty works? Keyword domains obviously can't over-optimize for their main keyword (for example, notebook.com for the keyword "notebook"), and a lot of non-keyword domains do optimize for their main keyword, especially in the beginning, to get a good ranking in Google (and it always works). Is there a particular point (number of links) I can reach while optimizing for one keyword after which I'm going to get a penalty?
White Hat / Black Hat SEO | TheLastSeo