Google 'most successful online businesses'
-
How come this guy has all but one of the top ten results (UK results; I'm guessing it's the same in the USA?) with thin content targeting a spammed keyword across multiple subdomains? How can we 'white hat' guys compete if stuff like this is winning?
-
I agree, low competition and probably low search volume. Do some SEO for it and you'll be up there in no time!
-
The information you've provided is really skimpy...
However, in general, if that type of stuff is dominating, then the competition is not very tough.
A little hard work should advance your site.
Related Questions
-
Submitting a page to Google Search Console or Bing Webmaster Tools with nofollow tags
Hello, I was hoping someone could help me understand whether there is any point in submitting a domain or subdomain to Google Search Console (Webmaster Tools) and Bing Webmaster Tools if the pages (on the subdomain, for example) all have nofollow/noindex tags, or are being blocked by the robots.txt file. There are some pages from a data feed on a subdomain I manage that have these characteristics, which I cannot change. I am wondering whether it is better to simply exclude those pages from GWT and BWT submissions, thereby avoiding the errors and warnings they generate, or to tell Google and Bing about them anyway, on the chance that those nofollow pages may be indexed/contextualised in some way, making it worth the effort. Many thanks!
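As a quick aside: before deciding what to submit, it can help to confirm which of those feed pages actually carry a noindex signal. A rough Python sketch follows; the URLs are hypothetical stand-ins for the subdomain pages described above, and it only checks the X-Robots-Tag header and the meta robots tag, not robots.txt.

```python
# Rough sketch only: check whether a set of URLs carries a noindex signal
# (X-Robots-Tag header or meta robots tag) before deciding to submit them.
# The URLs below are hypothetical stand-ins; robots.txt is not checked here.
import requests

urls = [
    "https://feeds.example.com/widget-1",
    "https://feeds.example.com/widget-2",
]

for url in urls:
    try:
        resp = requests.get(url, timeout=10)
    except requests.RequestException as exc:
        print(f"{url}: request failed ({exc})")
        continue

    header_directives = resp.headers.get("X-Robots-Tag", "").lower()
    body = resp.text.lower()
    # Crude meta check - good enough for a quick audit, not for edge cases.
    meta_noindex = 'name="robots"' in body and "noindex" in body

    if "noindex" in header_directives or meta_noindex:
        print(f"{url}: noindex detected")
    else:
        print(f"{url}: no noindex signal found")
```

If every URL comes back with noindex, submitting them mostly buys crawl-error noise rather than any realistic chance of indexing.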
White Hat / Black Hat SEO | uworlds
-
Strategies to recover from a Google Penalty?
Two years ago we took over a client who had a hacked site and had also signed up with a black hat SEO team that set up 50 spammy directory links back to the site. Since then we have cleaned up the hacks, had the site reviewed by Google and re-added to the search index, and disavowed all the directory links through GWT. Over the last two years we've encouraged the client to create new content and have developed a small but engaged social following. The website is www.fishtalesoutfitting.com/. The site's domain authority is 30, but it struggles to rank higher than 20 for even uncompetitive long-tail keywords. Other sites with much lower domain authorities outrank it for our primary keywords. We are now overhauling the site design and content, and are considering moving to an entirely new URL for the primary domain, using 301 redirects from the old URLs to the new ones. We'd welcome insight into why the current site may still be getting penalized, as well as thoughts on our strategy or other recommendations for recovering from the events of two years ago. Thank you.
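Side note for anyone replicating the disavow step: the file Google's disavow tool accepts is just plain text with "domain:" entries and "#" comments. A minimal sketch of generating one is below; the directory domains are invented placeholders, not the client's actual links.

```python
# Minimal sketch: build a disavow file in the plain-text format Google's
# disavow tool accepts ("domain:" entries plus "#" comments). The domains
# below are invented placeholders for the ~50 spammy directories mentioned.
spammy_domains = [
    "cheap-directory-one.example",
    "cheap-directory-two.example",
]

lines = ["# Directory links created by a previous SEO vendor"]
lines += [f"domain:{domain}" for domain in sorted(set(spammy_domains))]

with open("disavow.txt", "w", encoding="utf-8") as fh:
    fh.write("\n".join(lines) + "\n")

print(f"Wrote {len(lines) - 1} disavow entries to disavow.txt")
```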
White Hat / Black Hat SEO | mlwilmore
-
Sudden influx of 404s affecting SERPs?
Hi Mozzers, We've recently updated a site of ours that really should be doing much better than it currently is. It's got a good backlink profile (with some spammy links recently removed), has age on its side and has had a tremendous amount of SEO work done (think deep-level work, schema.org, site speed and much, much more). Because of this, we assumed thin, spammy content was the issue, so we removed those pages and created new, content-rich pages in their place. For example, we removed a link-wheel page (https://www.google.co.uk/search?q=site%3Asuperted.com%2Fpopular-searches), which as you can see had a **lot** of results (circa 138,000), and added relevant pages for each of our entertainment 'categories'.
White Hat / Black Hat SEO | ChimplyWebGroup
http://www.superted.com/category.php/bands-musicians - this page has some historical value, so the Mozbar shows some Page Authority here.
http://www.superted.com/profiles.php/wedding-bands - this is an example of a page linked from the above page. These are brand-new URLs designed to provide relevant content. The old link-wheel pages contained nothing but links (usually 50+ on every page) and no textual content, yet were still driving small amounts of traffic to our site.
The new pages contain quality, relevant content (i.e. our list of Wedding Bands - what else would a searcher be looking for?) but some haven't been indexed/ranked yet. So with this in mind I have a few questions: How do we drive traffic to these new pages? We've started to create industry-relevant links through our own members to the top-level pages (http://www.superted.com/category.php/bands-musicians). The link profile here _should_ flow to some degree to the lower-level pages, right? We've got almost 500 'sub-categories', and getting quality links to all of these is just unrealistic in the short term. How long until we should be indexed? We've seen an 800% drop in organic search traffic since removing our spammy link-wheel page. This is to be expected to a degree, as those were the only real pages driving traffic. However, we saw this drop (and got rid of the pages) almost exactly a month ago - surely we should be re-indexed and re-algo'ed by now?! **Are we still being algorithmically penalised?** The old spammy pages are still indexed in Google (138,000 of them!) despite returning 404s for a month. When will these drop out of the rankings? If Google believes they still exist and we were indeed being punished for them, then it makes sense why we're still not ranking, but how do we get rid of them? I've tried submitting a manual URL removal via WMT, but to no avail. Should I 410 the pages? Have I been too hasty? I removed the spammy pages in case they were affecting us via a penalty. There was also some potential for duplicate content between the old and the new pages.
_popular-searches.php/event-services/videographer_ may have clashed with _profiles.php/videographer_, for example.
Should I have kept these pages whilst we waited for the new pages to be indexed? Any help would be extremely appreciated - I'm pulling my hair out that, after following 'guidelines', we seem to have been punished in some way for it. I assumed we just needed to give Google time to re-index, but a month should surely be enough for a site with as much SEO history as ours?
If anyone has any clues about what might be happening here, I'd be more than happy to pay for a genuine expert to take a look. If anyone has any potential ideas, I'd love to reward you with a 'good answer'. Many, many thanks in advance. Ryan.
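On the "Should I 410 the pages?" point: a 410 tells crawlers the removal is deliberate rather than accidental. A minimal sketch of what that could look like is below; Flask is used purely for illustration, since the live site appears to run PHP, and the route pattern is an assumption based on the URLs quoted above.

```python
# Sketch only: answer the retired link-wheel URLs with 410 Gone instead of
# 404. Flask is used purely for illustration (the live site appears to be
# PHP); the route pattern is an assumption based on the URLs quoted above.
from flask import Flask, abort

app = Flask(__name__)

@app.route("/popular-searches.php")
@app.route("/popular-searches.php/<path:rest>")
def retired_link_wheel(rest=None):
    # 410 signals a deliberate, permanent removal rather than a broken link.
    abort(410)

@app.route("/")
def home():
    return "All other pages keep serving normally."

if __name__ == "__main__":
    app.run()
```
-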
GA question - removing a specific page's data from the site's overall results?
I hope I can explain this clearly - hang in there! One of the clients of the law firm I work for does some SEO work for the firm, and one thing he has been doing is Googling a certain keyword over and over again to trick Google's autocomplete into suggesting that keyword. When he runs his program he generates around 500 hits to one of our attorneys' bio pages. This happens once or twice a week, and since I don't consider these hits real organic traffic, they have been really messing up my GA reports. Is there a way to exclude that landing page from my overall reports, or is there a better way to deal with the skewed data? Any help or advice is appreciated. I am still so new to SEO that I feel like a lot of my questions are obvious, but please go easy on me!
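While a proper GA view filter is the cleaner long-term fix, a stopgap is to strip the skewed landing page out of an exported report before analysis. A rough Python/pandas sketch follows; the file name, column name and page path are all hypothetical.

```python
# Stopgap sketch: drop the skewed landing page from an exported GA report
# before analysis. File name, column name and page path are hypothetical.
import pandas as pd

df = pd.read_csv("landing_pages_export.csv")   # e.g. "Landing Page", "Sessions"
skewed_page = "/attorneys/example-bio"

clean = df[df["Landing Page"] != skewed_page]
print(f"Removed {len(df) - len(clean)} rows attributed to {skewed_page}")
clean.to_csv("landing_pages_clean.csv", index=False)
```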
White Hat / Black Hat SEO | MyOwnSEO
-
Controlling crawl speed/delay through dynamic server code and 503s
Lately I'm experiencing performance trouble caused by bot traffic. Although Googlebot is not the worst offender (it's mainly bingbot and ahrefsbot), they cause heavy server load from time to time. We run a lot of sites on one server, so heavy traffic on one site impacts other sites' performance. The problem is that 1) I want a centrally managed solution for all sites (per-site administration takes too much time), which 2) takes into account total server load instead of only one site's traffic, and 3) controls overall bot traffic instead of controlling traffic for one bot. IMO user traffic should always be prioritized higher than bot traffic. I tried "Crawl-delay:" in robots.txt, but Googlebot doesn't support that. Although my custom CMS can centrally manage robots.txt for all sites at once, it is read by bots per site and per bot, so it doesn't solve 2) and 3). I also tried controlling crawl speed through Google Webmaster Tools, which works, but again it only controls Googlebot (not other bots) and is administered per site. No solution to all three of my problems. Now I've come up with a custom-coded solution to dynamically serve 503 HTTP status codes to a certain portion of the bot traffic. The traffic portion, and which bots it applies to, can be calculated dynamically at runtime from total server load at that moment. So if a bot makes too many requests within a certain period (or whatever other coded rule I invent), some requests will be answered with a 503 while others will get content and a 200. The remaining question is: will dynamically serving 503s have a negative impact on SEO? OK, it will delay indexing speed/latency, but slow server response times do in fact have a negative impact on ranking, which is even worse than indexing latency. I'm curious about your experts' opinions...
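For concreteness, here is a rough sketch of the kind of rule described above: known bots receive a 503 with a Retry-After header whenever the one-minute load average crosses a threshold, while ordinary visitors are never throttled. The bot list, the threshold and the Flask framing are all illustrative assumptions, not recommended values; whether sustained 503s hurt rankings is exactly the open question in the post, so this only shows the mechanics.

```python
# Illustrative sketch: when the one-minute load average is high, known bots
# receive a 503 with Retry-After; ordinary visitors are never throttled.
# Bot markers, the threshold and the Flask framing are assumptions, not
# recommended values. os.getloadavg() is Unix-only.
import os
from flask import Flask, request

app = Flask(__name__)

BOT_MARKERS = ("bingbot", "ahrefsbot", "googlebot", "semrushbot")
LOAD_THRESHOLD = 4.0  # 1-minute load average above which bots are throttled

@app.before_request
def throttle_bots_under_load():
    user_agent = request.headers.get("User-Agent", "").lower()
    is_bot = any(marker in user_agent for marker in BOT_MARKERS)
    if is_bot and os.getloadavg()[0] > LOAD_THRESHOLD:
        # 503 + Retry-After means "come back later", not "this page is gone".
        return "Server busy, please retry later", 503, {"Retry-After": "3600"}

@app.route("/")
def index():
    return "Normal content for users, and for bots when load is low."
```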
White Hat / Black Hat SEO | internetwerkNU
-
How will Google deal with the crosslinks on my multiple-domain site?
Hi, I can't find any good answer to this question, so I thought: why not ask Moz.com ;-)! I have a site, let's call it webshop.xx, for a few languages/markets: German, Dutch & Belgian, English and French. I use a different TLD with a different IP for each of these languages, so I'll end up with webshop.de, webshop.nl, webshop.be, webshop.co.uk, webshop.com & webshop.fr. They all link to each other, and every subpage that is translated from another site also gets a link from the other languages, so webshop.com/stuff links to webshop.de/stuff. My main website, webshop.com, gets links from every one of these other domains, which Open Site Explorer as well as Majestic SEO see as external links (this is happening). My question: how will Google deal in the long run with the crosslinks coming from these domains? Some guesses I made: I get full external link juice (the content is translated, so unique?); I get a bit of the juice of an external link; they are actually seen as internal links; or I'll get a penalty. Thanks in advance guys!!!
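One mechanism worth noting alongside the cross-links: hreflang annotations tell Google the six domains are translated alternates of each other rather than unrelated sites. A small sketch of generating those tags for the domain set in the question follows; the language-to-domain mapping and the /stuff path are assumptions.

```python
# Sketch: emit hreflang alternate tags for each translated page so the six
# domains are declared as language/region alternates of one another. The
# language-to-domain mapping and the example path are assumptions.
DOMAINS = {
    "de": "https://webshop.de",
    "nl": "https://webshop.nl",
    "nl-be": "https://webshop.be",
    "en-gb": "https://webshop.co.uk",
    "en": "https://webshop.com",
    "fr": "https://webshop.fr",
}

def hreflang_tags(path: str) -> str:
    tags = [
        f'<link rel="alternate" hreflang="{lang}" href="{root}{path}" />'
        for lang, root in DOMAINS.items()
    ]
    tags.append(
        f'<link rel="alternate" hreflang="x-default" href="{DOMAINS["en"]}{path}" />'
    )
    return "\n".join(tags)

print(hreflang_tags("/stuff"))
```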
White Hat / Black Hat SEO | pimarketing
-
Google Preferred Agency???
I just stumbled upon an SEO company's website that says they are a 'Google Preferred Agency'. This isn't just a line of copy on the site; it's featured prominently, and they use the Google logo as well. I've never heard of a 'Google Preferred Agency'. One would think that even if there were such a thing, it would involve a link back to a profile page on Google, as they do with AdWords/Analytics partners... Am I missing something, or is this company doing something a little shady? I don't want to toss the company's name out there because I don't want to publicly bash them.
White Hat / Black Hat SEO | stevefidelity
'Stealing' link juice from 404s
As you all know, it's valuable but hard to get competitors to link to your website. I'm wondering if the following could work: Sometimes I spot that a competitor is linking to a certain external page, but he made a typo in the URL (e.g. the competitor meant to link to awesomeglobes.com/info-page/ but the link says aewsomeglobes.com/info-page/). Could I then register the typo domain and 301 it to my own domain (i.e. aewsomeglobes.com/info-page/ to mydomain.com/info-page/) and collect the link juice? Does it also work if the link is a root domain?
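Purely on the mechanics, leaving aside whether Google would count such a link: a path-preserving 301 from the typo domain could look like the sketch below. Most people would do this in the web server configuration; the Python version is just for illustration, using the hypothetical domains from the question.

```python
# Mechanics-only sketch: a path-preserving 301 from the typo domain to the
# poster's own domain. Most setups would do this in the web server config;
# the domains are the hypothetical ones from the question.
from flask import Flask, redirect, request

app = Flask(__name__)
TARGET = "https://mydomain.com"

@app.route("/", defaults={"path": ""})
@app.route("/<path:path>")
def forward(path):
    # Keep the path and query string so deep links retain their value.
    query = "?" + request.query_string.decode() if request.query_string else ""
    return redirect(f"{TARGET}/{path}{query}", code=301)
```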
White Hat / Black Hat SEO | RBenedict