Hey there, I'm working on search results in Dutch.
-
My biggest competitor, who ranks number 1 in Google for my main keywords, has almost nothing but links from link farms and blog comments. How is he ranked that high? Would it be a good idea to add a few of the best of these links to my mix while I work on really high-quality content?
-
As long as those guest posts are well-placed on respected, topical sites, that could help. You just have to make sure your contributions to those sites are genuinely relevant to their audiences.
-
Welcome to Dutch link building. Overall, link quality here is far lower than the average in other countries, and in some cases low-quality links still seem to be treated as high quality. We've still got some very popular link farms and directories that in any other country would go straight into your disavow file.
-
Could be! It's just that when I use Moz to look at their best links, some high-quality 'link pages' come up. They do have a couple of other pretty good links, so that could be it, I guess. Would doing 2-3 guest posts be okay?
-
Thank you, I did miss that!
-
Hi Wouter,
I would say that you don't want to follow what they are doing. There might be other reasons why they are doing so well, but without seeing the site it is almost impossible to say why.
Perhaps their content is considered exceptionally good, perhaps they have a few non-spammy links that are really high quality, or perhaps they have earned more trust from Google.
Remember that Google will ignore a lot of the spammy links from directories and blog comments. Look past the links if you want to see what is going on, as there may be something else at play.
-Andy
-
Hello, my friend.
I have noticed the same thing happening to our website and our clients' websites. As you said, we see lots of bad/spammy links pointing to competitors, and they rank high (though not always higher). Well, I asked this question here: https://moz.com/community/q/spammy-backlinks-are-working
After reading all of that, applying some common sense, and holding on to a bit of hope for the intelligence of Google's updates, I just didn't have the guts to risk the rankings we've achieved so far or the reputation of the domain.
So, as the responses in that discussion put it: if you're willing to see your website get wrecked in case everything goes south, you're more than welcome. Just write a case study or piece of research afterwards for curious minds like me!
-
Related Questions
-
Mapping ALL search data for a broad topic
Hi All,
As our company becomes a bigger and bigger entity, I'm trying to figure out how to create more autonomy. One of the key areas that needs fixing is briefing the writers on articles based on keywords. We're not just trying to go after the low-hanging fruit or the big-money keywords; we want to comprehensively cover every topic and provide genuinely good, up-to-date info (surprisingly rare in a competitive niche), and eventually cover pretty much every topic there is. We generally work on a three-tier system at the folder level: folders, then topics, then sub-topics.
The challenge is getting an agency to: a) be able to pull all of the data without being knowledgeable in our specific industry (we're specialists, so we target people who need specialist expertise as well as more mainstream stuff that run-of-the-mill people wouldn't know about); and b) know where it all fits topically, as we organise the content on a hierarchy basis and generally cover multiple smaller topics within articles.
Am I asking for the impossible here? It's the one area of the business I feel most nervous about making autonomous. Can we be as extensive and comprehensive as a wiki-type website without somebody within the business who knows the subject providing the keyword research? I did a search for all data using the main two seed keywords for this subject on Ahrefs and it came up with 168,000 rows of spreadsheet data, which obviously went way beyond the maximum I was allowed to export.
Interested in feedback, and if any agencies are up for the challenge, do let me know! I've been using Moz Pro for a long time but have never posted, so apologies if I'm describing this badly.
Requirements:
- Keywords covering all (broad niche) related queries in the UK, with no relevant UK (broad niche) keywords missed.
- Organised in a way that can be interpreted as article briefs and folder-structure instructions.
Questions:
- How would you ensure you cover every single keyword? (One rough way to bucket an export against a topic map is sketched below.)
- Assuming no specialist knowledge of X, how will you map content and know which search queries belong in which topics, and in what order? Also, where there is keyword leakage from other regions, how will you know which are UK terms and which aren't?
- With minimal X knowledge, how will you know whether you've missed an opportunity (what you don't know you don't know)?
- What specific resources will you require from us in order for this to work?
- What format will the data be provided in, and how will you present the finished work so that it can be turned into article briefs?
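No tool will guarantee full coverage, but the bucketing step itself is mechanical once someone in-house supplies the seed phrases. A minimal sketch in Python, assuming a CSV export with Keyword and Volume columns; the file name, folder names, and seed phrases are all hypothetical:

```python
import csv
from collections import defaultdict

# Hypothetical three-tier topic map: folder -> topic -> seed phrases.
# In practice this is the piece that needs in-house specialist knowledge.
TOPIC_MAP = {
    "cleaning": {
        "carpet-cleaning": ["carpet cleaner", "carpet cleaning"],
        "upholstery": ["upholstery cleaner", "sofa cleaning"],
    },
}

def assign_topic(keyword):
    """Return (folder, topic) for the first seed phrase found in the keyword."""
    kw = keyword.lower()
    for folder, topics in TOPIC_MAP.items():
        for topic, seeds in topics.items():
            if any(seed in kw for seed in seeds):
                return folder, topic
    return None  # falls into a manual-review bucket

buckets = defaultdict(list)
unmatched = []

# keywords.csv is a hypothetical Ahrefs export with Keyword and Volume columns.
with open("keywords.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        hit = assign_topic(row["Keyword"])
        if hit:
            buckets[hit].append((row["Keyword"], int(row["Volume"] or 0)))
        else:
            unmatched.append(row["Keyword"])

for (folder, topic), kws in sorted(buckets.items()):
    kws.sort(key=lambda pair: -pair[1])  # highest volume first for the brief
    print(f"/{folder}/{topic}/  {len(kws)} keywords, top: {kws[0][0]}")
print(f"{len(unmatched)} keywords need manual review")
```

The unmatched bucket is the honest answer to "what you don't know you don't know": it surfaces every query the current seed list cannot place.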
Intermediate & Advanced SEO | | d.bird0 -
I'm stumped!
I'm hoping to find a real expert to help out with this. TL;DR: Our visibility in search has started tanking and I cannot figure out why.
The whole story: In fall of 2015 I started working with Convention Nation (www.conventionnation.com). The client is trying to build a resource for convention and trade show attendees that helps them identify the events that will meet their goals (learning, networking, sales, whatever). They had a content team overseas that spent their time copy/pasting event information into our database. At the time, I identified several opportunities to improve SEO:
- Create and submit a sitemap
- Add meaningful metas
- Fix crawl errors
- Make on-page content unique and optimize it for the most visible events (largest audience likely to search)
- Publish regularly and stay active on social media
Over nine months we did these things and saw search visibility, average rank, and CTR all double or better. There was still one problem, created by our specific industry. I'll use a concrete example: MozCon. This event happens once a year, and enough is the same about it every year (namely the generalized description of the event, attendees, and outcomes) that the 2015 page was getting flagged as a duplicate of the 2016 page. The event content for most of our events was pretty thin anyway, and much of it was duplicated from other sources, so we implemented a feature that groups recurring events. My thinking was that this would reduce the perception of duplicate or obsolete content and links, and provide a nice backlink opportunity.
I expected a dip after we deployed this grouping feature; that's been consistent with other bulk content changes we've made to the site. But we are not recovering from the dip. In fact, our search visibility and traffic are dropping every week. So the current state of things is this:
- Clean crawl reports: no errors reported by Moz or Google
- Moz Domain Authority: 20; spam score: 2/17
- A little thin on incoming links, but steady growth in both social media and backlinks
- Continuing to add thin/duplicate content for unique events at a rate of 200 pages/month
- Adding solid, unique strategic content at a rate of 15 pages/month
I just cannot figure out where we've gone astray. Is there anything other than the thin/copied content that could be causing this? It wasn't hurting us before we grouped the events. What could possibly account for this trend? (A quick way to measure how similar the recurring-event pages really are is sketched below.) Help me, Moz Community, you're my only hope!
Lindsay
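Before assuming something else is wrong, it can help to quantify how similar the recurring-event pages actually are. A rough sketch using only the Python standard library; the URLs and text are placeholders, and for hundreds of pages you would sample rather than compare every pair:

```python
from difflib import SequenceMatcher
from itertools import combinations

# Placeholder event descriptions, e.g. pulled from the database or a crawl.
events = {
    "/events/mozcon-2015": "MozCon brings together SEO professionals for ...",
    "/events/mozcon-2016": "MozCon brings together SEO professionals for ...",
    "/events/widgetcon-2016": "WidgetCon is the premier widget industry event ...",
}

def similarity(a, b):
    """Ratio in [0, 1]; 1.0 means the two texts are identical."""
    return SequenceMatcher(None, a, b).ratio()

THRESHOLD = 0.85  # flag pairs whose body copy is suspiciously close
for (url_a, text_a), (url_b, text_b) in combinations(events.items(), 2):
    score = similarity(text_a, text_b)
    if score >= THRESHOLD:
        print(f"{score:.2f}  {url_a} <-> {url_b}")
```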
Intermediate & Advanced SEO | | LindsayDayton0 -
Title Tag Versus H1 Tag: Is having both the same better than different if there's only one clear winner in keyword search volume?
Hi Mozzers,
I am going through the categories on my ecommerce hire site trying to improve things and just wanted to check this query with you. My understanding is that if I have the same H1 and title tag, that gives more weight to that keyword phrase? Would I also be correct in assuming that the H1 is more important than the title tag, or should both be treated as equals in terms of SEO?
My dilemma is that for certain products we hire out, there's really only one clear winner in terms of keyword phrase. The others I find in Keyword Planner are way down the volume list, so I have tended to make the H1 and title tag the same and then use an H2 tag with a slightly different heading. Is that the best philosophy, or should I really mix them up so that the title tag, H1, and H2 are all different?
Also, my on-page content currently mentions the H1 phrase near the beginning of the content. Is this correct, or should I really be using the H2 phrase near the beginning instead? For example, one of the products we hire out is carpet cleaners, so the main keyword phrase is "carpet cleaner hire", and for our local pages it's "carpet cleaner hire [city name]". That is my title tag and H1 tag, and for my H2 tag I have something like "carpet cleaning equipment", with the content mentioning "carpet cleaner hire" near the beginning.
I don't want to look like I'm over-optimizing or mention the word "hire" too much, but being a hire website it's difficult not to, and the alternative keywords that don't include it are too varied, so they could increase bounce rates. When I look in GWT at my content keywords, the word "hire" shows a full bar. I just wondered what people think about whether what I am doing is okay. (A small audit script for checking title/H1/H2 overlap across pages is sketched below.)
Thanks,
Pete
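For what it's worth, it's easy to measure how often title and H1 collide across the category pages before deciding whether to vary them. A small Python sketch, assuming the third-party requests and beautifulsoup4 packages are installed; the URLs are placeholders:

```python
import requests
from bs4 import BeautifulSoup

# Placeholder category URLs on the hire site.
urls = [
    "https://www.example.com/carpet-cleaner-hire",
    "https://www.example.com/floor-sander-hire",
]

for url in urls:
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    title = soup.title.get_text(strip=True) if soup.title else ""
    h1 = soup.h1.get_text(strip=True) if soup.h1 else ""
    h2s = [h.get_text(strip=True) for h in soup.find_all("h2")]
    status = "title == h1" if title.lower() == h1.lower() else "title != h1"
    print(f"[{status}] {url}")
    print(f"  title: {title}\n  h1:    {h1}\n  h2s:   {h2s[:3]}")
```
-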
Website with only a portion being 'mobile friendly' -- what to tell Google?
I have a website for desktop that does a lot of things, and I have converted part of it to show pages in a mobile-friendly format based on the user's device. It's not responsive design, but actual different code with different formatting for mobile vs. desktop, with each version sharing the same page URL. Google allows this approach (dynamic serving). The mobile-friendly part of the site is not as extensive as the desktop part, so there are pages that apply to desktop but not to mobile. The functionality is therefore somewhat limited on mobile devices, and some pages should only be indexed for desktop users. How should such a page be handled for Google's crawlers? If it returns a 404 for the mobile bot, will Google still crawl it properly for desktop, or will Google see that the URL was flagged as not found and stop crawling it for desktop too? I asked a similar question yesterday, but it was not stated clearly. (A sketch of one common pattern, falling back to the desktop HTML instead of returning a 404, follows below.)
Thanks, Ted
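Google's documented pattern for dynamic serving is to send a Vary: User-Agent header, and the usual advice for desktop-only pages is to fall back to the desktop HTML for mobile user agents rather than return a 404, so no crawler sees an error for a URL that exists. A minimal Flask sketch; the routes, templates, and UA check are simplified assumptions, not a production device detector:

```python
from flask import Flask, request

app = Flask(__name__)

def is_mobile(user_agent):
    # Crude UA sniffing for illustration; real sites use a device-detection library.
    return any(token in user_agent for token in ("Mobile", "Android", "iPhone"))

# Hypothetical templates: only some pages have a mobile variant.
DESKTOP = {"/": "<p>desktop home</p>", "/reports": "<p>desktop-only reports</p>"}
MOBILE = {"/": "<p>mobile home</p>"}

@app.route("/", defaults={"path": ""})
@app.route("/<path:path>")
def page(path):
    url = "/" + path
    if url not in DESKTOP:
        return "Not found", 404  # genuinely missing on both versions
    ua = request.headers.get("User-Agent", "")
    # Fall back to the desktop HTML when no mobile variant exists,
    # instead of serving a 404 to mobile user agents.
    body = MOBILE.get(url, DESKTOP[url]) if is_mobile(ua) else DESKTOP[url]
    resp = app.make_response(body)
    resp.headers["Vary"] = "User-Agent"  # tells crawlers content varies by UA
    return resp
```

With that fallback in place, the question of a mobile 404 hurting desktop crawling doesn't arise; keeping a page out of mobile results specifically is a separate problem from returning an error.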
Intermediate & Advanced SEO | | friendoffood0 -
Interlinking vs. 'orphaning' mobile page versions in a dynamic serving scenario
Hi there, I'd love to get the Moz community's take on this. We are working on setting up dynamic serving for the mobile versions of our pages. While planning the mobile version of a page, we identified a type of navigational link that, while useful enough for desktop visitors, we feel would not be as useful to mobile visitors. We would like to remove these links from the mobile version of the page as part of offering a more streamlined mobile experience, so we feel we're making a reasonable decision with user experience in mind. On any single page, the number of links removed in the mobile version would be relatively few.
The question is: is there any danger in "orphaning" the mobile versions of certain pages because no links point to those pages from our mobile pages? Is this a legitimate concern, or is it enough that none of the desktop versions of pages are orphaned? We were not sure whether it's even possible, in Googlebot's eyes, to orphan the mobile version of a page if we use dynamic serving and there are no orphaned desktop versions of our pages. (We also plan to link to "full site" in the footer.) (One way to check what a smartphone crawler can actually reach is sketched below.)
Thank you in advance for your help,
Eric
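With dynamic serving there is only one URL per page, so a page can only be "orphaned" for the smartphone crawler if no crawlable link reaches it under a mobile user agent. One way to sanity-check that is to crawl with a mobile UA and diff the result against your full URL list. A rough Python sketch, assuming requests and beautifulsoup4 are installed; the site root, UA string, and sitemap_urls.txt dump are all hypothetical:

```python
from urllib.parse import urljoin, urlparse
import requests
from bs4 import BeautifulSoup

START = "https://www.example.com/"  # placeholder site root
MOBILE_UA = "Mozilla/5.0 (iPhone; CPU iPhone OS 17_0 like Mac OS X)"

def crawl(start, user_agent, limit=500):
    """Breadth-first crawl with a given UA; returns internal URLs reached."""
    seen, queue = {start}, [start]
    host = urlparse(start).netloc
    while queue and len(seen) < limit:
        url = queue.pop(0)
        try:
            html = requests.get(url, headers={"User-Agent": user_agent},
                                timeout=10).text
        except requests.RequestException:
            continue
        for a in BeautifulSoup(html, "html.parser").find_all("a", href=True):
            link = urljoin(url, a["href"]).split("#")[0]
            if urlparse(link).netloc == host and link not in seen:
                seen.add(link)
                queue.append(link)
    return seen

reachable = crawl(START, MOBILE_UA)
all_urls = set(open("sitemap_urls.txt").read().split())  # hypothetical URL dump
for url in sorted(all_urls - reachable):
    print("not reachable via mobile navigation:", url)
```
-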
Why are these m. results showing as blocked?
If you go to http://bit.ly/173gdWK, you'll see that m. results are showing as blocked by robots.txt, but we don't have anything in our robots.txt file that blocks m. results. Any ideas why these URLs show as blocked? (A quick check against the m. subdomain's own robots.txt is sketched below.)
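One thing worth ruling out: each hostname serves its own robots.txt, so the m. subdomain can be blocked by a file you never edited on the www side (or by how a CDN/proxy answers that request). The Python standard library can test it directly; the hostname below is a placeholder:

```python
from urllib.robotparser import RobotFileParser

# The m. subdomain serves its own robots.txt, separate from www's.
rp = RobotFileParser("https://m.example.com/robots.txt")  # placeholder host
rp.read()

for url in ("https://m.example.com/", "https://m.example.com/some-article"):
    verdict = "allowed" if rp.can_fetch("Googlebot", url) else "BLOCKED"
    print(f"{verdict}: {url}")
```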
Intermediate & Advanced SEO | | nicole.healthline0 -
Add noindex,nofollow prior to removing pages that result in 404s
We're working with another site that, unfortunately, creates a bit of a mess due to how its website has been programmed. Whenever an employee removes a page from the site through their homegrown 'content management system', the page is simply deleted and results in a 404 rather than being 301-redirected to another location on the site. The interim question, until they implement a better way of managing the website, is: should they first add noindex,nofollow to the pages that are scheduled to be removed, and then let them become 404s once they are removed? (A sketch of that sequence, using a 410 for permanently removed pages, follows below.) Of note, it is possible that some of these pages will be used again in the future, and I imagine they could then resubmit them to Google through Webmaster Tools and add the pages back to their sitemap.
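A common version of that sequence: while a page is scheduled for removal, keep serving it with a meta robots noindex (nofollow is usually unnecessary), and once it has dropped out of the index let it return a 404 or, for pages that are gone for good, a 410. A minimal Flask sketch; the page registry and states are hypothetical stand-ins for their CMS:

```python
from flask import Flask

app = Flask(__name__)

# Hypothetical registry; in their CMS this would be a 'scheduled for removal' flag.
PAGES = {
    "old-widget": {"html": "<p>Old widget page</p>", "state": "scheduled_removal"},
    "new-widget": {"html": "<p>New widget page</p>", "state": "live"},
}

NOINDEX = '<meta name="robots" content="noindex">'

@app.route("/pages/<slug>")
def page(slug):
    entry = PAGES.get(slug)
    if entry is None:
        # Removed for good: 410 signals the page is gone permanently.
        return "Gone", 410
    if entry["state"] == "scheduled_removal":
        # Keep resolving while crawlers pick up the noindex hint.
        return NOINDEX + entry["html"]
    return entry["html"]
```

The caveat is that Google has to recrawl each page to see the noindex, so flagged pages need to stay live and crawlable long enough for that to happen.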
Intermediate & Advanced SEO | | Prospector-Plastics0 -
Block search engines from URLs created by internal search engine?
Hey guys, I've got a question for you all that I've been pondering for a few days now. I'm currently doing a technical SEO audit for a large-scale directory. One major issue is that their internal search system (directory search) creates a new URL every time a user enters a search query, which creates huge amounts of duplication on the website. I'm wondering if it would be best to block search engines from crawling these URLs entirely with robots.txt? What do you think, bearing in mind there are probably thousands of these pages already in the Google index? (A sketch of the kind of robots.txt rule involved, and how to test it, is below.) Thanks, Kim
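Blocking the search URLs in robots.txt is the standard move, assuming they share a recognizable path (a hypothetical /search path is used below). One caveat for the thousands already indexed: a robots.txt block stops crawling but doesn't remove indexed URLs, so people often serve noindex on those pages first and add the Disallow once they have dropped out. The rule can be tested with Python's standard library, which only does prefix matching (Google additionally supports * and $ wildcards):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical rule set, assuming internal search URLs live under /search.
ROBOTS_TXT = """\
User-agent: *
Disallow: /search
"""

rp = RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

for url in (
    "https://directory.example.com/search?q=plumbers",  # internal search result
    "https://directory.example.com/listings/plumbers",  # real category page
):
    verdict = "crawlable" if rp.can_fetch("*", url) else "blocked"
    print(f"{verdict}: {url}")
```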
Intermediate & Advanced SEO | | Voonie0