SEO agency makes "hard to believe" claims
-
Hi
I operate in the highly competitive "sell house fast" niche in the UK.
Sites in the top 1-3 positions tend to have thousands of links, some of them spammy, and high Domain Authority.
My site (http://propertysaviour.co.uk) has good content and is listed with around 12 well-known directories. I have been building backlinks manually over the last 3-4 months.
The SEO agency we are looking to work with is claiming it can get my website to the first page for the above keyword.
How would you go about this strategy? What questions would you ask the SEO agency?
Which elements can I do myself? By the way, I am good at producing content!
-
There are companies that can get rankings that quickly in competitive verticals. But to do so they'll need to use techniques that are not within Google's guidelines, such as using private blog networks or injecting links by hacking other people's sites.
If this is a site that you can walk away from should it get penalized, then hiring them might make sense. I'd want to ask how they are getting links and to make sure that they are not doing anything illegal or anything that involves hacking other websites. Getting links from a private blog network (if that's what they are doing) is not illegal or immoral, but if Google catches on and the site gets penalized then you may never be able to recover it.
If this is a site that you plan to keep for the long run, then it makes more sense to hire an agency that will improve your search presence gradually. Good SEO usually takes a lot of time.
I have been building backlinks manually over the last 3-4 months.
Be careful. I'm seeing links that Google will likely see as unnatural such as:
http://www.linkaddurl.com/Arts___Humanities/Performing_Arts/Business/Real_Estate/?p=6
http://cqycxzfwzx.com/sell-my-house-fast-for-cash-your-request-fulfilled/
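If you want to triage a backlink export yourself before handing anything to an agency, a rough sketch like the one below shows the sort of check I mean. To be clear, this is my own illustration: the CSV layout, column names, spam patterns, and money-anchor list are all assumptions on my part, not an official tool or any list from Google.

```python
import csv
import re
from urllib.parse import urlparse

# Heuristic patterns for links that tend to look unnatural to Google.
# These are illustrative assumptions, not an official list.
DIRECTORY_PATTERN = re.compile(r"(addurl|add-url|add_url|linkdirectory|web-?links)", re.I)
GIBBERISH_PATTERN = re.compile(r"[bcdfghjklmnpqrstvwxz]{6,}", re.I)  # long consonant runs
MONEY_ANCHORS = {"sell house fast", "sell my house fast", "sell my house fast for cash"}

def flag_link(url: str, anchor: str) -> list:
    """Return the reasons this backlink looks unnatural (empty list = looks fine)."""
    reasons = []
    host = urlparse(url).netloc or url
    if DIRECTORY_PATTERN.search(url):
        reasons.append("directory-style URL")
    if GIBBERISH_PATTERN.search(host):
        reasons.append("gibberish domain")
    if anchor.strip().lower() in MONEY_ANCHORS:
        reasons.append("exact-match money anchor")
    return reasons

# Assumes a CSV export of your backlinks with 'url' and 'anchor_text' columns.
with open("backlinks.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        reasons = flag_link(row["url"], row.get("anchor_text", ""))
        if reasons:
            print(f"{row['url']}  ->  {', '.join(reasons)}")
```

Anything it flags still needs a human look - heuristics like these throw false positives - but it's a quick way to shortlist links worth reviewing or disavowing.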
You're in a competitive space. I am betting that those sites above you with spammy links will drop out of sight the next time Penguin hits. If they don't, it's because they're using private blog networks that Google hasn't been able to detect algorithmically, and they're prime targets for a manual penalty.
My advice for hiring an SEO company would be to ask them exactly what they will do for your site. Are they going to improve your on-page SEO? If so, how? Are they able to get you links? If so, how? If they tell you that the process is proprietary, then it's probably not above board. If they give you marketing speak like, "We'll leverage your blah blah blah and produce stellar content marketing blah blah blah," without giving you a concrete idea of what they'll do, that's not a good sign. I'd also ask for references. If they hide behind an NDA, that's not good. Good SEO companies will have people lined up to give them a reference.
-
It's still as easy as it ever was to build spammy links. But it's just not what you should be doing to build a long-term, high-traffic site.
-
The SEO agency we are looking to work with is claiming it can get my website to the first page for the above keyword.
Walk away. No one can make these claims without doing something a bit dodgy. Generally, claims like these turn into short-lived results; after a few more months Google catches up with what they are doing and you end up with a penalty.
Link building is not as easy as it once was. Now you need to have a reason for someone to want to link to you - if you want good links, that is. A good linkable asset can make a real difference, so start thinking outside the box about what you can do to make yourself stand out.
-Andy
-
Sam
Thank you. I can do the link building. What I am not so good at is finding good sources for manual link building. Do you have any recommendations?
I have used the Web Explorer tool and seen my competitors' spammy links being built. I do not want to go down that route.
Would love to get your opinion on this. What tools do you use for finding good links?
-
In all likelihood they're telling the truth.
Shocked? Perhaps, but here's another truth: in the following 12 months you'll get slapped so hard your site will be in a worse position than it is today!
Sam
P.S. Talking from experience as I once used one of these agencies!
-
Thank you for your input.
The SEO agency is claiming that in 2-3 months I should be ranking in the top 1-5 positions.
How do I increase Domain Authority and PageRank for my website? The websites I'm up against have tens of thousands of links (mostly spammy)!
I have seen a guide to building backlinks.
Help!
-
I don't have a huge amount to add to this, other than that in my experience a lot of agencies tend to be 2 or 3 years behind on general practices, especially the smaller ones.
"The times of looking to rank numero uno for one term are long gone" is completely correct though, so be very careful about an agency messing up your other keywords in the process.
-
The times of looking to rank numero uno for one term are long gone. Your existing strategy of producing good content sounds like the way to go. The risk with an agency that would focus on one term and promise number one is that they are going to inadvertently wreck your link profile.
If I were you, I would read the beginner's guide to link building https://moz.com/beginners-guide-to-link-building and continue to make good content. Don't allow an agency to manipulate the hell out of your profile for the sake of one term.
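If you want a quick sanity check on whether one term already dominates your anchor text, here's a minimal sketch in the same spirit - again, the export filename, the 'anchor_text' column, and the 20% threshold are illustrative assumptions of mine, not anything from Moz or Google:

```python
import csv
from collections import Counter

# Assumes the same kind of backlink export, with an 'anchor_text' column.
with open("backlinks.csv", newline="", encoding="utf-8") as f:
    anchors = [row["anchor_text"].strip().lower() for row in csv.DictReader(f)]

total = len(anchors)
if total == 0:
    raise SystemExit("No anchors found - check the export file.")

for anchor, count in Counter(anchors).most_common(10):
    share = count / total
    # A single commercial phrase dominating the profile is the classic
    # over-optimization pattern; the 20% cutoff is a rule of thumb of
    # my own, not a number from Google or Moz.
    flag = "  <-- heavily concentrated" if share > 0.20 else ""
    print(f"{share:6.1%}  {anchor or '(empty / image link)'}{flag}")
```

If one commercial phrase accounts for a big slice of your profile, that's exactly the kind of manipulation I'd push back on.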