Another Penalty Question - Should I Start from Scratch?
-
I've seen many questions on Google penalties recently, and I'm not really sure where to go from here. I realised a year or so ago that we would be living on borrowed time with our link-building methods. We have been really successful in the past and are keen to build a site that has a bit more longevity.
We have not received a warning from Google, but we have lost pretty much all of our rankings for everything.
My question concerns our backlink profile as it stands: we have been building links from various blog networks for the past three years. Is it worth rebranding and starting from scratch rather than trying to get over a million links removed?
We also have a lot of content that I guess could be classed as spam. Should I remove all of that content, or leave it running, as we are still getting some traffic from other marketing activities?
Or should I just get a new domain and transfer all the decent content?
-
Hi David,
When did the drop in rankings take place? After the blog network deindexation? After the latest Panda release? After Penguin?
From the way you've phrased your question, it sounds like you already know the answer. You have a lot of spammy content and plenty of problematic links, and you want to do things, as you put it, with a bit more longevity in mind.
It sounds to me like it will be more cost-efficient to start from scratch and do it properly this time, building with long-term goals and a strategy covering site design, content creation, and link building, than to clean everything up and work from there.
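If you do move the decent content over to a new domain, a minimal sketch of the bookkeeping side, building an old-path-to-new-URL redirect map, might look like this (the paths and domain are hypothetical examples, not from the question):

```python
def redirect_map(keep_pages, new_domain):
    """Map each old path worth keeping to its new home (e.g. 301 targets)."""
    return {path: "https://%s%s" % (new_domain, path) for path in keep_pages}

# Hypothetical pages judged worth migrating
mapping = redirect_map(["/guides/link-building/", "/blog/"], "newbrand.example")
```

Whether you actually 301 from the old domain or simply republish the content fresh is a separate judgment call; the map is just the inventory either way.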
Hope this helps,
Mark
Related Questions
-
Yet Another, Yet Important URL structure query.
Massive changes to our stock media site and structure here. While we have an extensive category system, previously our category pages have only been our search pages, with ID numbers for sorting categories. Now we have individual category pages. We have about 600 categories, up to 4 tiers deep, about 1,000,000 total products, and issues with products appearing to be duplicates. Our current URL structure for a product looks like this:
http://example.com/main-category/12345/product-name.htm
Here is how I was planning on doing the new structure:
Cat tier 1: http://example.com/category-one/
Cat tier 2: http://example.com/category-one/category-two/
Cat tier 3: http://example.com/category-one-category-two/category-three
Cat tier 4: http://example.com/category-one-category-two-category-three/category-four/
Product: http://example.com/category-one-category-two-category-three/product-name-12345.htm
Thoughts? Thanks! Craig
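A rough sketch of the proposed scheme as I read it: tiers 1-2 nest as directories, while deeper tiers collapse the parent path with hyphens. The exact collapsing rule is my interpretation of Craig's examples, not a confirmed spec:

```python
def category_url(base, path):
    """Tier 1-2: nested directories; tier 3-4: parent tiers collapsed with hyphens."""
    if len(path) <= 2:
        return base + "/" + "/".join(path) + "/"
    return base + "/" + "-".join(path[:-1]) + "/" + path[-1] + "/"

def product_url(base, path, slug, pid):
    """Product sits under the hyphen-collapsed category path, with the ID in the slug."""
    return "%s/%s/%s-%d.htm" % (base, "-".join(path), slug, pid)
```

Keeping the ID in the product slug (rather than as its own path segment) at least gives every product one stable, unique URL, which helps with the duplicate-product issue.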
Technical SEO | TheCraig0
-
Question on Google's Site: Search
A client currently has two domains with the same content on each. When I pull up a cached version of the site, I notice that Google has cached the correct page. However, when I do a site: search in Google, I see the domain that we don't want Google indexing. Is this a problem? There is no canonical tag, and I'm not sure how Google knows to cache the correct website, but it does. I'm assuming they have this set in Webmaster Tools? Any help is much appreciated! Thanks!
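One quick way to audit the situation is to check which host, if any, each page's rel=canonical points at. A simplified sketch (the regex assumes rel appears before href inside the tag, which is a shortcut, not a robust HTML parser):

```python
import re

def canonical_host(html):
    """Pull the host out of a page's rel=canonical link, or None if absent."""
    m = re.search(r'<link[^>]+rel=["\']canonical["\'][^>]+href=["\']([^"\']+)', html)
    if not m:
        return None
    # Strip the scheme, keep everything up to the first slash
    return re.sub(r'^https?://', '', m.group(1)).split('/')[0]
```

Running this across both domains would quickly confirm whether a canonical is present anywhere, and if it isn't, adding one pointing at the preferred domain is the usual fix for two domains serving identical content.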
Technical SEO | jeff_46mile0
-
Why has Google stopped showing one domain and switched to showing another that points to the same website?
My client has a website here: http://www.savannahchiropractic.com/ The site ranked really well for searches for the term "savannah chiropractor". Sometime last week (April 21, mobilegeddon?) the site slipped in rankings, but reappeared under the domain name www.whitemarshchiropractic.com. Both URLs go to the exact same site, and I don't think the whitemarshchiropractic.com URL is even a redirect. I'm not sure how it's set up, but it's definitely not a duplicate site. The ranking is still lower than savannahchiropractic.com had, but it now appears. Anyone have any insights on this?
Technical SEO | aj6130
-
Domain hacked and redirected to another domain
Two weeks ago my home page, plus some others, had a 301 redirect to a cloned domain for about a week (due to a hack). The original pages were then de-indexed, and the new bad domain was indexed and in effect stole my rankings. Then the 301 was removed/cleaned from my domain, and the bad domain was fully de-indexed via a request I made in WMT (this was one week ago). Then my pages came back into the index, but without any ranking power (as if they were just in the supplemental index). It's been like this for a week now, and the algorithms have not been able to correct it. So how do I get this damage undone or corrected? Can someone at Google reverse/cancel the 301 ranking transfer, since the algorithms don't seem able to? I have the option to do a "Change of Address" in WMT from the bad domain to my domain, but I don't think this would work properly, because it says I also need to place a 301 on the bad domain back to mine. Would a change of address still work without the 301? Please advise on what to do to get my rankings back to where they were.
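As part of the cleanup, it's worth double-checking that no injected redirect rules linger in the server config. A rough sketch that scans .htaccess-style text for Redirect/RewriteRule lines pointing off-domain (a heuristic for triage, not a full Apache parser; the domain names are made up):

```python
def suspicious_redirects(htaccess_text, my_domain):
    """Flag Redirect/RewriteRule lines that send traffic to a foreign domain."""
    hits = []
    for line in htaccess_text.splitlines():
        stripped = line.strip()
        lowered = stripped.lower()
        if (lowered.startswith(("redirect", "rewriterule"))
                and "http" in lowered
                and my_domain not in lowered):
            hits.append(stripped)
    return hits
```

Anything this flags deserves a manual look; hacked-site redirects are often hidden in .htaccess or in PHP templates rather than anywhere visible in the CMS.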
Technical SEO | Dantek0
-
SEOMoz Crawler vs Googlebot Question
I read somewhere that SEOMoz's crawler marks a page in its Crawl Diagnostics as duplicate content if it doesn't have more than 5% unique content (I can't find that statistic anywhere on SEOMoz to confirm it, though). We are an eCommerce site, so many of our pages share the same sidebar, header, and footer links. The pages flagged by SEOMoz as duplicates have these same links, but they have unique URLs and category names. Because they're not actual duplicates of each other, canonical tags aren't the answer. Also, because inventory might automatically come back in stock, we can't use 301 redirects on these "duplicate" pages. It seems like the sidebar, header, and footer links are what's causing these pages to be flagged as duplicates. Does the SEOMoz crawler mimic the way Googlebot works? Also, is Googlebot smart enough not to count the sidebar and header/footer links when looking for duplicate content?
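The 5% figure is the question's, not a confirmed threshold, but the underlying idea (how much of a page's text survives after discounting words shared with another page) can be sketched like this:

```python
def unique_content_ratio(page_text, other_text):
    """Fraction of this page's words that do not appear on the other page."""
    words = page_text.lower().split()
    if not words:
        return 0.0
    shared = set(other_text.lower().split())
    unique = [w for w in words if w not in shared]
    return len(unique) / float(len(words))
```

On a thin category page, the shared sidebar/header/footer text can easily dwarf the handful of unique words, which is exactly why such pages trip word-overlap duplicate checks even though their URLs and titles differ.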
Technical SEO | ElDude0
-
Keyword density question.
For instance, if the keyword I'm targeting on a specific page is "New Orleans", the keyword is everywhere it's supposed to be: title, meta, content, internal links, etc. So when I check my most relevant keywords with different tools, it always breaks the phrase up like:
new - 12 times, 2.3%
orleans - 12 times, 2.3%
Should I try to fix this, or is this normal? And does Google view this as one keyword when evaluating my site?
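Most density tools tokenize on single words, which is why the phrase gets split; that's an artifact of the tool, not of the page. Counting the whole phrase yourself is straightforward; a minimal sketch:

```python
def phrase_count_and_density(text, phrase):
    """Count whole-phrase occurrences and the share of words they cover."""
    words = text.lower().split()
    target = phrase.lower().split()
    n = len(target)
    if not words or n == 0:
        return 0, 0.0
    count = sum(1 for i in range(len(words) - n + 1) if words[i:i + n] == target)
    return count, count * n / float(len(words))
```

This treats "New Orleans" as one unit, so a 12-occurrence phrase shows up as 12, not as two separate 12-counts for "new" and "orleans".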
Technical SEO | Nola5040
-
Different URLS for our multi language pages caused penalty?
Hi all, We have a website, www.phoneboxlanguage.com, with 4 different language versions (Spanish, French, Italian, German). We have all the different versions on totally different URLs; e.g. the French URL is www.cours-telephone-anglais.com. Recently this month we saw a huge drop in SERPs for all the 'foreign' language pages. This had happened before for the Spanish and French versions, which we put down to keyword density issues, so we created new URLs for those pages. However, now all 4 foreign-language sites have dropped. Could this be due instead to a penalty for duplicate sites? The content is obviously different, being in different languages, but the coding and templates for the sites are the same. How can we find out whether this is the case, and what should we do? After some research on the forum, I was thinking of creating subfolders on the original site (phoneboxlanguage.com) and then creating 301 redirects from the old dropped sites. Or would their penalties then be passed on to our original site? We are obviously very keen not to further damage the sites, and the original site remains OK. Many thanks for your kind help. Quime.
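If the four sites stay on separate domains, hreflang annotations are the standard way to tell Google they are language versions of one another rather than duplicates. A sketch generating the tag set (every version's page should carry the full set; the URL list here is illustrative, based on the two domains mentioned):

```python
def hreflang_tags(versions):
    """One alternate link per language version, sorted by language code."""
    return ['<link rel="alternate" hreflang="%s" href="%s" />' % (lang, url)
            for lang, url in sorted(versions.items())]

tags = hreflang_tags({
    "en": "http://www.phoneboxlanguage.com/",
    "fr": "http://www.cours-telephone-anglais.com/",
})
```

This doesn't rule out other causes for the drop, but it removes the "same template, same structure" ambiguity for the crawler.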
Technical SEO | Quime0
-
Complex duplicate content question
We run a network of three local websites covering three places in close proximity. Each site has a lot of unique content (mainly news), but there is a business directory that is shared across all three sites. My plan is that the search engines only index a business's listing on the site for the place where that business is actually located, i.e. listing pages for businesses in Alderley Edge are only indexed on alderleyedge.com and businesses in Prestbury only get indexed on prestbury.com, but every business has a listing page on each site. What would be the most effective way to do this? I have been using rel canonical, but Google does not always seem to honour it. Would using meta noindex tags where appropriate be the way to go? Or would changing the URL structure to include the place name and using robots.txt be a better option? As an aside, my current URL structure is along the lines of: http://dev.alderleyedge.com/directory/listing/138/the-grill-on-the-edge Would changing this have any SEO benefit? Thanks Martin
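If you go the meta noindex route, the per-listing decision rule is simple enough to sketch. A minimal version (the place slugs are illustrative, and "noindex,follow" is used so link equity still flows through the unindexed copies):

```python
def listing_robots_meta(site_place, business_place):
    """Index a listing only on the site covering the business's own town."""
    if site_place == business_place:
        return '<meta name="robots" content="index,follow" />'
    return '<meta name="robots" content="noindex,follow" />'
```

Unlike rel canonical, which is a hint Google may ignore, noindex is a directive, which makes it the more reliable tool when you need a hard guarantee about which copy appears in the index.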
Technical SEO | mreeves0