Original content, widely quoted - yet ignored by Google
-
Our website is https://greatfire.org. We are a non-profit working to bring transparency to online censorship in China. By helping us resolve this problem you are helping us in the cause of internet freedom.
If you searched for "great firewall" or "great firewall of china", would you be interested in finding a database of which websites and searches are blocked by the Great Firewall of China? We have been running a non-profit project with this objective for almost a year, and in doing so have created the biggest and most up-to-date database of online censorship in China. Yet, to date, you cannot find it in Google by searching for any relevant keywords.
A similar website, www.greatfirewallofchina.org, is listed as #3 when searching for "great firewall". Our website provides a more accurate testing tool, as well as historic data. Regardless of whether our service is better, we believe we should at least be included in the top 10.
We have been testing an AdWords campaign to see whether our website is of interest to users searching for these keywords. For example, users who search for "great firewall of china" browse on average 2.62 pages and spend 03:18 minutes on the website. This suggests to us that our website is relevant to users searching for these keywords.
Do you have any idea what problem could be serious enough to keep us out of even the top 100 for these keywords?
We have recently posted this same question on the Google Webmaster Central but did not get a satisfactory answer: http://www.google.com/support/forum/p/Webmasters/thread?tid=5c14a7e16c07cbb7&hl=en&fid=5c14a7e16c07cbb70004b5f1d985e70e
-
Thanks very much for your reply, Jerod!
Google Webmaster Tools is set up and working. Some info:
-
No detected malware
-
1 crawl error (I think this must have been temporary; it was only reported once, and this URL is no longer disallowed by robots.txt):
- http://greatfire.org/url/190838
- URL restricted by robots.txt
- Dec 10, 2011
-
Pages crawled per day, average: 1102
-
Time spent downloading a page (in milliseconds), average: 2116
The robots.txt is mostly the standard one provided by Drupal. We've added "Disallow: /node/" because every URL worth indexing should have a cleaner alias than the raw node path. We'll look more into whether this could be the cause.
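For reference, the relevant part of the file looks roughly like this (a sketch: the stock rules are abbreviated Drupal defaults, and only the /node/ line is our custom addition):

```
User-agent: *
# Stock Drupal defaults (abbreviated)
Disallow: /admin/
Disallow: /user/register/
# Our custom addition: block raw node paths, since every
# indexable page should be reachable at a cleaner alias
Disallow: /node/
```

One thing we should double-check: Disallow rules are prefix-based, so "/node/" also blocks any internal links that still point at a raw /node/123 path rather than its alias, and Google would report those as "restricted by robots.txt".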
Anything else that you notice?
-
-
Hi, GreatFire-
We had a very similar problem with one of the sites we manage at http://www.miwaterstewardship.org/. The website is pretty good and the domain has dozens of high-quality backlinks (including EDU and GOV links), but Google was being a real pain and would not display the website in the SERPs no matter what we did.
Ultimately, we think we found the solution in robots.txt. The entire site had been disallowed for quite a long time (at the client's request) while it was being built and updated. After we modified the robots.txt file, made sure Webmaster Tools was up and running, pinged the site several times, etc., it still wasn't appearing in the SERPs. After two months or more of researching, trying fixes, and working on the issue, the site finally started being displayed. The only thing we can figure is that Google was "angry" (so to speak) at us for leaving the site blocked for so long.
No one at Google would come out and tell us that this was the case or even that it was a possibility. It's just our best guess at what happened.
I can see that greatfire.org also has a rather substantial robots.txt file in place. Everything in that file looks to be in order, but it might still be causing some trouble.
Is Webmaster Tools set up? Is the site being crawled and indexed properly?
You can read up on our conversation with SEOmoz users here if you're interested: http://www.seomoz.org/q/google-refuses-to-index-our-domain-any-suggestions
Good luck with this. I know how frustrating it can be!
Jerod
-
Hi GreatFire,
With regard to the homepage content - you really don't have much there for the search engines to get their teeth into. I would work on adding a few paragraphs of text explaining what your service does and what benefits it provides to your users.
I disagree that your blog should be viewed as only an extra to your website. It can be a great way to increase your keyword referral traffic, engage with your audience and get picked up by other sites.
Just because Wikipedia has written about your topic already doesn't mean you shouldn't cover the subject in more detail - otherwise no one would have anything to write about!
As you have deep knowledge of the subject, are involved with it every day, and have a website dedicated to it, you are the perfect candidate to start producing better content and become the 'hub' for everything related to how China uses the internet.
Cheers
Andrew
-
Hi Andrew,
Thank you very much for your response. The two main differences you point out are very useful for us. We will keep working on links and social mentions.
One thing I am puzzled about, though, is the labeling of the site as "not having a lot of content". I feel this misunderstands the purpose of the website. The blog is only an extra. What we provide is a means to test whether any URL is blocked in China, along with its download speed. For each URL in our database, we provide a historical calendar view to help identify when a website was blocked or unblocked in the past.
So our website first and foremost offers a tool and a lot of non-text data. To me, expanding the text content, while I understand the reasoning, sounds like recommending that Google place a long description of what a search engine is on its front page.
If you want to read the history of the Great Firewall of China, you can do it on Wikipedia. I don't see why we should explain it, when they do it better. On the other hand, if you want to know if website X is blocked or not in China, Wikipedia is not practical since it's only manually updated. Our data offers the latest status at all times.
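To make this concrete, here is a deliberately simplified sketch of the kind of check behind each test (not our actual probe code; the function name and structure are made up for illustration): fetch the URL and record whether it succeeded and how long the download took. In practice this would run from a vantage point inside China and be compared against the same fetch from outside the firewall.

```python
import time
from urllib.request import urlopen
from urllib.error import URLError

def check_url(url, timeout=10, opener=urlopen):
    """Return (reachable, elapsed_seconds) for a single fetch attempt.

    `opener` is injectable so the same check can be pointed at
    different network vantage points (or stubbed out in tests).
    """
    start = time.monotonic()
    try:
        with opener(url, timeout=timeout) as resp:
            resp.read()  # download fully, so elapsed time reflects speed
        return True, time.monotonic() - start
    except (URLError, OSError):
        # Timeouts, resets, and DNS failures all count as "not reachable"
        return False, time.monotonic() - start
```

Comparing the (reachable, elapsed) pair from inside China with the same fetch from outside is what lets the data say "blocked" rather than merely "down".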
Do you see what I mean? It would be great to hear what you think about this.
-
Hi GreatFire,
Your competitor has a much stronger site in the following two main areas:
- More backlinks (resulting in a higher PR)
- More social mentions
Focus on building more backlinks by researching your competitors' domains with Open Site Explorer and MajesticSEO. Keep up your activity in your social circles, and get going with Google+ if you haven't already.
You should also fix your title tag to include the target keyword at the start - not at the end. It would read something like 'Great Firewall of China - bringing transparency from greatfire.org'.
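In markup terms, that's just a matter of reordering the title element, something like this (the exact wording here is only illustrative):

```html
<head>
  <title>Great Firewall of China - censorship transparency from GreatFire.org</title>
</head>
```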
Looking through your site, you don't appear to have that much content (this was also mentioned in your Google support thread), so I would focus on building out the content on the homepage and further developing your blog. For example, your 'Wukan Blocked only on Weibo' blog post is not really long enough to generate much referral traffic. Larger authority articles of 1,000+ words with richer content (link references, pictures, Google+ author/social connections, etc.) will help you far more.
Conduct the relevant keyword research for your blog posts in the same way you did for your root domain. This will keep your website niche-focused and generating traffic from lots of similar 'china firewall' terms.
Hope that helps.
Cheers,
Andrew