Original content, widely quoted - yet ignored by Google
-
Our website is https://greatfire.org. We are a non-profit working to bring transparency to online censorship in China. By helping us resolve this problem, you are helping the cause of internet freedom.
If you search for "great firewall" or "great firewall of china", would you be interested in finding a database of what websites and searches are blocked by this Great Firewall of China? We have been running a non-profit project with this objective for almost a year and in so doing have created the largest and most up-to-date database of online censorship in China. Yet, to this day, you cannot find it on Google by searching for any relevant keywords.
A similar website, www.greatfirewallofchina.org, is listed as #3 when searching for "great firewall". Our website provides a more accurate testing tool, as well as historical data. Regardless of whether our service is better, we believe we should at least be included in the top 10.
We have been testing an AdWords campaign to see whether our website is of interest to users searching with these keywords. For example, users searching for "great firewall of china" browse 2.62 pages on average and spend 3:18 minutes on the website. This suggests to us that our website is of interest to users searching for these keywords.
Do you have any idea what the problem could be that is serious enough to keep us out of even the top 100 for these keywords?
We recently posted this same question on the Google Webmaster Central forum but did not get a satisfactory answer: http://www.google.com/support/forum/p/Webmasters/thread?tid=5c14a7e16c07cbb7&hl=en&fid=5c14a7e16c07cbb70004b5f1d985e70e
-
Thanks very much for your reply, Jerod!
Google Webmaster Tools is set up and working. Some info:
-
No detected malware
-
1 crawl error (I think this must have been temporary - it was only reported once, and this URL is no longer restricted by the robots.txt):
- http://greatfire.org/url/190838
- URL restricted by robots.txt
- Dec 10, 2011
-
Pages crawled per day, average: 1102
-
Time spent downloading a page (in milliseconds), average: 2116
The robots.txt is mostly the standard one provided by Drupal. We've added "Disallow: /node/" because every interesting URL should have a more descriptive alias than that. We'll look more into whether this could be the cause.
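For reference, here is roughly what our robots.txt looks like - a trimmed sketch of Drupal's default file plus our added rule, not the full file:

    # Trimmed sketch: Drupal's standard robots.txt plus our custom rule
    User-agent: *
    Crawl-delay: 10
    Disallow: /admin/
    Disallow: /user/login/
    # Our addition: raw node paths are excluded because every public page has a friendlier alias
    Disallow: /node/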
Anything else that you notice?
-
-
Hi, GreatFire-
We had a very similar problem with one of the sites we manage, http://www.miwaterstewardship.org/. The website is pretty good and the domain has dozens of super-high-quality backlinks (including EDU and GOV links), but The Googles were being a real pain and not displaying the website in the SERPs no matter what we did.
Ultimately, we think we found the solution in robots.txt. The entire site had been disallowed for quite a long time (at the client's request) while it was being built and updated. After we modified the robots.txt file, made sure Webmaster Tools was up and running, pinged the site several times, etc., the site was still missing from the SERPs. After two months or more of researching, trying fixes, and working on the issue, the site finally started being displayed. The only thing we can figure is that Google was "angry" (for all intents and purposes) at us for leaving the site blocked for so long.
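For anyone reading along, a site-wide block is the classic two-liner below. This is an illustrative sketch, not the client's actual file; the fix was essentially removing the blanket Disallow rule:

    # While the site was being built (tells every crawler to stay out entirely):
    User-agent: *
    Disallow: /

    # After launch (an empty Disallow permits crawling of the whole site):
    User-agent: *
    Disallow: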
No one at Google would come out and tell us that this was the case or even that it was a possibility. It's just our best guess at what happened.
I can see that greatfire.org also has a rather substantial robots.txt file in place. It looks like everything is in order in that file, but it might still be causing some trouble.
Is Webmaster Tools set up? Is the site being scanned and indexed properly?
You can read up on our conversation with SEOmoz users here if you're interested: http://www.seomoz.org/q/google-refuses-to-index-our-domain-any-suggestions
Good luck with this. I know how frustrating it can be!
Jerod
-
Hi GreatFire,
With regard to the homepage content - you really don't have much there for the search engines to get their teeth into. I would work on adding a few paragraphs of text explaining what your service does and what benefits it provides to your users.
I disagree that your blog should be viewed as only an extra to your website. It can be a great way to increase your keyword referral traffic, engage with your audience and get picked up by other sites.
Just because Wikipedia has written about your topic already doesn't mean you shouldn't cover the subject in more detail - otherwise no one would have anything to write about!
As you have knowledge of the subject, are involved with it every day, and have a website dedicated to it, you are the perfect candidate to start producing better content and become the 'hub' for all things related to how China uses the internet.
Cheers
Andrew
-
Hi Andrew,
Thank you very much for your response. The two main differences you point out are very useful for us. We will keep working on links and social mentions.
One thing I am puzzled about, though, is the labeling of the site as "not having a lot of content". I feel this misunderstands the purpose of the website. The blog is only an extra. What we provide is a means to test whether any URL is blocked in China, as well as its download speed. For each URL in our database, we provide a historical calendar view to help identify when a website was blocked or unblocked in the past.
So our website first and foremost offers a tool and a lot of non-text data. To me, expanding the text content, while I understand the reasoning, sounds like recommending that Google place a long description of what a search engine is on its front page.
If you want to read the history of the Great Firewall of China, you can do that on Wikipedia. I don't see why we should explain it when they do it better. On the other hand, if you want to know whether website X is blocked in China, Wikipedia is not practical, since it's only updated manually. Our data offers the latest status at all times.
Do you see what I mean? It would be great to hear what you think about this.
-
Hi GreatFire,
Your competitor has a much stronger site in the following two main areas:
- More backlinks (resulting in a higher PR)
- More social mentions
Focus on building more backlinks by researching your competitor's domain with Open Site Explorer and MajesticSEO. Keep up your activity in your social circles, and also get going with Google+ if you haven't already.
You should also fix your title tag to include the target keyword at the start, not at the end. So it would read something like "Great Firewall of China - bringing transparency from greatfire.org".
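In HTML terms, that just means reordering your homepage's title element - something like the sketch below (the "before" line is illustrative, not your actual title, and the wording is only a suggestion):

    <!-- Before (illustrative): target keyword buried at the end -->
    <title>GreatFire.org - testing the Great Firewall of China</title>
    <!-- After: target keyword at the start -->
    <title>Great Firewall of China - bringing transparency from greatfire.org</title>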
Looking through your site, you don't appear to have that much content (this was also mentioned in your Google Support thread), so I would focus on building out the content on the homepage and also further developing your blog. For example, your 'Wukan Blocked only on Weibo' blog post is not really long enough to generate much referral traffic. Larger authority articles of 1,000+ words with richer content (link references, pictures, Google+ author/social connections, etc.) will help you far more.
Conduct the relevant keyword research for your blog posts in the same way you did for your root domain. This will keep your website niche-focused and generating referrals for lots of similar 'china firewall' terms.
Hope that helps.
Cheers,
Andrew