2000 pages indexed in Yahoo, 0 in Google, no PR. What is wrong?
-
Hello Everyone,
I have a friend with a blog site that has over 2000 pages indexed in Yahoo but none in Google, and no PageRank. The web site is http://www.livingorganicnews.com/. I know it is not the best site, but I am guessing something is wrong and I don't see it.
Can you spot it? Does he have some settings wrong? What should he do?
Thank you.
-
The site just looks like part of a blog network. The domain is five years old and the home page has a DA and PA of 34, yet it is still not indexed by Google. I searched for site:livingorganicnews.com in Google and got no results, which suggests the site has been penalized. Use Google Webmaster Tools to verify this and to find the reason.
Most probably it's penalized for being part of a paid blog network.
-
LOL, the fact that there's a tonne of clearly spun content won't help. I gather this is part of a content scraping or sharing network like LinkVine?
Have you tried reading the articles published? It could do with some quality guidelines for what gets accepted, imho.
Even if it gets indexed, it's not going to rank anywhere... this is exactly the kind of site that Panda was built to stop. Regurgitated, nonsensical, spun tosh that looks as if it was written by a lunatic and only really exists for the sake of its outgoing links, which point to other rubbish.
I'd tell your friend to give up on this site entirely and start looking at less automated ways of doing things. Google is only going to get tougher on these sites, so he's fighting a losing battle.
I don't mean to be rude, but I hope it never gets indexed. What value does it offer to anyone? Most people don't want stuff like that clogging up the web. I don't mean to sound harsh, but tell your friend the problem with the site is... it's crap.
-
Another of the many not-quite-right things on the site shows up in older posts like http://www.livingorganicnews.com/games/2010/panasonic-announced-the-jungle-handheld-gaming-platform/1965/, which end with "incoming search terms" followed by several search terms that all hyperlink back to that exact same article. Search engines will not see that as providing any value to the user (users are already on that page; they don't need a link to it), and they will see it as just another attempt to manipulate the engines.
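(If anyone wants to audit pages for this self-linking pattern at scale, here's a rough Python sketch. The HTML snippet and URLs below are made up for illustration, not pulled from the site.)

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class SelfLinkFinder(HTMLParser):
    """Collects anchors whose href resolves back to the page's own URL."""

    def __init__(self, page_url):
        super().__init__()
        self.page_url = page_url.rstrip("/")
        self.self_links = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        href = dict(attrs).get("href") or ""
        # Resolve relative hrefs against the page and compare.
        if urljoin(self.page_url, href).rstrip("/") == self.page_url:
            self.self_links.append(href)

# Hypothetical markup mimicking an "incoming search terms" block:
html = """
<p>Incoming search terms:</p>
<a href="/games/2010/panasonic-jungle/1965/">panasonic jungle</a>
<a href="/games/2010/panasonic-jungle/1965/">jungle handheld</a>
<a href="/about/">about us</a>
"""
finder = SelfLinkFinder("http://example.com/games/2010/panasonic-jungle/1965/")
finder.feed(html)
print(len(finder.self_links))  # 2: the two anchors pointing back at the page
```

Anything this flags is a link the visitor can never meaningfully click, which is exactly why engines treat it as manipulation rather than navigation.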
-
It is interesting to have a new set of eyes here. I had noticed his different writing but figured it was because English is not his first language. I will ask if he is actually writing this.
-
Keri is absolutely right.
I hadn't looked at the site's content until now. It couldn't be much worse. It is a 100% spam site which should never be indexed. Clearly the site is under a penalty.
Google's job is to satisfy a user's search query by giving them the content they seek. If you create a site like that, NO ONE will ever want to get that site as the result of a search query. Google correctly recognizes this fact and removes the site from their database.
-
When there are a couple of thousand other pages like this, yes.
http://www.livingorganicnews.com/games/2011/get-cool-with-selected-berber-carpet-tiles-now/3215/
The subject of the article is about berber carpet tiles, yet the text has links (I used bold) that are totally off base and make no sense. For example:
"The berber carpet tiles might also be renowned for the durability and stain resistance at extended stay motel rates."
"To get rid of the difficult to vacuum Provillus scam dust particles..."
"An important benefit in using berber carpet tiles is a likelihood to eliminate the damaged location alone and replace it with a new carpet tile, a comparatively low-cost way of capatrex scam damage control, to make your ground look just like new."
-
Absolutely.
It is entirely possible he has been removed from Google's index as a result of a penalty. If he links to sites that have received a penalty (mobile casinos would be a very bad choice of sites to link to), then his site could receive a penalty as well.
My suggestion is not to jump to the conclusion that the site is under a penalty. Start by checking WMT; if nothing is discovered there, submit the sitemap. If you don't see any results after a few days, then inquire with Google about whether the site is under a penalty.
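(On the sitemap-submission step: at the time, Google also accepted sitemap notifications through a simple ping endpoint, since retired. A sketch of building that ping URL in Python; the sitemap address below is just a guess at where his sitemap would live.)

```python
from urllib.parse import urlencode

# Google's (now-retired) sitemap ping endpoint.
PING_ENDPOINT = "http://www.google.com/ping"

def sitemap_ping_url(sitemap_url):
    """Build the ping URL that notifies Google of a (re)submitted sitemap."""
    return PING_ENDPOINT + "?" + urlencode({"sitemap": sitemap_url})

url = sitemap_ping_url("http://www.livingorganicnews.com/sitemap.xml")
print(url)  # ?sitemap=<url-encoded sitemap address>
```

Submitting through WMT is still the better option, since it also reports fetch errors back to you; the ping only tells Google the file exists.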
-
The text doesn't really seem like a human wrote it. The current most recent article has the title "Religious Credit card debt Enable Provides You With the Meaningful and Economical You Need". Other posts are about acne treatment reviews, alcoholism, and other seemingly random things.
It really looks like it's been through an article spinner. The article about alcoholism ends with "So, Think before you Beverage." Uh..really? Or what about "As emission safety glasses are put on in the office, they need to provide ease and comfort, safe healthy and crystal clear eyesight to make sure they are usually not golf clubs to the wearer." An article I found that wasn't spun is instead indexed 94 other times on the web.
I would say the content is why Google has not indexed it. They can't find the value to the user for returning this in a search result. Is this truly the content that your friend has put up, or has the site gotten hacked?
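(A crude way to confirm that kind of duplication yourself is word-shingle similarity between two articles. This is just a sketch; the sample strings borrow a phrase from the carpet-tile article quoted earlier.)

```python
def shingles(text, n=3):
    """Lowercase word n-grams ("shingles") of a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b, n=3):
    """Shingle overlap between two texts: 1.0 = identical, 0.0 = disjoint."""
    sa, sb = shingles(a, n), shingles(b, n)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

original = "berber carpet tiles are renowned for durability and stain resistance"
copy = "berber carpet tiles are renowned for durability and stain resistance"
print(jaccard(original, copy))  # 1.0 for an exact copy
```

An article that scores near 1.0 against 94 other pages on the web gives Google no reason to return this copy of it.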
-
Hello Bryce,
That could certainly cost him credibility, but could it be the reason for the lack of indexing?
-
Thank you Ryan,
I will ask him about GWT. Perhaps it is just a sitemap issue, but I wonder why Yahoo would spot it and Google would totally miss it. I often see a difference in the number of pages indexed, but this is the first time I have seen thousands versus zero.
-
I'm thinking that by linking out to Mobile Casinos and Polish Rock Bands, he's probably losing credibility.
-
I didn't notice any obvious problem with your site. Have you logged into Google Webmaster Tools and looked at the site? That would be the logical next step.
The robots.txt file looks fine, there is no "noindex" tag on the home page, a GA code is present on the page, etc. I would review the site in Google's WMT and look for any reported issues.
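(Those two manual checks can also be scripted if you want to re-run them. A sketch, assuming you've saved copies of the robots.txt and the home-page HTML; both strings below are illustrative, not the site's actual files.)

```python
from urllib.robotparser import RobotFileParser
import re

# Illustrative robots.txt content (not the site's actual file).
robots_txt = """User-agent: *
Disallow: /wp-admin/
"""
rp = RobotFileParser()
rp.parse(robots_txt.splitlines())
print(rp.can_fetch("Googlebot", "http://www.livingorganicnews.com/"))  # True

# Illustrative home-page HTML: look for a robots "noindex" meta tag.
html = '<html><head><meta name="robots" content="index, follow"></head></html>'
noindex = bool(re.search(r'<meta[^>]+name=["\']robots["\'][^>]+noindex', html, re.I))
print(noindex)  # False: nothing here blocks indexing
```

If both checks pass like this, crawling isn't blocked at the site level, which points back toward a quality or penalty problem rather than a technical one.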
If none are present, the next step would be to submit a sitemap. If your friend does not have a sitemap already set up, you can use http://www.xml-sitemaps.com/. I think the free version only maps 500 pages, but that is enough to get you started.
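(If he'd rather not rely on a third-party generator, a minimal sitemap is just a short XML file. A hedged sketch with placeholder URLs:)

```python
import xml.etree.ElementTree as ET

# Namespace required by the sitemaps.org protocol.
NS = "http://www.sitemaps.org/schemas/sitemap/0.9"

def build_sitemap(urls):
    """Return a minimal sitemap XML string for the given page URLs."""
    urlset = ET.Element("urlset", xmlns=NS)
    for page in urls:
        url_el = ET.SubElement(urlset, "url")
        ET.SubElement(url_el, "loc").text = page
    return ET.tostring(urlset, encoding="unicode")

sitemap = build_sitemap([
    "http://www.livingorganicnews.com/",
    "http://www.livingorganicnews.com/games/",
])
print(sitemap.count("<loc>"))  # one <loc> entry per page
```

The protocol also allows optional lastmod, changefreq, and priority children per url element, but loc alone is enough for a valid submission.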