Why has Google not indexed our domain?
-
Hi,
We launched tmart 60 days ago and submitted it to Google, Bing, and Yahoo 20 days later. Yahoo indexed the site within a week, but Google still has not indexed it.
What we have checked or tried:
1. Inlinks: we gained 20-50 inlinks in the first month and now show 81 inlinks in Yahoo Site Explorer.
2. Domain history: the domain was registered 13 years ago and we purchased it from Sedo last year. We did not find any problems in the domain's archived pages.
3. Page similarity: at launch the homepage was 50% similar to one of our competitors', so a month later we adjusted the page structure and modified the content, reducing the similarity to 30% (measured with tools from webconfs.com).
4. Googlebot: Googlebot has crawled our website every day since we submitted it for indexing. We opened a GWT account for the site and added the XML sitemap last week; GWT reported nothing wrong except the page loading time.
Our questions:
- Why has Google not indexed our website?
- What should we do?
Thanks,
wu
-
I see you have about 80 pages indexed now. Did you do anything to the site in the past month, or did Google just decide on its own to start indexing your site?
-
Try asking for reconsideration; since you bought the domain, it may have been flagged. Worth a try.
-
Yes, we have added the www version to Webmaster Tools.
-
Thanks, Saibose.
-
Did you add both www.tmart.com and tmart.com to Webmaster Tools?
If you only added tmart.com, add the www variant too; that may sort you out.
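One common complement to registering both variants is a server-side 301 that forces a single hostname, so Google settles on one version. A minimal .htaccess sketch, assuming the site runs Apache with mod_rewrite and that the www version is the preferred one (both are assumptions, not something stated in this thread):

```apache
# Force the www hostname with a permanent redirect (assumes mod_rewrite is enabled)
RewriteEngine On
RewriteCond %{HTTP_HOST} ^tmart\.com$ [NC]
RewriteRule ^(.*)$ http://www.tmart.com/$1 [R=301,L]
```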
-
It's tmart.com.
-
What's your URL? A second set of eyes might spot something.
-
If you are using Google Webmaster Tools, it may be a good idea to try the "Fetch as Googlebot" option. It's under the Labs menu in the left navigation sidebar, and it should give you better insight into what Google is seeing. I also think you should insert some text (if possible) on the homepage in plain HTML, so that Google doesn't only come across JavaScript but also finds meaningful content. I don't see much of a problem with your website as of now, but Fetch as Googlebot would definitely help.
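To illustrate the plain-HTML point: the aim is simply that some crawlable text is present in the initial markup rather than being injected only by script. A rough sketch; the markup and copy below are invented placeholders, not taken from the actual homepage:

```html
<!-- Crawlable text present in the initial HTML, before any JavaScript runs -->
<div id="intro">
  <h1>Tmart</h1>
  <p>Placeholder copy: a short descriptive paragraph about the store and its
     main product categories belongs here, written in plain HTML.</p>
</div>
<!-- Script-driven widgets can still load afterwards without replacing the text above -->
<script src="/js/homepage-widgets.js"></script>
```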
Related Questions
-
Old domain to new domain
Hi, a website on server A is no longer required. The owner has redirected some URLs of this website (via a plugin) to his new website on server B, but not all URLs. So when I use the site: command on website A, I see a mixture of redirected and non-redirected URLs. Therefore two websites are still being indexed in some form, causing duplication. However, weirdly, when I crawl with Screaming Frog I only see one URL, which is 301 redirected to the new website. I would have thought I'd see lots of URLs which hadn't been redirected. How come it is different from the site: command?
Anyway, how do I move to the new website completely without the old one being indexed anymore? I thought I knew this but have read so many blogs I've confused myself! Should I: redirect all URLs via the .htaccess file on the old website on server A? There are lots of pages indexed, so a lot of URLs - what if I miss some? Or: point the old domain via DNS to server B and do the redirects in website B's .htaccess file? This seems more sensible, but does this method still retain the website's rankings? Thanks for any help.
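On the first option: the redirect does not have to be maintained URL by URL. A single pattern rule on the old server catches every path, so nothing gets missed, and each old URL lands on its equivalent on the new site. A rough .htaccess sketch, assuming the old site runs Apache with mod_rewrite and the new site keeps the same URL paths; the domain names are placeholders since the real ones are not given:

```apache
# On the old domain (server A): send every request to the same path on the new domain
# website-a.com and website-b.com are placeholders for the real domains
RewriteEngine On
RewriteCond %{HTTP_HOST} ^(www\.)?website-a\.com$ [NC]
RewriteRule ^(.*)$ https://www.website-b.com/$1 [R=301,L]
```

Pointing the old domain's DNS at server B and doing the same rewrites in website B's .htaccess achieves the same result; either way it is generally the 301s that pass the old URLs' signals to the new site.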
Technical SEO | AL123al0
-
Example of Google Indexing my Feedburner Links
As you can see, there are two results for the same page. One is the correct page URL; the other has the Feedburner parameters appended:
http://www.thewebhostinghero.com/articles/improving-user-engagement-with-the-right-blog-commenting-system.html
http://www.thewebhostinghero.com/articles/improving-user-engagement-with-the-right-blog-commenting-system.html?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+thewebhostinghero+(TheWebHostingHero.com)
Can this cause duplicate content issues? Can I prevent Google from indexing my Feedburner links? My Feedburner settings are already set to noindex - what else can I do?
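One thing worth double-checking, sketched here rather than confirmed for this site: a self-referencing canonical tag on the article tells Google which version to index, because the ?utm_... variant serves the same tag and therefore points back at the clean URL. Roughly:

```html
<!-- In the <head> of the article; the tracking-parameter variant serves the same tag -->
<link rel="canonical"
      href="http://www.thewebhostinghero.com/articles/improving-user-engagement-with-the-right-blog-commenting-system.html" />
```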
Technical SEO | sbrault740
-
Domain Forwarding / Multiple Domain Names / or Rebuild Blogs on them
I am considering forwarding 3 very aged and valuable domain names to my main site. There were once over 100 blog posts on each blog, and each one has a page authority of 45 and a domain authority of 37. My question is: should I put up three blogs on the domains and link them to my site, or should I just forward the domains to my main site? Which will provide more value? I have the capability to have someone blog on them every day; however, I do not have access to any of the old blog posts. I guess I could scrape them off archive.org. Any advice would be appreciated. Scott
Technical SEO | WindshieldGuy-2762210
-
CDN Being Crawled and Indexed by Google
I'm doing an SEO site audit, and I've discovered that the site uses a content delivery network (CDN) that's being crawled and indexed by Google. Two sub-domains from the CDN are being crawled and indexed, and a small number of organic search visitors have come through them - so in a small number of cases the CDN-based content is out-ranking the root domain. It's a huge duplicate content issue (tens of thousands of URLs being crawled). What's the best way to prevent the crawling and indexing of a CDN like this? Exclude it via robots.txt? Additionally, the use of relative canonical tags (instead of absolute ones) appears to be contributing to this problem; as I understand it, these canonical tags are telling the search engines that each sub-domain is the "home" of the content/URL. Thanks! Scott
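To make the options concrete, as a sketch rather than a recommendation for this particular setup: robots.txt on the CDN hostnames stops crawling but does not remove URLs that are already indexed, whereas an X-Robots-Tag response header sent only for the CDN hostnames, or absolute canonicals pointing at the root domain, address indexing directly. Assuming Apache at the origin with mod_setenvif and mod_headers, and that the CDN passes origin headers through - all assumptions, and the hostname below is a placeholder:

```apache
# Mark requests that arrive on the CDN hostname (placeholder name), then noindex only those
SetEnvIfNoCase Host ^cdn\.example-site\.com$ IS_CDN_HOST
Header set X-Robots-Tag "noindex, follow" env=IS_CDN_HOST
```

Making the canonical tags absolute (full root-domain URLs) removes the ambiguity that lets each sub-domain claim the content as its own.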
Technical SEO | Scott-Thomas0
-
Duplicate pages in Google index despite canonical tag and URL Parameter in GWMT
Good morning Moz... This is a weird one. It seems to be a "bug" with Google, honest... We migrated our site www.three-clearance.co.uk to a Drupal platform over the new year. The old site used URL-based tracking for heat-map purposes, so for instance www.three-clearance.co.uk/apple-phones.html could also be reached via www.three-clearance.co.uk/apple-phones.html?ref=menu or www.three-clearance.co.uk/apple-phones.html?ref=sidebar and so on. GWMT was told about the ref parameter, and the canonical meta tag was used to indicate our preference. As expected, we encountered no duplicate content issues and everything was good.
This is the chain of events:
- The site was migrated to the new platform following best practice, as far as I can attest to. The only known issue was that verification for both Google Analytics (meta tag) and GWMT (HTML file) didn't transfer as expected, so between the relaunch on 22nd Dec and the fix on 2nd Jan we have no GA data, and presumably there was a period where GWMT became unverified.
- URL structure and URIs were maintained 100% (which may be a problem, now).
- Yesterday I discovered 200-ish 'duplicate meta titles' and 'duplicate meta descriptions' in GWMT. Uh oh, thought I. Expanding the report, the duplicates are in fact ?ref= versions of the same root URL. Double uh oh, thought I.
- Run, not walk, to Google and do some Fu: http://is.gd/yJ3U24 (9 versions of the same page in the index, the only variation being the ?ref= URI). Checked Bing and it has indexed each root URL once, as it should.
Situation now:
- The site no longer uses the ?ref= parameter, although of course some external backlinks that use it still exist. This was intentional and happened when we migrated.
- I 'reset' the URL parameter in GWMT yesterday, given that there's no "delete" option. The "URLs monitored" count went from 900 to 0, but today it is at over 1,000 (another wtf moment).
- I also resubmitted the XML sitemap and fetched 5 'hub' pages as Google, including the homepage and the HTML site-map page.
The ?ref= URLs in the index have the disadvantage of actually working: we transferred the URL structure, and of course the webserver just ignores the nonsense arguments and serves the page. So I assume Google assumes the pages still exist and won't drop them from the index, but will instead apply a dupe-content penalty. Or maybe call us a spam farm. Who knows.
Options that occurred to me (other than maybe making our canonical tags bold, or locating a Google bug submission form 😄) include:
A) Robots.txt-ing *?ref=* - but to me this says "you can't see these pages", not "these pages don't exist", so it isn't correct.
B) Hand-removing the URLs from the index through a page removal request per indexed URL.
C) Applying a 301 to each indexed URL (hello, Bing dirty-sitemap penalty).
D) Posting on SEOmoz, because I genuinely can't understand this. Even if the gap in verification caused GWMT to forget that we had set ?ref= as a URL parameter, the parameter was no longer in use, because the verification only went missing when we relaunched the site without this tracking.
Google is seemingly 100% ignoring our canonical tags as well as the GWMT URL setting - I have no idea why and can't think of the best way to correct the situation. Do you? 🙂
Edited to add: as of this morning the "edit/reset" buttons have disappeared from the GWMT URL Parameters page, along with the option to add a new one. There's no message explaining why, and of course the Google help page doesn't mention disappearing buttons (it doesn't even explain what 'reset' does, or why there's no 'remove' option).
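If option C turns out to be the route, it does not have to be one redirect per indexed URL: a single pattern rule can 301 any request that still carries a ref parameter back to the clean URL, covering the old backlinks as well. A rough sketch, assuming the Drupal site sits behind Apache with mod_rewrite (an assumption; the same logic could equally live in the Drupal or nginx layer):

```apache
# 301 any request carrying a legacy ?ref= parameter back to the clean URL
# Note: the trailing "?" drops the whole query string, which is fine if ref was the only parameter in use
RewriteEngine On
RewriteCond %{QUERY_STRING} (^|&)ref= [NC]
RewriteRule ^(.*)$ http://www.three-clearance.co.uk/$1? [R=301,L]
```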
Technical SEO | Tinhat0
-
Domain Masking with New Keyword-Rich Domains
Hello, friends. We have an ecommerce site, and we also own several keyword-rich domains but haven't done anything with them yet. Is there any value in using domain masking to point them to either product pages or special landing pages on our primary ecommerce site? Here's an example: the primary site is widgetzone.com, and the keyword-rich URL is acmewidget.com (which is totally blank and isn't indexed). It could point to our category page for Acme Widgets, widgetzone.com/category/acme-widgets, or it could point to a new landing page, widgetzone.com/acme-widgets. My concern is that because the keyword-rich URL hasn't been utilized at all, there's really no point in redirecting it. I'm of the mind that it's either going to be ineffective at best or a duplicate content issue at worst. What do you guys think? As a follow-up, if we don't redirect these domains, what should we do with them? Just try to sell them off rather than create totally new sites?
Technical SEO | jbreeden0
-
2000 pages indexed in Yahoo, 0 in Google. NO PR, What is wrong?
Hello everyone, I have a friend with a blog site that has over 2000 pages indexed in Yahoo but none in Google, and no PageRank. The website is http://www.livingorganicnews.com/ I know it is not the best site, but I am guessing something is wrong and I just don't see it. Can you spot it? Does he have some settings wrong? What should he do? Thank you.
Technical SEO | QuietProgress0
-
Duplicate exact-match domains flagged by Google - need help with reinclusion
Okay, I admit, I've been naughty... I have 270+ domains that are all exact match for city + keyword and have built tons of backlinks to all of them. I reaped the benefits... and now Google has found my duplicate templates and flagged them all. The question is: how do I get them reincluded quickly? Do you guys think converting each site to a basic WordPress template - simply using 275 different templates and then applying for reinclusion for each site manually - would do it? Or do you recommend: 1. creating a unique site template for each site, 2. creating unique content? Any other advice for getting reincluded, aside from owning up and saying, "Hey, I used the same template for all the sites, and I have created new templates and unique content, so please let me back in"?
Technical SEO | ilyaelbert3