Why Can't I Get Indexed?
-
I cannot seem to get my website indexed by Google! I submitted the sitemap using Google WMT about a month ago but only one page is being indexed. There are very few backlinks to the site, so I don't believe there are any penalties due to over-optimization that would prevent indexing. Also, my robots.txt file is properly configured and is not preventing any pages from being crawled.
I've tried using the "Fetch as Google" settings in WMT with no luck. Any ideas?
-
Your homepage has this tag in its header:
<meta name='robots' content='noindex,nofollow' />
That's your problem. Remove that.
-
Jolly good show there, Jesse.
-
Wow, I have absolutely no idea how that got in there and didn't even think to check. Send me your email address! I owe you a beer!
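A quick way to verify a page is free of that tag is to scan its HTML for a robots meta tag (a minimal Python sketch; a real check should fetch the live HTML, ideally with a proper HTML parser, and also look at the X-Robots-Tag response header, which blocks indexing the same way):

```python
import re

def has_noindex(html: str) -> bool:
    """Return True if the page HTML carries a robots meta tag containing 'noindex'."""
    for tag in re.findall(r"<meta[^>]+>", html, flags=re.IGNORECASE):
        if re.search(r"name=['\"]robots['\"]", tag, re.IGNORECASE) and "noindex" in tag.lower():
            return True
    return False

# The exact tag from the answer above:
page = "<head><meta name='robots' content='noindex,nofollow' /></head>"
print(has_noindex(page))  # True
```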
-
It is a new site, but the domain has been owned since 2004. I've never had an issue like this before with any other site, even after purchasing a domain and then launching a site as little as a week later. I agree with some of your suggestions, but I feel there is some other reason that this site isn't being indexed.
-
If it's a brand new site, getting fully indexed can take a while (some months), especially if, as you say, you have very few links. Work on getting to know others in your niche and complementary markets via social media and give them a reason to talk about you and share info about your site/content. Indexation is about authority and importance--the more of those your site has, the more often and more fully your site will be indexed.
-
Please share your domain. Otherwise all we can do is guess.
Related Questions
-
Removing non-www and index.php
Hi, I'm green when it comes to altering the .htaccess file to remove non-www and index.php. I think I've managed to redirect the URLs to www, but I'm not sure if I've managed to remove index.php. I'm pasting the contents of the .htaccess file here; maybe someone can identify whether I have unwanted lines of code and whether it is up to standard. (There are a lot of comments in #; not sure if they're needed, but I've left them as I don't want to screw up anything.) Thanks 🙂
# @package Joomla
# @copyright Copyright (C) 2005 - 2016 Open Source Matters. All rights reserved.
# @license GNU General Public License version 2 or later; see LICENSE.txt
# READ THIS COMPLETELY IF YOU CHOOSE TO USE THIS FILE! The line 'Options +FollowSymLinks' may cause problems with some server configurations. It is required for the use of mod_rewrite, but it may have already been set by your server administrator in a way that disallows changing it in this .htaccess file. If using it causes your site to produce an error, comment it out (add # to the beginning of the line), reload your site in your browser and test your SEF URLs. If they work, then it has been set by your server administrator and you do not need to set it here.
# No directory listings
IndexIgnore *
# Can be commented out if it causes errors, see notes above.
Options +FollowSymlinks
Options -Indexes
# Mod_rewrite in use.
RewriteEngine On
RewriteCond %{REQUEST_URI} ^/index.php/
RewriteRule ^index.php/(.*) /$1 [R,L]
# Begin - Rewrite rules to block out some common exploits. If you experience problems on your site then comment out the operations listed below by adding a # to the beginning of the line. This attempts to block the most common type of exploit attempts on Joomla!
# Block any script trying to base64_encode data within the URL.
RewriteCond %{QUERY_STRING} base64_encode[^(]([^)]) [OR]
# Block any script that includes a
On-Page Optimization | KeithBugeja
-
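For reference, a sketch of how both goals are often handled together in .htaccess (example.com is a placeholder for the real domain; the rules assume mod_rewrite is enabled, and use a permanent 301 rather than the bare R flag, which defaults to a temporary 302 — test on a staging copy before deploying):

```apache
RewriteEngine On

# Force www (replace example.com with the real domain); path and query string are preserved
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]

# Externally redirect any request that still contains index.php to the clean URL
RewriteCond %{THE_REQUEST} /index\.php [NC]
RewriteRule ^index\.php/?(.*)$ /$1 [R=301,L]
```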
Can I robots.txt an entire site to get rid of Duplicate content?
I am in the process of implementing Zendesk and will have two separate Zendesk sites with the same content to serve two separate user groups for the same product (B2B and B2C). Zendesk does not give me the option to change canonicals (nor meta tags). If I robots.txt one of the Zendesk sites, will that cover me for duplicate content with Google? Is that a good option? Is there a better option? I will also have to change some of the canonicals on my site (mysite.com) to use the Zendesk canonicals (zendesk.mysite.com) to avoid duplicate content. Will I lose ranking by changing the established page canonicals on my site to point to the new subdomain (the only option offered through Zendesk)? Thank you.
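For reference, blocking crawlers from an entire subdomain takes only a two-line robots.txt served at that subdomain's root (a sketch; note that Disallow stops crawling, but Google can still index blocked URLs it discovers through links, so noindex or canonicals are the more reliable fix where the platform allows them):

```
User-agent: *
Disallow: /
```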
On-Page Optimization | RoxBrock
-
When Content creation isn't an option...
I currently work as an SEO (an SEO in training, really). Oftentimes I get projects that require me to look at the well-established websites of big brands, the kind one would assume put a lot of effort into their sites, and make SEO changes. Additionally, they want "actionable" changes that can be made on the fly, so content creation, and most link building, is usually out of the question. Does this limit me to just changing meta titles and descriptions? What if all of that seems alright too?
On-Page Optimization | Resolute
-
Problem with getting a site to rank at all
We pushed this WordPress site live about a month ago: www.primedraftarchitecture.com. Since then we've been adding regular content: blog posts three times a week, with social posts on Facebook, Twitter, G+ and LinkedIn. We also submitted via Moz Local about 3 weeks ago and Yext about two weeks ago, and have been adding about 5 listings to small local directories a week. Webmaster Tools shows that the sitemap is valid, that the pages of the site are getting indexed, and that there are links from 7 sites, mostly directories. I'm just not seeing the site ranking for anything. We're getting zero organic traffic. I thought we did a good job not over-optimizing the pages. I'm just stymied trying to figure out what's wrong. Usually we push a site live and see at least some low rankings after just a couple of weeks. Can anyone see anything that looks bad or where we've gone wrong?
On-Page Optimization | DonaldS
-
Duplicate Content - But it isn't!
Hi All, I have a site that releases alerts for particular problems/events/happenings. Due to legal requirements we keep the majority of the content the same on each of these event pages. The URLs are all different, but the pages keep coming back as duplicate content. The canonical tag is not right for this (I don't think). E.g.:
http://www.holidaytravelwatch.com/alerts/call-to-arms/egypt/coral-sea-waterworld-resort-sharm-el-sheikh-egypt-holiday-complaints-july-2014
http://www.holidaytravelwatch.com/alerts/call-to-arms/egypt/hotel-concorde-el-salam-sharm-el-sheikh-egypt-holiday-complaints-may-2014
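For reference, if one of those alert pages were chosen as the preferred version, each near-duplicate would point at it with a canonical tag in its <head> (a sketch reusing the first URL from the question; whether consolidating these pages is acceptable depends on the legal constraints mentioned):

```html
<link rel="canonical" href="http://www.holidaytravelwatch.com/alerts/call-to-arms/egypt/coral-sea-waterworld-resort-sharm-el-sheikh-egypt-holiday-complaints-july-2014" />
```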
On-Page Optimization | Astute-Media
-
On page link question, creating an additional 'county' layer between states and zips/cities
Question: We have a large site that has a page for each of the 50 states. Each of these pages has unique content, but following the content is a MASSIVE number of links, one for each zip AND city in that state. I am also in the process of creating unique content for each of these cities and zips. HOWEVER, I was wondering: would it make sense to create an additional 'county' layer between the states and the zips/cities? Would the additional 'depth' of the links bring down the overall rank of the long-tail city and zip pages, or would the fact that the counties knock the on-page link count down from a thousand or so to a manageable 50-100 substantially improve the overall quality and ranking of the site? To illustrate: currently I have state -> city and zip pages (1,200+ links on each state page); what I want to do is state -> county (5-300 counties on each state page) -> city + zip (maybe 50-100 links on each county page). What do you guys think? Am I incurring some kind of automatic penalty for having 1,000+ links on a page?
On-Page Optimization | ilyaelbert
-
Don't understand this ... :-(
Hello, I'm going nuts as I don't understand what's going on with this domain of a client. We have the classic .htaccess redirect from http://domain.com to http://www.domain.com. But I'm getting Page Authority for both domains, and the non-www version, which shouldn't be crawled, is getting a higher PA:
http://www.myanamar.rundreisen.de - PA 34
http://myanamr-rundreisen.de - PA 36
I attach a file; you can see there that Googlebot is recognizing the 301 redirect from non-www to www. But the site isn't doing well at all in Google; it seems the home page has a penalty ... duplicate content due to the non-www and www home pages? It would be great if somebody had a hint for me ... my client is losing trust in me. Thx! GbDC4.jpg
On-Page Optimization | hgw57
-
Can you help with quality and interesting content ideas?
Hi, I'm ranking for "online biology degree" and "online wildlife biology degree", but I have bad content and an almost 100% bounce rate. Can you help me with ideas for good content and interesting information I can provide for people looking for "online wildlife biology degree", "online biology courses" and "biology degree online"? Every idea would be appreciated. Yoseph
On-Page Optimization | Joseph-Green-SEO