Two weeks is a pretty short time for a new site to get accurate reports from GWT. The backlinks I found weren't valuable - none with a page authority over 1.
I would secure at least one high-quality link and wait a few more weeks.
Appears I broke the site... sorry
While I have found it does count, you could always use a logo link to accomplish this.
Here is how one could test this to be sure:
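A sketch of such a test, assuming you can edit the site-wide template (the target page and anchor text below are made up so they are easy to spot in reports):
<nav>
  <a href="/nav-test-page">zqx-navtest</a>
</nav>
Once Google recrawls, check the GWT internal links report for /nav-test-page - if its link count roughly matches your page count, the nav links are being picked up.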
To be sure I understand: you have a site-wide header <nav> section, but you are not seeing the backlinks from all the pages in the GWT internal links report?
(Incidentally, my experience has shown these links do count.)
Could we see the site?
How long ago did you post the nav element?
This seems reasonable and a good way to ensure the link is allocated correctly.
I presume your issue is that you have external links inside a <nav> container?
Follow-up: it appears the specification does suggest the nav element is for internal links - the element is "primarily intended for sections that consist of major navigation blocks." External links are generally not considered major navigation, no?
http://www.whatwg.org/specs/web-apps/current-work/multipage/sections.html#the-nav-element
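For example, a header that follows that reading of the spec (hypothetical markup):
<nav>
  <a href="/">Home</a>
  <a href="/products">Products</a>
  <a href="/contact">Contact</a>
</nav>
An external link would then sit outside the nav element, e.g. a plain <a href="http://www.example.com/">partner link</a> in the footer.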
Fact is, the robots file alone will never work (the link has a good explanation why - short form: all it does is stop the bots from crawling again; it does not remove what is already in the index).
Best to request removal then wait a few days.
You should submit a removal request in Google Webmaster Tools. You have to verify the sub-domain first, then request the removal.
See this post on why the robots file alone won't work...
http://www.seomoz.org/blog/robot-access-indexation-restriction-techniques-avoiding-conflicts
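The conflict in a nutshell (a sketch): say the sub-domain's robots.txt contains
User-agent: *
Disallow: /
and its pages carry
<meta name="robots" content="noindex">
Once the crawl is blocked, Googlebot never re-fetches the pages, so it never sees the noindex tag - and anything already indexed just sits there. Hence the removal request.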
Option 1 could come with a small performance hit if you have a lot of .txt files being served from the server.
There shouldn't be any negative side effects to option 2 as long as the rewrite is clean (i.e. not accidentally a redirect) and the content of the two files is robots-compliant.
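One quick way to confirm the rewrite is clean (hypothetical hostname):
curl -sI http://subdomain.website.com/robots.txt | head -1
You want a 200 status line here; a 301 or 302 means the rule is redirecting instead of rewriting.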
Good luck
The fact that caps create a 404 error on a LAMP site is a pet peeve of mine - so is the fact that Google treats mixed-case URLs on IIS as separate (thus duplicate) URLs.
Too arbitrary to be picky about, and it causes user frustration.
Thank goodness at least DoMaInS can be WhAtEvEr.
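If you control the server config, one common workaround is to 301 any mixed-case URL to its lowercase form with a RewriteMap (a sketch - RewriteMap must live in httpd.conf or the vhost, not .htaccess):
RewriteEngine on
RewriteMap lc int:tolower
RewriteCond %{REQUEST_URI} [A-Z]
RewriteRule (.*) ${lc:$1} [R=301,L]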
Sounds like (from other discussions) you may be stuck requiring a dynamic robots.txt file which detects which domain the bot is on and changes the content accordingly. This means the server has to run all .txt files as (I presume) PHP.
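A minimal sketch of that approach, assuming Apache plus PHP (the file names and hostnames are hypothetical). First, route requests for robots.txt to a script via .htaccess:
RewriteEngine on
RewriteRule ^robots\.txt$ robots.php [L]
Then robots.php varies the rules by requested host:
<?php
header('Content-Type: text/plain');
if ($_SERVER['HTTP_HOST'] === 'subdomain.website.com') {
    echo "User-agent: *\nDisallow: /\n";  // block the sub-domain
} else {
    echo "User-agent: *\nDisallow:\n";    // allow everything elsewhere
}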
Or, you could conditionally rewrite the /robots.txt URL to a different file according to the sub-domain:
RewriteEngine on
RewriteCond %{HTTP_HOST} ^subdomain\.website\.com$
RewriteRule ^robots\.txt$ robots-subdomain.txt [L]
Then add:
User-agent: *
Disallow: /
to the robots-subdomain.txt file
(untested)
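A quick sanity check once it's live (hypothetical hostnames):
curl -s http://subdomain.website.com/robots.txt
should return the Disallow: / rules, while
curl -s http://www.website.com/robots.txt
should return your normal robots.txt.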