Please check that you have relevant keywords in Overview -> Manage Keywords. The reports are generated based on these keywords only.
- Nitin
Hi Rafi
I have never come across any cache where such a thing exists. Also, it totally defeats the purpose of a date. If you can give a real example, it will be easier to understand.
Second, though Google does not guarantee that it will use the markup, that is just a warning. I have yet to come across a website with proper content and valid markup that Google does not use in search results. This warning is in place for sites which Google's systems recognize as spammy, keyword-stuffed, phishing sites, etc.
The Rich Snippet tool is working fine, but the Google crawler has not yet crawled the page again. You can see the cached version of your page, and it says they crawled it on Mar 24. So when Google crawls it again, the markup will be picked up.
You can resubmit your sitemap to Google to speed this up, but there is still no guarantee that Google will recrawl it within a day.
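If it helps, one way to nudge this along at the time (in addition to resubmitting in Webmaster Tools) was to ping Google with your sitemap URL directly; the sitemap address below is just a placeholder for your own:

```
http://www.google.com/ping?sitemap=http://www.example.com/sitemap.xml
```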
IMO it is more about ease of use for the end user and less about SEO. If you have a good help sub-domain, it will automatically redirect users to the product site.
Still, if I had to make a decision, I would compare metrics like pages/visit, time on site, and bounce rate of the help site and the main site. If the help site's metrics are better than the main site's, adding the content to the main site will add value; otherwise it will deplete value. Also, if it is only one product, it makes sense to have the help within the main site, but for multiple products you are better off with sub-domains (product-wise, not docs vs. main).
Please decide what is best for your users first, keeping SEO as a second priority. Hope it helps.
On-Page Reports are generated weekly, so it may take up to a week to actually get one. In the meantime you can use the research tools to see them. They can be found here.
Note that you will need to provide the URL as well in this case; the weekly reports pick that up automatically.
Stephen
You are a Guru, and I suspect you are already working on this, but there are some issues on the site. Some pages do not have meta descriptions, some have multiple H1/H2 tags, image sizes are large, and image alt text is missing. A lot can be done on this site.
You should check with http://pro.seomoz.org/tools/crawl-test or fetch the page as Googlebot so that the error can be found. BTW, I checked with a spider and it was working fine.
Also, it may be a one-time issue, i.e. your website was technically down at the same time Roger crawled it. So check the website logs as well.
The SEO spider is showing the meta descriptions and is not saying that the content is duplicate, which means it is not checking rel=canonical on these pages either. So it is not an issue.
Note that duplicate titles / descriptions do not mean the content is duplicate.
Tools which look at only one thing will flag this as an issue; tools which are built specifically to find duplicate content will not.
You should use rel=canonical only in cases where you have duplicate content, which may be the case when:
1. You want to have both pages available.
2. You have similar content due to some choices on your page - e.g. you offer pages with 10 items and 20 items per page, while the list has only 5 items in it, so both pages show the same content.
3. You need to control which URL parameters you want search engines to take care of. For example, you have 3 parameters - state, city, and street - but since taking all three together would produce a huge number of URLs, you may canonicalize only up to city even when street is present in the URL.
Also, you should use rel="prev" and rel="next" for URLs with pagination (a markup sketch is below).
You can read more about it at http://googlewebmastercentral.blogspot.in/2011/09/pagination-with-relnext-and-relprev.html
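To make the combination concrete, here is a minimal sketch (URLs and parameter names are made up) of the <head> of page 2 of a paginated list, where the canonical drops a parameter you do not want search engines to treat separately and rel="prev"/"next" mark the page's place in the series:

```html
<!-- hypothetical example: page 2 of /shoes/, reached as /shoes/?page=2&sort=price -->
<link rel="canonical" href="http://www.example.com/shoes/?page=2" />
<link rel="prev" href="http://www.example.com/shoes/?page=1" />
<link rel="next" href="http://www.example.com/shoes/?page=3" />
```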
But the pages were there before the add-on was added, right?
If they were, then Google may have crawled them, and SEOmoz may have picked them up from Google or some other engine, which resulted in the issue.
So I suggest waiting and watching, as you get Crawl Errors from SEOmoz every week.
1. It has to be a personal profile and not a company profile. In case an article / blog has more than one author, the rich snippet implementation should handle that properly.
2. Right now Google+ seems to be the only way to do it (see the markup sketch below).
You can refer to Google's guide on author information here - http://support.google.com/webmasters/bin/answer.py?hl=en&answer=1408986.
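As a rough sketch of what that usually looked like (the profile ID below is made up), the article links to the author's Google+ profile with rel="author", and the Google+ profile lists the site under "Contributor to":

```html
<!-- hypothetical example: link from the article page to the author's Google+ profile -->
<a href="https://plus.google.com/112233445566778899" rel="author">About the author</a>

<!-- or, as a link element in the page's <head> -->
<link rel="author" href="https://plus.google.com/112233445566778899" />
```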
It is perfectly normal to redirect URLs automatically. You just need to keep the basics of SEO in mind. Danny Dover, in his book SEO Secrets, wrote: "A pyramid structure for a website allows the most possible link juice to get to all the website's pages with the fewest number of links."
One issue which generally comes up is that the old pages are not properly redirected, leaving 2 pages available to search engines and thus eroding the value of the URLs.
You should use rel=canonical only in cases where you have duplicate content, which may be the case when:
1. You want to have both pages available.
2. You have similar content due to some choices on your page - e.g. you offer pages with 10 items and 20 items per page, while the list has only 5 items in it, so both pages show the same content.
3. You need to control which URL parameters you want search engines to take care of. For example, you have 3 parameters - state, city, and street - but since taking all three together would produce a huge number of URLs, you may canonicalize only up to city even when street is present in the URL.
Also, you should use rel="prev" and rel="next" for URLs with pagination.
You can read more about it at http://googlewebmastercentral.blogspot.in/2011/09/pagination-with-relnext-and-relprev.html
You can use it. Google supports the cross-domain rel="canonical" link element.
You can check out the Google official blog for this - http://googlewebmastercentral.blogspot.in/2009/12/handling-legitimate-cross-domain.html.
Google's content guidelines say: "There are situations where it's not easily possible to set up redirects. This could be the case when you need to migrate to a new domain name using a web server that cannot create server-side redirects. In this case, you can use the rel="canonical" link element to specify the exact URL of the domain preferred for indexing. While the rel="canonical" link element is seen as a hint and not an absolute directive, we do try to follow it where possible."
So you can use it without any harm to your site.
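As a minimal sketch (both domains are made up), the page on the old domain simply points at its counterpart on the preferred domain:

```html
<!-- hypothetical example: placed in the <head> of http://old-domain.example/widgets.html -->
<link rel="canonical" href="http://new-domain.example/widgets.html" />
```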
There are many ways to do it - I recommend reading http://en.wikipedia.org/wiki/URL_redirection#Techniques
Out of the given methods, an Apache mod_rewrite 301 redirect is best, as it properly tells search engines that the page has been permanently redirected. Apart from this, you can use 'noindex' on the old URL so that search engines do not index it.
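As a rough sketch (file names are made up, and it assumes mod_alias / mod_rewrite are enabled on the server), a permanent redirect in .htaccess can be either a simple Redirect directive or a mod_rewrite rule:

```apacheconf
# hypothetical .htaccess entries

# simple one-to-one permanent redirect
Redirect 301 /old_page.html http://www.example.com/new-page.html

# the mod_rewrite equivalent
RewriteEngine On
RewriteRule ^old_page\.html$ http://www.example.com/new-page.html [R=301,L]
```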
You should use hyphens in your SEO URLs. Google treats a hyphen as a word separator, but does **not** treat an underscore that way. Google treats an underscore as a word joiner, so seo_moz is the same as seomoz to Google. In fact, using dashes over underscores will have a (minor) ranking benefit.
Also note that a 301 redirect passes 90%-99% of the link value to the redirected URL, so the earlier you do it the better.
Hope it makes sense.
Can you confirm which platform your site is built on? PHP, Drupal, ASP.NET, or something else?
Then you must have a plugin that handles redirection automatically, something like this one: http://wordpress.org/extend/plugins/redirection/.
It is highly recommended to have this kind of plugin so that when you change a URL, it automatically creates a redirect entry.
If you do not want to use one, you will need to make the redirect entries in Apache yourself.
Hope this makes sense.