Here is what I have changed it to... I found various articles, including the one listed above, and decided to go with this; I'm not sure if it is good or bad.
Posts made by ENSO
-
RE: What content should I block in WordPress with robots.txt?
-
What content should I block in WordPress with robots.txt?
I need to know if anyone has tips on creating a good robots.txt. I have read a lot of info, but I am just not clear on what I should allow and disallow on WordPress. For example, there are pages and posts, then attachments, wp-admin, wp-content, and so on. Does anyone have a good robots.txt guideline?
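For context, a commonly recommended starting point for a WordPress robots.txt is quite short — block only the admin area and let everything else (including wp-content, which holds your images and CSS) be crawled. This is a sketch, not a rule for every site, and the sitemap URL is a placeholder:

```text
User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php

Sitemap: http://www.example.com/sitemap.xml
```

Anything more aggressive (blocking feeds, query strings, wp-content) tends to cause exactly the kinds of blocked-post problems discussed in the threads below.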
-
RE: Crawl Errors Confusing Me
I have the same problem. It looks like the MSN bot is disallowed from accessing WordPress content, so pages show up as ?page=111. From what I understand so far, anything matching the pattern below is blocked from MSNBot. I don't have a definite answer for you as to what to do, but I can tell you will need to "allow" MSNBot the way Googlebot is allowed.
Disallow: /key-west-blog/*?*
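If the goal is to let MSNBot through while keeping that pattern for other crawlers, one possible approach (a sketch only — the `msnbot` token and paths are illustrative, and wildcard/Allow support varies between crawlers) is a separate user-agent group:

```text
# Group for MSNBot: no restrictions on the blog
User-agent: msnbot
Disallow:

# All other crawlers keep the query-string block
User-agent: *
Disallow: /key-west-blog/*?*
```

A crawler uses the most specific user-agent group that matches it, so MSNBot would ignore the `*` group entirely.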
-
RE: Help needed with robots.txt regarding wordpress!
So basically this site was duplicated, and apparently the robots.txt file was duplicated with it. There is no sitemap for the blog created for the ENSO Plastics site, so I am not sure how to proceed at this point. Should I just create a new robots.txt file for ensoplastics and replace this one? Or do I edit this one and go create a sitemap for my blog?
-
RE: Help needed with robots.txt regarding wordpress!
Well, that is a problem, isn't it? Like I said, I am new to a lot of this and I didn't develop either site; this robots.txt file is pointing to the wrong sitemap, so I am going to change that.
However, I am guessing I may need to change some of the rules to get it to where it is not blocking WordPress content.
-
RE: Help needed with robots.txt regarding wordpress!
Well that is a problem isn't it? Like I said I am new to a lot of this and I didn't develop either site, this robot.txt file is pointing to the wrong site map. So I am going to change that.
However I am guessing I may need to change some of the rules to get it to where it is not blocking wordpress content.
-
Help needed with robots.txt regarding wordpress!
Here is my robots.txt from Google Webmaster Tools. These are the pages that are being blocked, and I am not sure which of these rules to remove in order to unblock blog posts from being searched:
http://ensoplastics.com/theblog/?cat=743
http://ensoplastics.com/theblog/?p=240
These category pages and blog posts are blocked, so do I delete the /? rules? I am new to SEO and web development, so I am not sure why the developer of this robots.txt file would block pages and posts in WordPress. It seems to me that the whole reason someone has a blog is so it can be searched and get more exposure for SEO purposes.
Is there a reason I should block any pages contained in WordPress?
Sitemap: http://www.ensobottles.com/blog/sitemap.xml
User-agent: Googlebot
Disallow: /*/trackback
Disallow: /*/feed
Disallow: /*/comments
Disallow: /?
Disallow: /*?
Disallow: /page/
User-agent: *
Disallow: /cgi-bin/
Disallow: /wp-admin/
Disallow: /wp-includes/
Disallow: /wp-content/plugins/
Disallow: /wp-content/themes/
Disallow: /trackback
Disallow: /comments
Disallow: /feed
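Since the blocked URLs above are query-string URLs (?cat=743, ?p=240), the rules doing the damage are `Disallow: /?`, `Disallow: /*?`, and `Disallow: /page/` in the Googlebot group. One possible revision — a sketch to verify against the live site before deploying, not a definitive fix — drops that group entirely and keeps only the standard WordPress exclusions:

```text
Sitemap: http://www.ensobottles.com/blog/sitemap.xml

User-agent: *
Disallow: /cgi-bin/
Disallow: /wp-admin/
Disallow: /wp-includes/
Disallow: /wp-content/plugins/
Disallow: /wp-content/themes/
Disallow: /trackback
Disallow: /comments
Disallow: /feed
```

With the query-string rules gone, Googlebot can reach the ?p= and ?cat= URLs again, while the admin and plugin directories stay off-limits.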
-
RE: Google Webmaster tools error?
Thanks, more thumbs up for you! That worked; got it done. If you read this response, can you explain whether that will resolve my duplicate page issues when the site is crawled? I have been ending up with duplicate page titles for 52 pages. I was told to go and add a canonical tag to each page, but will this resolve that duplicate page detail?
-
RE: Google Webmaster tools error?
I will try this and see what happens. So I add the www and the root without www — is that what you are saying? That's what I am reading, at least.
-
Google Webmaster tools error?
So I am trying to set the URL preference in Google Webmaster Tools for my site. However, when I try to save it, it tells me to verify that I own the site. I have already done this, so where exactly can I go to verify that I own the site? Maybe I am wrong and I have not done this already, but even on the homepage of Webmaster Tools I don't see an option to "verify".
-
RE: Robots.txt is blocking Wordpress Pages from Googlebot?
Here is my robots.txt from Google Webmaster Tools. These are the pages that are being blocked, and I am not sure which of these rules to remove in order to unblock blog posts from being searched:
http://ensoplastics.com/theblog/?cat=743
http://ensoplastics.com/theblog/?p=240
These category pages and blog posts are blocked, so do I delete the /? rules? I am new to SEO and web development, so I am not sure why the developer of this robots.txt file would block pages and posts in WordPress. It seems to me that the whole reason someone has a blog is so it can be searched and get more exposure for SEO purposes.
Sitemap: http://www.ensobottles.com/blog/sitemap.xml
User-agent: Googlebot
Disallow: /*/trackback
Disallow: /*/feed
Disallow: /*/comments
Disallow: /?
Disallow: /*?
Disallow: /page/
User-agent: *
Disallow: /cgi-bin/
Disallow: /wp-admin/
Disallow: /wp-includes/
Disallow: /wp-content/plugins/
Disallow: /wp-content/themes/
Disallow: /trackback
Disallow: /comments
Disallow: /feed
-
Robots.txt is blocking Wordpress Pages from Googlebot?
I have a robots.txt file on my server which I did not develop; it was done by the web designer at the company before me. Then there is a WordPress plugin that generates a robots.txt file. How do I unblock all the WordPress pages from Googlebot?
-
RE: How do I eliminate duplicate page titles?
So if I want the www. page to be the one that shows up in Google, what do I put in the head of the ContactUs.html page exactly? As you can see, when I put this in the head, I get the critical error from SEOmoz, so this fix just isn't making sense to me right now. If I take it back out, the critical error is gone, but then I get the message that I should add the canonical to the page.
<dt>Canonical URL</dt>
<dd>"http://www.ensoplastics.com/ContactUs/ContactUs.html"</dd>
<dt>Explanation</dt>
<dd>If the canonical tag is pointing to a different URL, engines will not count this page as the reference resource and thus, it won't have an opportunity to rank. Make sure you're targeting the right page (if this isn't it, you can reset the target above) and then change the canonical tag to reference that URL.</dd>
<dt>Recommendation</dt>
<dd>We check to make sure that IF you use canonical URL tags, it points to the right page. If the canonical tag points to a different URL, engines will not count this page as the reference resource and thus, it won't have an opportunity to rank. If you've not made this page the rel=canonical target, change the reference to this URL. NOTE: For pages not employing canonical URL tags, this factor does not apply.</dd>
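For what it's worth, the canonical tag the report describes is normally a single `<link>` element in the page head; using the URL from the report above, it would look something like this (a sketch — the critical error usually means the tag's href does not exactly match the URL the crawler fetched, trailing slash and host included):

```html
<head>
  <!-- canonical tag pointing at the www version of this page -->
  <link rel="canonical" href="http://www.ensoplastics.com/ContactUs/ContactUs.html" />
</head>
```

If the crawler reached the page as ensoplastics.com (no www) while the tag says www, some tools flag that mismatch even though it is exactly the behavior a canonical tag is meant to produce.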
-
RE: How do I eliminate duplicate page titles?
Once I add it in and crawl the page, I end up with a critical error... so something is not right.
Appropriate Use of Rel Canonical
Moderate fix
<dl>
<dt>Canonical URL</dt>
<dd>"http://www.ensoplastics.com/index.html"</dd>
<dt>Explanation</dt>
<dd>If the canonical tag is pointing to a different URL, engines will not count this page as the reference resource and thus, it won't have an opportunity to rank. Make sure you're targeting the right page (if this isn't it, you can reset the target above) and then change the canonical tag to reference that URL.</dd>
<dt>Recommendation</dt>
<dd>We check to make sure that IF you use canonical URL tags, it points to the right page. If the canonical tag points to a different URL, engines will not count this page as the reference resource and thus, it won't have an opportunity to rank. If you've not made this page the rel=canonical target, change the reference to this URL. NOTE: For pages not employing canonical URL tags, this factor does not apply.</dd>
</dl>
-
RE: How do I back track Broken Links?
Yes, I see it now. The column you are talking about is all the way at the end and is titled "referrer". It would be nice if this just showed up in the results for 404 errors.
-
RE: How do I eliminate duplicate page titles?
OK, I will put the canonical in the head of the HTML files and see what happens.
-
RE: How do I eliminate duplicate page titles?
This actually does not solve the problem. I have only one index.html file, so how in the world do I access a page that does not exist in my hierarchy? For example, if I have the following two URLs, there is really only one instance of that page whose head I can edit in an HTML file; it is not like there are actually two HTML pages that exist, one for each URL. So in this case, am I just stuck creating redirects for each instance where this occurs?
www.ensoplastics.com/ContactUs/ContactUs.html
ensoplastics.com/ContactUs/ContactUs.html
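One common way to collapse the non-www duplicates without editing each page is a server-side 301 redirect. Assuming an Apache host with mod_rewrite enabled (an assumption about this server, not something confirmed in the thread), an .htaccess rule would look like this:

```apache
RewriteEngine On
# Send any request for the bare domain to the www host,
# preserving the path, with a permanent (301) redirect
RewriteCond %{HTTP_HOST} ^ensoplastics\.com$ [NC]
RewriteRule ^(.*)$ http://www.ensoplastics.com/$1 [R=301,L]
```

With this in place, ensoplastics.com/ContactUs/ContactUs.html would redirect to the www version, so crawlers only ever index one copy of each page.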
-
Why does this page show it has 166 links in the crawl?
http://ensoplastics.com/theblog/?p=213
This is a page that shows up as having over 100 links in the crawl; however, I don't understand where those links are coming from.
-
RE: How do I eliminate duplicate page titles?
So then, in the page without the www, I should insert this into the head, and do the same for all other pages?
-
RE: How do I eliminate duplicate page titles?
Which is better? I am also interested in knowing why this happens.
-
How do I eliminate duplicate page titles?
Almost... I repeat, almost all of my duplicate page titles show up as such because the page is being seen twice in the crawl. How do I prevent this?
www.ensoplastics.com/ContactUs/ContactUs.html | Contact ENSO Plastics
ensoplastics.com/ContactUs/ContactUs.html | Contact ENSO Plastics
This is what is in the CSV; there are many more just like this. How do I cut out all of these duplicate URLs?
-
RE: So I am creating an xml sitemap but what can I do to make it look better?
Thanks for the response I had an idea that would be the answer.
-
So I am creating an xml sitemap but what can I do to make it look better?
I want to make the XML sitemap I created with the XML tool viewable. However, I am not sure whether throwing in HTML code to make it look nice will interfere with the sitemap. Should I just submit the XML to Google and then have a separate sitemap.html that is viewable on my site? Or will two sitemaps complicate things?
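One low-risk option — an assumption about how you might set this up, not something the sitemap tool does for you — is to keep the sitemap pure XML and attach an XSL stylesheet so browsers render it as a readable page; crawlers simply ignore the stylesheet line. The `sitemap.xsl` filename here is hypothetical:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="sitemap.xsl"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com/</loc>
  </url>
</urlset>
```

That way you submit one file to Google and humans see a styled version of the same file, so there is no second sitemap to keep in sync.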
-
How do I back track Broken Links?
SEOmoz is great; it told me all the pages with 404 errors. However, what would be more useful is if it told me where those broken links are located. Is this not available through this site, or am I just not seeing that information?
If anyone knows of a how-to for backtracking broken links that is reasonably easy to follow, I would appreciate it.
-
Canonical URL problem
On-page analysis wanted me to add a canonical URL tag. However, I added it, then re-ran the on-page analysis, and it came up with an error. What is the proper way to add a canonical URL tag in the head of an index page? i.e., add a canonical tag to
would it be
?
Or should I ignore this for a home page?
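For a home page, the usual form is a single tag in the head pointing at the preferred version of the URL. This is a sketch; whether the href should be the bare domain root or /index.html is a judgment call, but the tag should match the URL you want indexed, and a mismatch with the URL the crawler fetched is a common cause of the error described below:

```html
<link rel="canonical" href="http://www.ensoplastics.com/" />
</link>
```

Note the report below shows the canonical pointing at /index.html; if the crawler reached the page as the bare root URL, that mismatch alone can trigger the warning.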
Because when I add it and then run the analysis again, I get this:
Appropriate Use of Rel Canonical
Moderate fix
<dl>
<dt>Canonical URL</dt>
<dd>"http://www.ensoplastics.com/index.html"</dd>
<dt>Explanation</dt>
<dd>If the canonical tag is pointing to a different URL, engines will not count this page as the reference resource and thus, it won't have an opportunity to rank. Make sure you're targeting the right page (if this isn't it, you can reset the target above) and then change the canonical tag to reference that URL.</dd>
<dt>Recommendation</dt>
<dd>We check to make sure that IF you use canonical URL tags, it points to the right page. If the canonical tag points to a different URL, engines will not count this page as the reference resource and thus, it won't have an opportunity to rank. If you've not made this page the rel=canonical target, change the reference to this URL. NOTE: For pages not employing canonical URL tags, this factor does not apply.</dd>
<dd>So do I add it or not? If I leave it in, I get a lower page rating; if I take it off, I get a higher page rating with room for improvement. </dd>
</dl>