Moz Q&A is closed.
After more than 13 years, and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we’re not completely removing the content - many posts will still be possible to view - we have locked both new posts and new replies.
Warnings, Notices, and Errors - don't know how to correct these
-
I have been watching my Notices, Warnings and Errors increase since I added a blog to our WordPress site. Is this affecting our SEO? We now have the following:
2 4XX errors. One is for a page whose title and nav we changed in mid-March, and one is for a page we removed. The nav on the site is working as far as I can see. This seems like a cache issue, but who knows?
20 warnings for “missing meta description tag”. These are all blog archive and author pages. Some have resulted from pagination and are “Part 2, Part 3, Part 4” etc. Others are the first page for authors. And there is one called “new page” that I can’t locate in our Pages admin and have no idea what it is.
5 warnings for “title element too long”. These are also archive pages (plus “Part 2”s and so on) that include the blog name in the title, so they aren’t pages I can access through the admin to control the page title.
71 Notices for “Rel Canonical”. The rel canonicals are all being generated automatically and are for pages of all sorts: some are content pages within the site, a bunch are blog posts, and the rest are archive pages for dates, blog categories, and pagination.
6 are 301s. These are split between blog pagination, author pages, and a couple of site content pages - contact and portfolio. Can’t imagine why these are here.
8 meta-robot nofollow. These are blog articles but only some of the posts. Don’t know why we are generating this for some and not all. And half of them are for the exact same page so there are really only 4 originals on this list. The others are dupes.
8 Blocked by meta-robots. These are also for the same 4 blog posts, but duplicated twice each.
We use All in One SEO. There is an option to use noindex for archives and categories, which I do not have enabled, and also an option to autogenerate descriptions, which I do not have enabled either.
I wasn’t concerned about these at first, but I read the questions below yesterday and now think I’d better do something, as these are mounting up. I’m wondering if I should be asking our team for some code changes, but I’m not sure what exactly would be best.
http://www.seomoz.org/q/pages-i-dont-want-customers-to-see
http://www.robotstxt.org/meta.html
Our site is http://www.fateyes.com
Thanks so much for any assistance on this!
-
Thanks so much, Mike. Good to know I can let this go and I've done my due diligence with checking it all out.
I wish our WP would always create the 301s automatically when needed, but it doesn't seem to. I just installed the Redirection plugin today for a URL change I wanted to make.
-
You really don't need to worry or stress about the missing meta descriptions and long titles.
Meta descriptions do not impact your rankings and Google will automatically create a description for your page if it appears in the SERPs.
Title tags that are too long do not impact your rankings... at least not directly. If your title tag is over by 10 or even 20 characters, it will not impact whether your page ranks or not. The 70-character limit is a suggestion, as that was the number of characters that would display in the SERPs; however, it is now based on pixel width. The only other important thing you need to know about titles is to put your most important keywords towards the beginning of the title.
If you are unsure how to edit these pages to add or change the description and title, or are unable to, it isn't going to make or break your site from a ranking standpoint.
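If your team does end up editing those templates, the title and description are just two tags in the head of the page - something like this (the wording below is only a placeholder example, not copy you need to use):
<title>Blog Archive - Part 2 | Your Site Name</title>
<meta name="description" content="A short, unique summary of what this archive page contains.">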
Some CMS will automatically generate 301s if you edit a URL's structure. It does this so that any old links pointing to the old URL will be brought to the edited URL. The CMS will not fix broken links that point to the old URL, but on the server side, if someone clicks on an old, broken link, they will be brought to the edited URL page - if that makes sense.
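For reference, a manual 301 on an Apache server is just a one-line rule in your .htaccess file, something like this (both URLs below are placeholders, not your actual pages):
Redirect 301 /old-page/ http://www.example.com/new-page/
A redirect plugin in WordPress does essentially the same thing without you having to touch that file.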
I understand that you want to attack warnings and notices and get things perfect; however, sometimes it just isn't possible, whether that is because of a CMS limitation or not knowing how to fix something complex. What does matter is that you investigate each warning and notice and make sure it is not negatively impacting your site. From the sounds of it, the handful of warnings and notices you have are just fine.
Hope this helps answer your question.
Mike
-
I'm really sorry to be confusing! It's hard to find the precise language for stuff when you don't really understand it well enough. ;o) I really appreciate that you have stuck with this and are trying to understand my concerns.
Pasted here from my last comment: "I was saying that the metarobots/nofollows were for blog posts, but in looking again, I am realizing these are blog post Comments and Replies, so I understand why WP would automatically put the noindex/nofollow on those. I typo-ed and put "robots" instead of "index". Sorry!"
So, in other words, I found that the noindex/nofollows that SEOMoz is reporting are for the blog comments which means all is well on those. I don't want Google to index comments and my replies to comments.
I'm going to see if I can ask my other question more clearly:
What I am still trying to determine is how to cut down on the number of notices and warnings by fixing or changing the conditions that are causing them.
I do not know what to do programming-wise to either create meta descriptions (since they are "missing") or fix title tags that are too long for the archive and author type pages that are generating those notices and warnings. I also don't know whether to use noindex, nofollow, or block robots so that they won't matter.
I also don't know how/where the 301s were generated as we did not implement those manually or knowingly.
I hope this is better said and more understandable. Crossed fingers as I push "Post Reply".
-
I don't completely understand where you are saying the noindex/nofollow is located. If they are in the head, they apply to the whole page; however, "nofollow" can also be used on specific links (in most cases blog comments).
The easiest thing you can do is ask yourself, "Do I want this page to be indexed by Google?" If no, then you want to use the noindex directive; however, if you want the page indexed, you will want to make sure you are not using the noindex directive.
As far as nofollows are concerned, those can/should be used for blog comments. Nofollow can be used in other instances, but it generally isn't a tag that you throw around much.
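In practice, both directives go in a single meta robots tag in the head of the page, so it really just comes down to which values you put in it. A couple of generic examples (not something you need to copy as-is):
<meta name="robots" content="noindex, follow"> - keep the page out of the index, but still follow its links
<meta name="robots" content="noindex, nofollow"> - keep the page out of the index and don't pass anything through its links
If you want a page indexed normally, you just leave the tag off entirely (that is the default behavior).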
This Matt Cutts article talks about how the nofollow directive works in relation to link juice... it is worth a read.
Hope this answers your question Gina.
Mike
-
Thanks much, John! And Mike!
404s:
These are now fixed. Thanks, Mike, for finding them. I tried to subscribe to Screaming Frog a while back and had a roadblock due to my system (older MacBook Pro and I can't update the OS any further).
Blog Archives:
I have wanted to use archive pages as alternate ways for a user to find posts. I tend to like those on other blogs. But thank you for the article link. I look forward to reading that. I am happy to hear the duplicated descriptions on archive pages are OK. I'm guessing you mean the post excerpts with the thumbnails? But I don't quite understand why SEOMoz is telling me that I am missing descriptions then, AND I don't know how to access archive pages to insert meta descriptions onto them. Or author pages, for that matter.
301s:
We did not implement 301s and I don't have a clue as to why they are there, except that I changed the name of the Gina Fiedel page. So I guess WP automatically created a 301?? That seems odd. And for the others, I have no idea. They are author pages generated from the User page in the admin, and one is our website contact page with an inquiry form.
Noindex/nofollow: "These are blog articles but only some of the posts. Don’t know why we are generating this for some and not all. And half of them are for the exact same page so there are really only 4 originals on this list. The others are dupes."
What the heck did I mean by that? Just kidding- I figured it out. I was saying that the metarobots/nofollows were for blog posts, but in looking again, I am realizing these are blog post Comments and Replies, so I understand why WP would automatically put the noindex/nofollow on those. I typo-ed and put "robots" instead of "index". Sorry!
Mike- I am still wondering which tag(s) is/are recommended for the notices and warnings. I'm not sure what to request from our programming team on this.
Again! Thank you both for all the time you've spent on this. So grateful.
-
Screaming Frog - I usually wait for SEOmoz or Webmaster Tools to identify issues, then use Screaming Frog to verify that I have fixed them. It is a great tool and FREE if your site is under 500 pages.
Here are the SEOmoz definitions of the other warnings you are talking about:
"Meta Robots Nofollow - When the meta robots tag for a page includes 'nofollow', no link juice is passed on through the links on that page.
Blocked by Meta Robots - This page is being kept out of the search engine indexes by meta-robots."
I am guessing someone put the following in the head of those blog posts:
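<meta name="robots" content="noindex, nofollow">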
It is just telling Google to not index the page and to not pass page rank or anchor text for any links on that page.
Typically the "nofollow" is used in blog comments, so commenters cannot provide links back to their personal websites.
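That's why a commenter's link usually ends up looking something like this in the page source (the URL and name here are just placeholders):
<a href="http://www.example.com/" rel="nofollow">Commenter Name</a>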
"noindex" shouldn't have any affect on rankings. It is just telling Google that certain pages are not worth putting in their index (copyright, terms of use, etc.).
"nofollow" links if not implemented correctly can look kind of spammy to Google, but in most cases you should be fine.
Does that help?
Mike
-
Hi Gina -
Great questions here. Some of these you should worry about, others are just notices and not necessarily an issue.
Fix the 4XX errors if those pages have links, or have a 404 page that redirects users. 404s are not always bad, but if the user isn't supposed to end up there (i.e. your product page has expired), then redirect.
Don't worry about the duplicated meta descriptions on archive pages, but do think about if these pages are needed. Ayima had a good post on pagination recently - http://www.ayima.com/seo-knowledge/conquering-pagination-guide.html
Same as above with the title tags on paginated archives.
Rel-canonicals are fine. Once again, just notices that they are there.
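A rel-canonical is just a link tag that your SEO plugin adds to the head of each page to tell search engines which URL is the preferred version of that content. It looks something like this (URL made up purely for illustration):
<link rel="canonical" href="http://www.fateyes.com/some-blog-post/" />
Moz is simply noting that the tag is there; it is not something you need to fix.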
Did you implement those 301s? Moz notifies you of them because they might pass less link equity than straight links, but 301s are not bad.
What do you mean by "These are blog articles but only some of the posts. Don’t know why we are generating this for some and not all. And half of them are for the exact same page so there are really only 4 originals on this list. The others are dupes." It seems that this may have been implemented manually on your side, though I don't know how All In One SEO Pack handles it (I use Yoast).
-
Thanks, Mike.
I agree about 404s! Thank you for locating those. Interestingly, the 404s that SEOMoz is picking up are the ones I was guessing were cached, because those were fixed within minutes of being created. What I didn't realize is that there were additional internal links to these pages within blog posts. How'd you find those?
I would like to fix things to keep these warnings and notices from continuing to generate - can you please explain norobots vs. noindex and how I should set those?
Since there are 8 norobots, how will these affect rankings?
thanks again!
-
Hi Gina,
You should try to fix any errors. Errors can impact your users' experience, as well as interfere with web crawlers and even impact your rankings.
404 errors:
- /balancing-seo-with-your-website-design/ links to /on-target-web-design-santa-barbara/ using anchor text "It is best to get a custom design
- /5-steps-to-increase-traffic-to-your-website/ links to /we-create-websites-that-bring-you-more-business/ using anchor text "increasing traffic to your website,"
Warnings are more or less "if you have time and can, you could fix these". They really do not impact your rankings, but if you are trying to be perfect, you could fix them.
Notices are just a "heads-up". They do not impact rankings, UNLESS you are blocking robots ; )
Long story short, fix Errors, work on Warnings when you have time, verify you already knew about the Notices.
Hope this helps.
Mike