Error 403
-
I'm getting this message "We were unable to grade that page. We received a response code of 403. URL content not parseable" when using the On-Page Report Card. Does anyone know how to go about fixing this? I feel like I've tried everything.
-
I am getting 403 errors for this crazy url:
How do I get rid of this error?
I am also getting 404 errors for pages that do not exist anymore. How do I get rid of those?
-
Great answers, Mike!
Jessica, if you're still having issues with the Crawl Test and it seems like a tool issue, let us know at help@seomoz.org - you'll get a faster response from our Help Team for your tool questions that way (unless, of course, a mozzer like Mike beats us to it!)
-
I will check that out. Thank you so much!
-
Is there another folder on your server called resources? If so, that may be the problem. See this thread:
http://wordpress.org/support/topic/suddenly-getting-403-forbiden-error-on-one-page-only
I did run Xenu on your site and experienced the 403 error on that page only. There were other 404s that need to be fixed as well, FYI.
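If you want to see roughly what a crawler like Xenu is doing under the hood, here is a minimal sketch using only the Python standard library: it collects the links from a page so each one can then be requested and its status code recorded. The sample markup is made up for illustration; a real run would feed in the fetched HTML of your page.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkCollector(HTMLParser):
    """Gather absolute link URLs from a page, as a crawler would."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    # Resolve relative hrefs against the page URL
                    self.links.append(urljoin(self.base_url, value))

# Hypothetical markup standing in for a fetched page
html = '<a href="/resources/">Resources</a> <a href="/old-page/">Old</a>'
collector = LinkCollector("http://www.truckdriverschools.com/")
collector.feed(html)
print(collector.links)
```

Each collected URL can then be requested to see which ones come back 403 or 404.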
-
http://www.truckdriverschools.com/resources/
Thank you so much for your help!
-
Jessica,
Is the page (or pages) in question indexed by Google?
I would recommend trying another site crawl tool, such as Xenu Link Sleuth or GSiteCrawler, to see if it can crawl the site without issue. It could also be something to do with your hosting company trying to prevent denial-of-service (DoS) attacks. If you want to send me the URL, I am happy to crawl it for you with one of these tools.
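One quick way to test the DoS-protection theory yourself: some hosts serve a 403 only to non-browser user agents, which is why the page looks fine in your browser but fails in a crawl tool. A small sketch with the standard library, comparing the status code for two user-agent strings; the URL and user-agent values here are placeholders, so substitute the page from your report.

```python
import urllib.request
from urllib.error import HTTPError

def status_for(url, user_agent):
    """Return the HTTP status code the server sends for this user agent."""
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.getcode()
    except HTTPError as err:
        # urlopen raises on 4xx/5xx; the code is still on the exception
        return err.code

# A browser-like UA vs. a crawler-like UA; a mismatch (200 vs. 403)
# suggests the host or a security module is blocking bots.
print(status_for("http://example.com/", "Mozilla/5.0"))
print(status_for("http://example.com/", "MyCrawler/1.0"))
```

If the browser-style user agent gets a 200 and the crawler-style one gets a 403, the fix is usually in the host's firewall or security plugin settings, not in your content.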
-
It is actually WordPress. Everything looks fine when visiting the URL and inside the WordPress admin, but when I grade the SEO content it gives me the 403 error.
It happened after I added the SEO text to a page that had images within the same text box. Does that make a difference?
-
It seems like your website is blocking access to the file. A few questions:
1. Are you blocking robots from this URL in your robots.txt file?
2. Do you get the 403 error when you manually visit the page?
3. What CMS if any are you using? If it is Joomla we've seen some strange things happen with some of the security modules when using crawl tools such as GSiteCrawler.
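To check question 1 without guessing, you can test a URL against robots.txt rules programmatically. A minimal sketch using the standard library's `robotparser`; the `Disallow` line below is hypothetical, so substitute the contents of your own site's robots.txt (normally served at yoursite.com/robots.txt).

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt contents -- replace with your site's actual file
robots_txt = """\
User-agent: *
Disallow: /resources/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# Check whether a generic crawler may fetch the page that returns 403
print(parser.can_fetch("*", "http://example.com/resources/"))  # False
print(parser.can_fetch("*", "http://example.com/about/"))      # True
```

Note that a robots.txt block would normally make a tool skip the page rather than report a 403, so if this check comes back allowed, the server itself (host firewall, security plugin, or folder permissions) is the more likely culprit.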