Unsolved 403 crawl error
-
Hi,
Moz (also reported by GSC) has reported 403 crawl errors on some of my pages. The pages actually work fine when loaded, with no visible issues at all. My web developer told me that errors are sometimes reported on working pages and there is nothing to worry about.
My question is, will the 403 errors have bad consequences for my SEO/page rankings, etc.? These are some of the pages that were reported with a 403 error but load fine:
-
A 403 crawl error is an HTTP status code indicating that the web server understood the request but refuses to authorize access to the requested resource. In simpler terms, it means the client doesn't have permission to access the page or file it is requesting. This error is often associated with restricted access, authentication problems, or improper permissions on the server.
In essence, this error tells you that you're not authorized to view the content you're trying to access. It's like encountering a locked door without the right key. This could be due to various reasons, such as restricted areas, private documents, or the need for a login and password.
If you're encountering a 403 error, here's what you can do:
Double-Check the URL: Make sure you've entered the correct URL and path.
Check Permissions: If you're the website owner, ensure that the necessary permissions are set correctly on your server for the file or directory you're trying to access.
Authentication: If the content requires authentication, make sure you're providing valid credentials.
Contact the Website: If you're trying to access someone else's website and encountering the error, it could be a server-side issue. Contact the website's administrator to let them know about the problem.
Check for IP Blocking: If you suspect your IP might be blocked, you can try accessing the website from a different network or using a VPN.
The specific solution will depend on the context and cause of the error. If you're having trouble resolving the issue, consult your hosting provider or a web developer for assistance.
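A quick way to confirm whether the 403 only affects crawlers (which would explain pages that load fine in your browser) is to request the same URL with different User-Agent headers. Below is a minimal sketch using Python's requests library; the URL is a placeholder and the user-agent strings are simplified examples, so substitute one of your reported pages:

```python
import requests

# Placeholder URL - replace with one of the pages Moz/GSC reported as 403.
URL = "https://www.example.com/reported-page/"

# Compare a typical browser user agent with common crawler user agents.
USER_AGENTS = {
    "browser": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
    "googlebot": "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)",
    "rogerbot": "rogerbot",  # simplified; Moz's crawler identifies itself as rogerbot
}

for name, user_agent in USER_AGENTS.items():
    response = requests.get(URL, headers={"User-Agent": user_agent}, timeout=10)
    print(f"{name:>10}: HTTP {response.status_code}")
```

If the browser user agent gets a 200 while the crawler user agents get a 403, the server (or a firewall in front of it) is blocking by user agent, and your host can whitelist the affected crawlers. Note that some servers block by IP rather than user agent, so a clean result here doesn't rule that out.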
Warm Regards
Rahul Gupta
https://suviditacademy.com/
-
A "403 Forbidden" error is an HTTP status code that indicates that the server understood the request, but it refuses to authorize it. This typically occurs when a web server recognizes the user's request, but the server refuses to allow access due to lack of proper permissions or other security-related reasons.
In the context of a crawl report, a "403 Forbidden" error could indicate that the crawler (such as a search engine bot or web crawler) is being denied access to certain pages or resources on a website. This could be intentional, as the website owner might want to restrict access to certain parts of their site, or it could be unintentional, caused by misconfigured server settings or security measures.
Here are some common reasons for encountering a "403 Forbidden" error in a crawl report:
Permission Issues: The crawler may not have the necessary permissions to access certain parts of the website. This could be due to misconfigured file or directory permissions on the server.
IP Blocking: The website might have implemented IP blocking or rate limiting to prevent excessive crawling or to block specific IP addresses.
User Agent Restrictions: The website might restrict access to specific user agents (the identification string sent by the crawler), which can prevent certain crawlers from accessing the site (a minimal illustration of such a rule follows this list).
Login Requirements: Some parts of the website might require user authentication or a valid session to access. If the crawler doesn't provide the necessary credentials, it could be denied access.
Security Measures: The website might have security measures in place that block access from known crawlers or bots to prevent scraping or other malicious activities.
URL Filtering: The server could be configured to deny access to specific URLs or patterns.
CAPTCHA Challenges: Some websites use CAPTCHA challenges to verify that the request is coming from a human user. Crawlers may not be able to solve these challenges.
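To make the user-agent case concrete, here is a minimal, hypothetical sketch of the kind of server-side rule that produces these 403s. It uses Flask purely for illustration and the blocklist entries are made up; real sites usually implement this in web server or WAF configuration rather than in application code:

```python
from flask import Flask, abort, request

app = Flask(__name__)

# Hypothetical blocklist - a site trying to stop scrapers might list
# crawler user agents here, sometimes catching legitimate bots too.
BLOCKED_UA_SUBSTRINGS = ("rogerbot", "dotbot", "python-requests")

@app.before_request
def block_unwanted_crawlers():
    ua = request.headers.get("User-Agent", "").lower()
    if any(token in ua for token in BLOCKED_UA_SUBSTRINGS):
        abort(403)  # request understood, but access refused

@app.route("/")
def index():
    return "Page content - browsers see this, blocked crawlers get a 403."
```

A rule like this is exactly why a page can return 200 in your browser while Moz's rogerbot or Googlebot consistently reports a 403.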
To address a "403 Forbidden" error in a crawl report, you can take the following steps:
Check Permissions: Ensure that the files and directories being accessed by the crawler have the correct permissions set on the server.
IP Whitelisting: If you are the website owner, consider whitelisting the IP address of the crawler if you want it to have access.
User Agent: If you are the crawler operator, ensure that your crawler uses a legitimate and recognizable user agent. Some websites might block unidentified user agents.
Authentication: If the website requires authentication, provide the necessary credentials in the crawler's requests.
Respect robots.txt: Make sure your crawler follows the rules specified in the website's robots.txt file to avoid accessing restricted areas (a minimal example is sketched below).
Contact Website Owner: If you are encountering "403 Forbidden" errors on someone else's website, consider reaching out to the website owner to clarify the access restrictions.
Remember to always follow ethical crawling practices and respect website terms of use when crawling or scraping content from the internet.
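As a sketch of what the "recognizable user agent" and "respect robots.txt" steps look like in practice, here is a minimal polite fetcher in Python. The bot name, contact URL, and site are placeholders to replace with your own:

```python
import time
import urllib.robotparser

import requests

# Hypothetical crawler identity - use your real bot name and a contact URL.
USER_AGENT = "ExampleBot/1.0 (+https://www.example.com/bot-info)"
SITE = "https://www.example.com"

# Load and parse the site's robots.txt before fetching anything.
robots = urllib.robotparser.RobotFileParser()
robots.set_url(f"{SITE}/robots.txt")
robots.read()

def polite_fetch(url):
    """Fetch a URL only if robots.txt allows it for our user agent."""
    if not robots.can_fetch(USER_AGENT, url):
        print(f"Skipping {url}: disallowed by robots.txt")
        return None
    time.sleep(1)  # simple courtesy delay between requests
    return requests.get(url, headers={"User-Agent": USER_AGENT}, timeout=10)

response = polite_fetch(f"{SITE}/some-page/")
if response is not None:
    print(response.status_code)
```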
-
A "403 creep blunder" commonly alludes to a status code that is returned by a web server when a web crawler or a client is endeavoring to get to a specific page or asset, yet they don't have the important consents to do as such. The HTTP status code "403 Illegal" shows that the server grasped the solicitation, however it will not approve it.
There are a couple of normal purposes behind experiencing a "403 Prohibited" mistake while creeping a site:
Inadequate Authorizations: The web server might require legitimate confirmation or approval to get to specific pages or catalogs. On the off chance that the crawler's certifications are not legitimate or missing, a "403 Prohibited" blunder can happen.
IP Impeding or Rate Restricting: Assuming the server identifies extreme solicitations from a specific IP address in a brief timeframe, it could obstruct that IP address for a brief time or uphold rate restricting to forestall misuse. This can prompt a "403 Illegal" mistake for ensuing solicitations.
Misconfigured Server Authorizations: At times, the server's record or registry consents may be set inaccurately, prompting specific documents or indexes being blocked off. This can set off a "403 Prohibited" mistake while attempting to get to those assets.
Content Limitation: Sites could have specific regions that are intended to be confined to explicit clients or gatherings. On the off chance that the client or crawler doesn't have the important honors, they will get a "403 Illegal" mistake while attempting to get to these areas.
Web Application Firewall (WAF): A few sites use WAFs to safeguard against vindictive exercises. On the off chance that the WAF recognizes the slithering way of behaving as dubious or unapproved, it could obstruct the entrance with a "403 Taboo" mistake.
To investigate and determine a "403 slither mistake," you can attempt the accompanying advances:
Actually look at Consents: Guarantee that the client specialist or crawler you are utilizing has the fitting authorizations to get to the assets on the site.
Survey IP Obstructing and Rate Cutoff points: Assuming that you're being rate-restricted or hindered, you could have to change your creeping conduct or contact the site overseer to whitelist your IP address.
Look at URL and Boundaries: Twofold check that the URLs and any boundaries you are involving in your solicitations are accurately arranged and substantial.
Authentication: Assuming that the site requires validation, ensure you are giving the right qualifications in your solicitations.
Contact Site Chairman: Assuming you accept the issue is on the site's side, contacting the site executive or specialized help could help in settling the issue.
Rememb
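Since IP blocking and rate limiting come up repeatedly above, here is a minimal sketch of throttled fetching that backs off when the server starts refusing requests. The delays and URLs are illustrative placeholders, not recommendations from this thread:

```python
import time

import requests

# Placeholder identity - replace with your own bot name and contact URL.
USER_AGENT = "ExampleBot/1.0 (+https://www.example.com/bot-info)"

def fetch_with_backoff(url, max_retries=3, base_delay=2.0):
    """Fetch a URL, backing off when the server responds with 403 or 429."""
    for attempt in range(max_retries):
        response = requests.get(url, headers={"User-Agent": USER_AGENT}, timeout=10)
        if response.status_code not in (403, 429):
            return response
        delay = base_delay * (2 ** attempt)  # wait longer after each refusal
        print(f"Got {response.status_code}, retrying in {delay:.0f}s...")
        time.sleep(delay)
    return response  # still refused after all retries

for url in ["https://www.example.com/page-1/", "https://www.example.com/page-2/"]:
    result = fetch_with_backoff(url)
    print(url, "->", result.status_code)
    time.sleep(1)  # pause between pages to stay under rate limits
```

If a crawler keeps receiving 403s even at a slow rate, the block is likely deliberate (user-agent or IP based), and only the site owner can lift it.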