Unsolved 403 crawl error
-
Hi,
Moz (and also GSC) has reported a 403 crawl error on some of my pages. The pages actually work fine when loaded, with no visible issues at all. My web developer told me that errors are sometimes reported on working pages and that there is nothing to worry about.
My question is: will the 403 error have bad consequences for my SEO, page rankings, etc.? These are some of the pages that have been reported with a 403 error but load fine:
-
@ghrisa65 said in 403 crawl error:

Moz (and also GSC) has reported a 403 crawl error on some of my pages... will the 403 error have bad consequences for my SEO, page rankings, etc.?
A 403 crawl error is an HTTP status code indicating that the web server understood the request but refuses to authorize access to the requested resource. In simpler terms, it means you don't have permission to access the page or file you're trying to view. This error is often associated with issues like restricted access, authentication problems, or improper permissions on the server.
In essence, this error tells you that you're not authorized to view the content you're trying to access. It's like encountering a locked door without the right key. This could be due to various reasons, such as restricted areas, private documents, or the need for a login and password.
If you're encountering a 403 error, here's what you can do:

- Double-check the URL: Make sure you've entered the correct URL and path.
- Check permissions: If you're the website owner, ensure that the necessary permissions are set correctly on your server for the file or directory being requested.
- Authentication: If the content requires authentication, make sure you're providing valid credentials.
- Contact the website: If you're trying to access someone else's website and encountering the error, it could be a server-side issue. Contact the website's administrator to let them know about the problem.
- Check for IP blocking: If you suspect your IP might be blocked, try accessing the website from a different network or using a VPN.

The specific solution will depend on the context and cause of the error. If you're having trouble resolving the issue, consult your hosting provider or a web developer for assistance.
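A quick way to test whether the 403 is user-agent based is to request one of the flagged URLs with a browser-like User-Agent and again with crawler-like ones, then compare status codes. Here is a minimal sketch, assuming Python with the third-party requests library and a placeholder URL; the crawler UA strings below are approximations, so check each vendor's documentation for the exact values:

```python
import requests

# Placeholder standing in for one of the pages flagged with a 403.
URL = "https://www.example.com/flagged-page"

# Approximate User-Agent strings; the real ones are published by each vendor.
USER_AGENTS = {
    "browser": (
        "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
        "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0 Safari/537.36"
    ),
    "googlebot": "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)",
    "rogerbot": "rogerbot",  # Moz's crawler; see Moz's docs for the full string
}

for name, ua in USER_AGENTS.items():
    # Identical GET requests that differ only in the User-Agent header.
    response = requests.get(URL, headers={"User-Agent": ua}, timeout=10)
    print(f"{name:>10}: HTTP {response.status_code}")
```

If the browser UA returns 200 while the bot UAs return 403, the server (or a firewall/CDN in front of it) is filtering by user agent, which would explain pages that load fine in a browser but fail for crawlers.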
Warm Regards
Rahul Gupta
https://suviditacademy.com/
-
A "403 Forbidden" error is an HTTP status code that indicates that the server understood the request, but it refuses to authorize it. This typically occurs when a web server recognizes the user's request, but the server refuses to allow access due to lack of proper permissions or other security-related reasons.
In the context of a crawl report, a "403 Forbidden" error could indicate that the crawler (such as a search engine bot or web crawler) is being denied access to certain pages or resources on a website. This could be intentional, as the website owner might want to restrict access to certain parts of their site, or it could be unintentional, caused by misconfigured server settings or security measures.
Here are some common reasons for encountering a "403 Forbidden" error in a crawl report:
Permission Issues: The crawler may not have the necessary permissions to access certain parts of the website. This could be due to misconfigured file or directory permissions on the server.
IP Blocking: The website might have implemented IP blocking or rate limiting to prevent excessive crawling or to block specific IP addresses.
User Agent Restrictions: The website might restrict access to specific user agents (the identification string sent by the crawler), which can prevent certain crawlers from accessing the site.
Login Requirements: Some parts of the website might require user authentication or a valid session to access. If the crawler doesn't provide the necessary credentials, it could be denied access.
Security Measures: The website might have security measures in place that block access from known crawlers or bots to prevent scraping or other malicious activity; the header-inspection sketch after this list can help spot such a layer.
URL Filtering: The server could be configured to deny access to specific URLs or patterns.
CAPTCHA Challenges: Some websites use CAPTCHA challenges to verify that the request is coming from a human user. Crawlers may not be able to solve these challenges.
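Before working through fixes, the response headers can hint at which layer is refusing the request, since a CDN or WAF such as Cloudflare usually identifies itself. A minimal sketch, assuming the requests library and a placeholder URL:

```python
import requests

URL = "https://www.example.com/flagged-page"  # placeholder

response = requests.get(URL, timeout=10)
print("Status:", response.status_code)

# Headers that often reveal a CDN or web application firewall in front of
# the origin server (Cloudflare, for example, sets Server and CF-RAY).
for header in ("Server", "Via", "X-Cache", "CF-RAY"):
    if header in response.headers:
        print(f"{header}: {response.headers[header]}")
```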
To address a "403 Forbidden" error in a crawl report, you can take the following steps:
Check Permissions: Ensure that the files and directories being accessed by the crawler have the correct permissions set on the server.
IP Whitelisting: If you are the website owner, consider whitelisting the IP address of the crawler if you want it to have access.
User Agent: If you are the crawler operator, ensure that your crawler uses a legitimate and recognizable user agent. Some websites might block unidentified user agents.
Authentication: If the website requires authentication, provide the necessary credentials in the crawler's requests.
Respect robots.txt: Make sure your crawler follows the rules specified in the website's robots.txt file to avoid accessing restricted areas (a quick way to check this is sketched after this list).
Contact Website Owner: If you are encountering "403 Forbidden" errors on someone else's website, consider reaching out to the website owner to clarify the access restrictions.
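On the robots.txt point, Python's standard library can check whether a given URL is allowed for a particular user agent. A minimal sketch with placeholder URLs:

```python
from urllib.robotparser import RobotFileParser

# Placeholder site; substitute the real domain and page.
robots = RobotFileParser()
robots.set_url("https://www.example.com/robots.txt")
robots.read()  # fetch and parse robots.txt

for agent in ("rogerbot", "Googlebot"):
    allowed = robots.can_fetch(agent, "https://www.example.com/flagged-page")
    print(f"{agent}: {'allowed' if allowed else 'disallowed'}")
```

Note that robots.txt disallows usually surface as "blocked" rather than 403 in crawl reports, so this mainly rules one cause in or out.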
Remember to always follow ethical crawling practices and respect website terms of use when crawling or scraping content from the internet.
-
A "403 creep blunder" commonly alludes to a status code that is returned by a web server when a web crawler or a client is endeavoring to get to a specific page or asset, yet they don't have the important consents to do as such. The HTTP status code "403 Illegal" shows that the server grasped the solicitation, however it will not approve it.
There are a couple of normal purposes behind experiencing a "403 Prohibited" mistake while creeping a site:
Inadequate Authorizations: The web server might require legitimate confirmation or approval to get to specific pages or catalogs. On the off chance that the crawler's certifications are not legitimate or missing, a "403 Prohibited" blunder can happen.
IP Impeding or Rate Restricting: Assuming the server identifies extreme solicitations from a specific IP address in a brief timeframe, it could obstruct that IP address for a brief time or uphold rate restricting to forestall misuse. This can prompt a "403 Illegal" mistake for ensuing solicitations.
Misconfigured Server Authorizations: At times, the server's record or registry consents may be set inaccurately, prompting specific documents or indexes being blocked off. This can set off a "403 Prohibited" mistake while attempting to get to those assets.
Content Limitation: Sites could have specific regions that are intended to be confined to explicit clients or gatherings. On the off chance that the client or crawler doesn't have the important honors, they will get a "403 Illegal" mistake while attempting to get to these areas.
Web Application Firewall (WAF): A few sites use WAFs to safeguard against vindictive exercises. On the off chance that the WAF recognizes the slithering way of behaving as dubious or unapproved, it could obstruct the entrance with a "403 Taboo" mistake.
To investigate and determine a "403 slither mistake," you can attempt the accompanying advances:
Actually look at Consents: Guarantee that the client specialist or crawler you are utilizing has the fitting authorizations to get to the assets on the site.
Survey IP Obstructing and Rate Cutoff points: Assuming that you're being rate-restricted or hindered, you could have to change your creeping conduct or contact the site overseer to whitelist your IP address.
Look at URL and Boundaries: Twofold check that the URLs and any boundaries you are involving in your solicitations are accurately arranged and substantial.
Authentication: Assuming that the site requires validation, ensure you are giving the right qualifications in your solicitations.
Contact Site Chairman: Assuming you accept the issue is on the site's side, contacting the site executive or specialized help could help in settling the issue.
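If rate limiting turns out to be the cause, slowing the crawl and backing off on refusals usually resolves it. A minimal sketch, assuming the requests library and placeholder URLs:

```python
import time
import requests

URLS = [  # placeholders
    "https://www.example.com/page-1",
    "https://www.example.com/page-2",
]

def fetch_politely(urls, delay=2.0, max_retries=3):
    """Fetch URLs with a pause between requests and exponential backoff
    whenever the server answers 403 or 429."""
    results = {}
    for url in urls:
        for attempt in range(max_retries):
            response = requests.get(url, timeout=10)
            if response.status_code not in (403, 429):
                break
            # Wait longer after each refusal before retrying.
            time.sleep(delay * 2 ** attempt)
        results[url] = response.status_code
        time.sleep(delay)  # stay polite between distinct URLs
    return results

print(fetch_politely(URLS))
```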
Remember to crawl responsibly and respect the website's terms of use.