How to get rid of bot verification errors
-
I have a client who sells highly technical products and has lots and lots (a couple of hundred) of PDF datasheets that can be downloaded from their website. But in order to download a datasheet, a user has to register on the site. Once they are registered, they can download whatever they want (I know this isn't a good idea, but it wasn't set up by us and is historical). A Moz crawl of the site came up with a couple of hundred 401 errors. When I investigated, they are all pages with a button to click through to one of these downloads. The Moz error report calls the error "Bot verification".
My questions are:
Are these really errors?
If so, what can I do to fix them?
If not, can I just tell Moz to ignore them, or will this cause bigger problems?
-
@mfrgolfgti This probably has to do with Moz's bots being blocked by the "user registration" requirement for seeing the files.
-
@mfrgolfgti I believe that is happening because Moz's bots are being blocked (by the user-registration requirement) from viewing the files. 401 errors usually indicate a "lack of valid authentication credentials".
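To confirm that the 401s are just the login wall (and therefore expected, rather than broken pages), a quick check like the minimal sketch below shows the difference between an anonymous request, which is all a crawler can make, and an authenticated one. The URL and cookie name are placeholders; substitute values from the actual site.

import requests

# Hypothetical gated datasheet URL -- substitute one flagged in the Moz crawl report.
URL = "https://www.example.com/datasheets/product-123.pdf"

# An anonymous request, like the one Moz's crawler makes, should return 401 Unauthorized.
anon = requests.get(URL, timeout=10)
print("Anonymous request:", anon.status_code)  # expected: 401

# The same request with a logged-in session cookie should succeed.
# Cookie name and value are placeholders -- copy them from a real browser session.
authed = requests.get(URL, cookies={"sessionid": "YOUR-SESSION-COOKIE"}, timeout=10)
print("Authenticated request:", authed.status_code)  # expected: 200

If the anonymous request comes back 401 and the authenticated one succeeds, the "errors" are just the registration requirement doing its job.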
Related Questions
-
Unsolved Moz can't crawl my site
Moz is being blocked from crawling the following site - https://www.cleanchain.com. When looking at robots.txt, the following rules disallow access, but I don't know whether this is preventing Moz from crawling too:

User-agent: *
Disallow: /adeci/
Disallow: /core/
Disallow: /connectors/
Disallow: /assets/components/

Could something else be preventing the crawl?
Moz Pro | | danhart2020
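One way to test whether those Disallow rules are what is stopping Moz is to run them through Python's standard-library robots.txt parser against rogerbot, the user-agent Moz's crawler identifies itself with. A minimal sketch (the paths are examples):

from urllib.robotparser import RobotFileParser

# Fetch and parse the live robots.txt.
parser = RobotFileParser("https://www.cleanchain.com/robots.txt")
parser.read()

# Test a few representative URLs against Moz's crawler user-agent.
for path in ("/", "/adeci/example-page", "/core/example-page"):
    url = "https://www.cleanchain.com" + path
    verdict = "allowed" if parser.can_fetch("rogerbot", url) else "blocked"
    print(url, "->", verdict)

Since those four Disallow lines only cover specific directories, they should not block a crawl of the rest of the site; if normal pages come back as allowed, something else (a firewall or bot-protection layer, for instance) is the likelier culprit.
-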
Google Search Console - Excluded Pages and Multiple Properties
I have used Moz to identify keywords that are ideal for my website and then optimized different pages for those keywords, but unfortunately rankings for some of the pages have declined. Since I am working with an ecommerce site, I read that having a lot of Excluded pages in Google Search Console was to be expected, so I initially ignored them. However, some of the pages I was trying to optimize are listed there, especially under the 'Crawled - currently not indexed' and 'Discovered - currently not indexed' sections. I have read this page (link: https://moz.com/blog/crawled-currently-not-indexed-coverage-status) and plan on focusing on Steps 5 & 7, but wanted to ask if anyone else has had experience with these issues.

Also, does anyone know if having multiple properties (https vs http, www vs no www) can negatively affect a site? For example, could a sitemap from one property overwrite another? Would removing one property from the Console have any negative impact on the site? I plan on asking these questions on a Google forum, but I wanted to add them to this post in case anyone here has any insights.

Thank you very much for your time,
Forest
SEO Tactics | | ForestGT
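On the multiple-properties point, a common first check is whether the four scheme/host variants all 301-redirect to one canonical version; sitemaps are submitted per property, so one should not overwrite another. A quick sketch of that check (example.com is a placeholder for your domain):

import requests

# The four hostname/scheme variants that show up as separate Search Console properties.
variants = [
    "http://example.com/",
    "http://www.example.com/",
    "https://example.com/",
    "https://www.example.com/",
]

for url in variants:
    # allow_redirects=False exposes the first hop instead of following the whole chain.
    resp = requests.get(url, allow_redirects=False, timeout=10)
    print(url, "->", resp.status_code, resp.headers.get("Location", "(no redirect)"))

Ideally three of the four report a 301 pointing at the one canonical variant.
-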
Unsolved Halkdiki Properties
Hello,
I have a question about the Site Crawl: Content Issues segment. I have an e-shop, and Moz is flagging a problem because my URLs are too similar and my H1s are the same:

<title>Halkdiki Properties
https://halkidikiproperties.com/en/properties?property_category=1com&property_subcategory=&price_min_range=&price_max_range=&municipality=&area=&sea_distance=&bedroom=&hotel_bedroom=&bathroom=&place=properties&pool=&sq_min=&sq_max=&year_default=&fetures=&sort=1&length=12&ids=

<title>Halkdiki Properties
https://halkidikiproperties.com/en/properties?property_category=2&property_subcategory=&price_min_range=0&price_max_range=0&municipality=&area=&sea_distance=&bedroom=0&hotel_bedroom=0&bathroom=0&place=properties&pool=0&sq_min=&sq_max=&year_default=&fetures=&sort=1&length=12&ids=

Can someone help? Is this a big problem, or can I ignore it? Thank you.
Moz Pro | | TheoVavdinoudis
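For what it's worth, faceted-search URLs like these are a common source of near-duplicate warnings: the parameter strings differ, but the page is essentially the same. A typical remedy is a rel=canonical tag pointing at a cleaned-up URL (plus a unique title per category). A minimal sketch of that normalization, using a trimmed version of the first URL above:

from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

def canonical(url):
    # Drop query parameters whose values are empty, keeping the rest in order.
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True) if v]
    return urlunsplit(parts._replace(query=urlencode(kept)))

# Trimmed version of the first URL above, for readability.
url = ("https://halkidikiproperties.com/en/properties"
       "?property_category=1&property_subcategory=&price_min_range="
       "&price_max_range=&sort=1&length=12&ids=")
print(canonical(url))
# -> https://halkidikiproperties.com/en/properties?property_category=1&sort=1&length=12
-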
I have over 3000 4xx errors on my site for pages that don't exist! Please help!
Hello! I have a new blog that is only 1 month old, and I already have over 3000 4xx errors, which I've never had on my previous blogs. I ran a crawl on my site and it's showing my social media links as being indexed as pages. For example, my blog post link is:
https://www.thebloggersincentive.com/blogging/get-past-a-creative-block-in-blogging/
My site is then creating a link like the below:
https://www.thebloggersincentive.com/blogging/get-past-a-creative-block-in-blogging/twitter.com/aliciajthomps0n
But these are not real pages and I have no idea how they got created. I then paid someone to index the links because I was advised by Moz, but it's still not working. All the errors are the same: it's indexing my Twitter account and my Pinterest. Can someone please help? I'm really at a loss with it.
Technical SEO | | thebloggersi
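Those phantom URLs look like the classic symptom of social links written without a scheme (href="twitter.com/..." rather than href="https://twitter.com/..."): browsers and crawlers resolve such links relative to the current page, which manufactures exactly the kind of URL shown above. A quick demonstration with the URLs from the question:

from urllib.parse import urljoin

page = "https://www.thebloggersincentive.com/blogging/get-past-a-creative-block-in-blogging/"

# A social link written without a scheme is treated as a relative path...
print(urljoin(page, "twitter.com/aliciajthomps0n"))
# -> https://www.thebloggersincentive.com/blogging/get-past-a-creative-block-in-blogging/twitter.com/aliciajthomps0n

# ...while a fully qualified link resolves as intended.
print(urljoin(page, "https://twitter.com/aliciajthomps0n"))
# -> https://twitter.com/aliciajthomps0n

If that is the cause, adding the https:// prefix to the social links in the theme or plugin should stop new 4xx pages from appearing.
-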
Massive Increase in 404 Errors in GWT
Last June, we transitioned our site to the Magento platform. When we did so, we naturally got an increase in 404 errors for URLs that were not redirected (for a variety of reasons: we hadn't carried the product for years, Google no longer got the same string when it did a "search" on the site, etc.). We knew these would be there and were completely fine with them.

We also got many 404s due to the way Magento had implemented their site map (putting in products that were not visible to customers, including all the different file paths to get to a product even though we use a flat structure, etc.). These were frustrating, but we did custom work on the site map and let Google resolve those many, many 404s on its own. Sure enough, a few months went by and GWT started to clear out the 404s. All the poor, nonexistent links from the site map and missing links from the old site started disappearing from the crawl notices, and we slowly went from some 20k 404s to 4k 404s. Still a lot, but we were getting there.

Then, in the last 2 weeks, all of those links started showing up again in GWT and reporting as 404s. Now we have 38k 404s (way more than ever reported). I confirmed that these bad links are not showing up in our site map or anything, and I'm really not sure how Google found them again. I know, in general, these 404s don't hurt our site. But it just seems so odd. Is there any chance Google bots just randomly crawled a big ol' list of outdated links they hadn't tried for a while? And does anyone have any advice for clearing them out?
Technical SEO | | Marketing.SCG
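If it helps, one way to triage a list that size is to export the flagged URLs from GWT and bucket their current status codes before deciding what to redirect and what to leave as 404 (or serve as 410). A sketch, assuming a CSV export with a "URL" column; the filename and header are placeholders:

import csv
from collections import Counter

import requests

# Read the flagged URLs from a crawl-errors CSV export.
with open("gwt-404-export.csv", newline="") as f:
    urls = [row["URL"] for row in csv.DictReader(f)]

# Bucket the status code each URL returns today.
statuses = Counter()
for url in urls:
    try:
        resp = requests.head(url, allow_redirects=True, timeout=10)
        statuses[resp.status_code] += 1
    except requests.RequestException:
        statuses["request failed"] += 1

print(statuses)  # e.g. Counter({404: 37000, 301: 800, 200: 200})
-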
What does this error mean?
We recently merged our Google+ and Google Local pages and sent a request via Webmaster Tools to connect the Google+ page to our website. The message was successfully sent. However, when clicking the 'Approve or reject this request' link, the following error message appears: 'Can't find associate request'. Anyone know what we are doing incorrectly? Thanks in advance for any insight.
Technical SEO | | SEOSponge
-
How do crawl errors from the SEOmoz tool set affect rankings?
Hello - The other day I presented the crawl diagnostic report to a client. We identified duplicate page title errors, missing meta description errors, and duplicate content errors. After reviewing the report, we presented it to the client's web company, which operates a closed-source CMS. Their response was that these errors are not worth fixing and in fact are not hurting the site. We are having trouble getting the errors fixed, and I would like your opinion on this matter.

My question is: how bad are these errors? Should we not fix them? Should they be fixed? Will fixing the errors have an impact on our site's rankings? Personally, I think the question is silly. I mean, the errors were found using the SEOmoz tool kit; these errors have to be affecting SEO... right?

The attached image is the result of the Crawl Diagnostics that crawled 1,400 pages.

NOTE: Most of the errors are coming from pages like:
blog/archive/2011-07/page-2
/blog/category/xxxxx-xxxxxx-xxxxxxx/page-2
testimonials/147/xxxxx--xxxxx
(xxxx represents information unique to the client)

Thanks for your insight!
Technical SEO | | Gabe
-
Link API returns Error 500
http://lsapi.seomoz.com/linkscape/links/nz.yahoo.com?SourceCols=4&Limit=100&Sort=domain_authority&Scope=domain_to_domain&Filter=external+follow&LinkCols=4

Hi folks, any idea why the above returns Error 500? It seems to pertain to the domain - it works on other sites, just not nz.yahoo.com. Thanks!
Technical SEO | | jimbo_kemp
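If it is still reproducible, printing whatever body comes back with the 500 may narrow down the cause; a minimal sketch using the URL from the question:

import requests

# URL copied verbatim from the question above.
url = ("http://lsapi.seomoz.com/linkscape/links/nz.yahoo.com"
       "?SourceCols=4&Limit=100&Sort=domain_authority"
       "&Scope=domain_to_domain&Filter=external+follow&LinkCols=4")

resp = requests.get(url, timeout=30)
print(resp.status_code)
print(resp.text[:500])  # the error body, if any, often hints at what went wrong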