Unsolved Ooops. Our crawlers are unable to access that URL
-
Hello,
I have entered my site faroush.com but I got this error:
Ooops. Our crawlers are unable to access that URL - please check to make sure it is correct
What is the problem?
-
I'm encountering the same problem with my website, CFMS Bill Status. It seems my main website is completely inaccessible to web crawlers. I've investigated all the likely causes, such as server configuration, robots.txt restrictions, and security measures, but still haven't found a clue.
-
Have you tried the steps I suggested earlier, like checking your settings?
-
Make sure your website is publicly accessible and isn't blocked by any security settings. Try opening it from different devices and networks to confirm it loads. Also check whether your site's configuration is preventing search engines from crawling it: look for rules in your robots.txt file that might block crawlers, and if you find any, make sure they aren't stopping search engines from accessing your site.
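One way to sanity-check a robots.txt file locally is Python's standard `urllib.robotparser`. This is a sketch: the robots.txt content, `example.com`, and the paths below are placeholders, not your site's actual rules. (`rogerbot` is the user-agent Moz's crawler identifies itself with.)

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content -- substitute your site's real file.
robots_txt = """\
User-agent: *
Disallow: /private/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# Check whether Moz's crawler may fetch a given URL under these rules.
print(rp.can_fetch("rogerbot", "https://example.com/private/page"))  # False
print(rp.can_fetch("rogerbot", "https://example.com/"))              # True
```

If `can_fetch` returns False for pages you expect to be crawled, the robots.txt rules are the culprit.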
-
I am getting the same error on my website, Apne TV. It's been 7 days now and I keep getting the same error again and again.
Related Questions
-
Unsolved CSV export does not work
Export to CSV of spam links or other links does not work. Can you help me? I already posted this question, but it seems it was deleted. Without this option, I don't see why I should continue my Moz subscription.
Product Support | | netcomsia
-
Unsolved Old Account Data
I've just got our account live again after the old payment card expired. Now that I'm back in the account, I don't see my old setup and all the sites / keywords I previously had set up. Can you help?
Moz Pro | | Paul_Coupe
-
Broken URL Links
Hi everyone, I have a question regarding broken URL links on my website. Late last year I move my site from an old platform to Shopify, and now have broken URL links giving out 4xx errors. When I look at Moz Pro>Campaigns>Insights>links, I can see the top broken URL links, however there is a difference if copy & paste URL directly from Moz Pro and by Export CSV file. For example below, If I copy and paste links direct from Moz Pro, it has the “http://” in front as below: http://www.thehairhub.com.au/WebRoot/ecshared01/Shops/thehairhub/57F3/1D8F/D244/C675/E27D/AC10/003F/35AD/manic-panic-colours.jpg But when I export the list of links as an CSV file, the http:// is removed. www.thehairhub.com.au/WebRoot/ecshared01/Shops/thehairhub/57F3/1D8F/D244/C675/E27D/AC10/003F/35AD/manic-panic-colours.jpg Another Example below: By copy & paste URL direct from Moz Pro
Technical SEO | | johnwall
http://thehairhub.com.au/Shop-Brands/Vitafive-CPR/CPR-Rescue By export CSV file.
thehairhub.com.au/Shop-Brands/Vitafive-CPR/CPR-Rescue Which one do I use to enter into the “Redirect From” field in Shopify URL Redirects? Do I need to have the http:// in front of the URL? Or is it not required for redirects to work? Kind Regards, John Wall
The Hair Hub0 -
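As I understand it, Shopify's "Redirect from" field expects a relative path (e.g. /Shop-Brands/...) rather than a full URL, so either Moz export format can be reduced to the same path. Here is a sketch using Python's standard `urllib.parse`; the `to_redirect_path` helper name is mine, and it assumes exported rows differ only in whether the scheme is present.

```python
from urllib.parse import urlsplit

def to_redirect_path(url: str) -> str:
    """Reduce a full or scheme-less URL to its path for a redirect rule."""
    # Exported CSVs may drop the scheme; add one so urlsplit sees the host.
    if "://" not in url:
        url = "https://" + url
    return urlsplit(url).path or "/"

# Both variants from the Moz export normalize to the same path.
print(to_redirect_path("http://thehairhub.com.au/Shop-Brands/Vitafive-CPR/CPR-Rescue"))
print(to_redirect_path("thehairhub.com.au/Shop-Brands/Vitafive-CPR/CPR-Rescue"))
# Both print: /Shop-Brands/Vitafive-CPR/CPR-Rescue
```

Verify against Shopify's own documentation before bulk-importing, since field requirements can change.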
Cancellation questions? Ask them here!
We handle cancellation concerns individually through our help team! Please reach out to them directly via email at help@moz.com! We also have a help article that walks through exactly how you can cancel or make changes to your accounts: https://moz.com/help/your-account/manage-subscriptions/cancel-moz-pro Thanks!
Product Support | | HayleyBowyer0 -
/essions/essions keeps appending to one URL on our website
Moz keeps giving us an error showing the URL is too long. When I investigate the offending URL, I see the following in the crawl. We can't work out what /essions is or why it keeps appending to the end of the URL. Is this a Moz or website issue?
https://www.mywebsite/singita-lebombo-lodge/essions/essions/essions/
https://www.mywebsite/singita-lebombo-lodge/essions/essions/essions/essions/
https://www.mywebsite/singita-lebombo-lodge/essions/essions/essions/essions/essions/
https://www.mywebsite/singita-lebombo-lodge/essions/essions/essions/essions/essions/essions/
https://www.mywebsite/singita-lebombo-lodge/essions/essions/essions/essions/essions/essions/essions/
https://www.mywebsite/singita-lebombo-lodge/essions/essions/essions/essions/essions/essions/essions/essions/
https://www.mywebsite/singita-lebombo-lodge/essions/essions/essions/essions/essions/essions/essions/essions/essions/
https://www.mywebsite/singita-lebombo-lodge/essions/essions/essions/essions/essions/essions/essions/essions/essions/essions/
https://www.mywebsite/singita-lebombo-lodge/essions/essions/essions/essions/essions/essions/essions/essions/essions/essions/essions/
https://www.mywebsite/singita-lebombo-lodge/essions/essions/essions/essions/essions/essions/essions/essions/essions/essions/essions/essions/
Moz Pro | | NickWillWright
-
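A common cause of this growing-URL pattern is a malformed relative link on the page (for example, an href like "essions/" where the first character was lost): because the page URL ends in a slash, the browser and crawler resolve the link relative to the current page, producing a deeper URL on every hop. A quick demonstration with Python's standard `urljoin` (example.com stands in for the real domain):

```python
from urllib.parse import urljoin

# A relative href "essions/" on a page whose URL ends in "/" resolves
# to a *deeper* URL each time the crawler follows it.
url = "https://example.com/singita-lebombo-lodge/"
for _ in range(3):
    url = urljoin(url, "essions/")
    print(url)
# https://example.com/singita-lebombo-lodge/essions/
# https://example.com/singita-lebombo-lodge/essions/essions/
# https://example.com/singita-lebombo-lodge/essions/essions/essions/
```

If that is what's happening, it's a website issue: view the page source for the lodge page and search for a broken relative href.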
Block Moz (or any other robot) from crawling pages with specific URLs
Hello! Moz reports that my site has around 380 duplicate page content. Most of them come from dynamic generated URLs that have some specific parameters. I have sorted this out for Google in webmaster tools (the new Google Search Console) by blocking the pages with these parameters. However, Moz is still reporting the same amount of duplicate content pages and, to stop it, I know I must use robots.txt. The trick is that, I don't want to block every page, but just the pages with specific parameters. I want to do this because among these 380 pages there are some other pages with no parameters (or different parameters) that I need to take care of. Basically, I need to clean this list to be able to use the feature properly in the future. I have read through Moz forums and found a few topics related to this, but there is no clear answer on how to block only pages with specific URLs. Therefore, I have done my research and come up with these lines for robots.txt: User-agent: dotbot
Moz Pro | | Blacktie
Disallow: /*numberOfStars=0 User-agent: rogerbot
Disallow: /*numberOfStars=0 My questions: 1. Are the above lines correct and would block Moz (dotbot and rogerbot) from crawling only pages that have numberOfStars=0 parameter in their URLs, leaving other pages intact? 2. Do I need to have an empty line between the two groups? (I mean between "Disallow: /*numberOfStars=0" and "User-agent: rogerbot")? (or does it even matter?) I think this would help many people as there is no clear answer on how to block crawling only pages with specific URLs. Moreover, this should be valid for any robot out there. Thank you for your help!0 -
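To sanity-check what a wildcard rule like `Disallow: /*numberOfStars=0` would match, here is a small sketch of the Google-style matching convention (where `*` matches any run of characters and the pattern is anchored at the start of the path). The function name and example paths are mine, and this assumes the crawler in question honors Google-style wildcards; check each bot's own documentation to be sure.

```python
import re

def blocked(disallow_pattern: str, path_and_query: str) -> bool:
    # Translate each '*' to '.*' and anchor the pattern at the start of
    # the path, per the wildcard convention popularized by Googlebot.
    regex = "^" + ".*".join(re.escape(part) for part in disallow_pattern.split("*"))
    return re.search(regex, path_and_query) is not None

rule = "/*numberOfStars=0"
print(blocked(rule, "/hotels?numberOfStars=0"))  # True  -- would be blocked
print(blocked(rule, "/hotels?numberOfStars=3"))  # False -- still crawlable
print(blocked(rule, "/hotels"))                  # False -- still crawlable
```

Under that convention, the rule blocks only URLs containing `numberOfStars=0` and leaves everything else crawlable, which is the behavior the question is after.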
Order of urls in SEOMoz crawl report
Is there any rhyme or reason to the order of URLs in the SEOMoz crawl report, or are the URLs just listed in random order?
Moz Pro | | LynnMarie
-
How to read the crawler's downloaded report
I am trying to separate the duplicate title and description URLs. Looking at the report, I can't work out how to find all the URLs that share the same title and description. Is there a video on the site that walks me through each part of the report? Thanks, Punam
Moz Pro | | nonlinearcreations
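One way to find URLs sharing a title and description is to group the exported CSV rows by that pair. A sketch with Python's standard `csv` module follows; the column names "URL", "Title", and "Meta Description" are hypothetical stand-ins, so adjust them to match the headers in the actual Moz export.

```python
import csv
from collections import defaultdict
from io import StringIO

# Tiny inline stand-in for the exported report; in practice use
# open("crawl_export.csv") instead of StringIO.
data = StringIO("""\
URL,Title,Meta Description
/a,Home,Welcome
/b,Home,Welcome
/c,About,Our story
""")

groups = defaultdict(list)
for row in csv.DictReader(data):
    groups[(row["Title"], row["Meta Description"])].append(row["URL"])

# Report only the duplicate groups: same title AND description, 2+ URLs.
for (title, desc), urls in groups.items():
    if len(urls) > 1:
        print(f"{title!r} / {desc!r}: {urls}")
```

Each printed group lists every URL that would be flagged as a duplicate of the others.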