Moz Q&A is closed.
After more than 13 years, and tens of thousands of questions, Moz Q&A closed on 12th December 2024. While we're not completely removing the content (many posts will still be viewable), we have locked both new posts and new replies.
Crawling issue
-
Hello,
I have added the campaign IJsfabriek Strombeek (ijsfabriekstrombeek.be) to my account. After the website had been crawled, it showed only 2 crawled pages, but this site has over 500 pages. It is divided into four versions: Dutch, French, English, and German. I thought that could be the issue, because I had only filled in the root domain ijsfabriekstrombeek.be, so I created another campaign named ijsfabriekstrombeek with the URL ijsfabriekstrombeek.be/nl. When Moz crawled this one, I got the following remark:
**Moz was unable to crawl your site on Feb 21, 2018.** Your page redirects or links to a page that is outside of the scope of your campaign settings. Your campaign is limited to pages with ijsfabriekstrombeek.be/nl in the URL path, which prevents us from crawling through the redirect or the links on your page. To enable a full crawl of your site, you may need to create a new campaign with a broader scope, adjust your redirects, or add links to other pages that include ijsfabriekstrombeek.be/nl. Typically errors like this should be investigated and fixed by the site webmaster.
I have checked the robots.txt and it is fine. There are also no robots meta tags in the code, so what could be the problem? I really need to see an overview of all the pages on the website, so I can use Moz for the purpose I described: SEO improvement. Please get back to me soon. Is there a possibility that someone could sort out this issue through 'Join me'?
Thanks
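The scope error above comes down to URL matching: a campaign limited to ijsfabriekstrombeek.be/nl will not follow a redirect or a link that leads to ijsfabriekstrombeek.be/fr or to the bare domain. A minimal sketch of how such a scope check might behave (the `in_scope` helper is a hypothetical reconstruction from the wording of the error message, not Moz's actual implementation):

```python
from urllib.parse import urlparse

def in_scope(url: str, scope: str) -> bool:
    """Return True if the URL's host + path starts with the campaign scope.

    Reconstruction of the rule described in the error text ("limited to
    pages with ijsfabriekstrombeek.be/nl in the URL path"); hypothetical.
    """
    parsed = urlparse(url)
    host_and_path = parsed.netloc.removeprefix("www.") + parsed.path
    return host_and_path.startswith(scope)

scope = "ijsfabriekstrombeek.be/nl"
print(in_scope("https://www.ijsfabriekstrombeek.be/nl/producten", scope))  # True
print(in_scope("https://www.ijsfabriekstrombeek.be/fr/produits", scope))   # False
print(in_scope("https://www.ijsfabriekstrombeek.be/", scope))              # False
```

If the homepage redirects to a language root such as /nl, a campaign scoped to the bare domain covers all four language sections, whereas one scoped to /nl stops at the first cross-language link or redirect.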
-
Perfect. Thanks!
-
Hey!
I see that you have contacted us directly at help@moz.com. Thanks!
We will take a look at your Campaign and get back to you ASAP.
Eli
Related Questions
-
Unsolved Issue with API Results and Excessive Token Consumption
Dear Moz Support Team,
I hope this message finds you well. I am writing to express my concerns regarding the performance of the Moz API, which I have been using for domain information retrieval through Zapier. Despite paying for your API service, I am encountering several significant issues that are affecting my ability to use the service effectively:
- Incorrect and inconsistent data: When I query domain information, I receive incorrect results for Page Authority (PA), Domain Authority (DA), and Spam Score (SS). Specifically, PA and DA are always returned as 1, while SS is consistently -1, which indicates that the data being provided is either incomplete or incorrect. This discrepancy is preventing me from relying on your service for accurate domain insights.
- Excessive token consumption: I have noticed that the API token usage is significantly higher than expected. After querying only 250 domains, my token consumption has already exceeded 6,000 tokens, which seems unusually high. I had understood that the token usage would be calculated per query, but it appears that the consumption is much higher than anticipated.
Given that I have invested in this service, I am frustrated that I am not receiving the expected level of performance and data accuracy. Could you please investigate these issues and provide clarity on why this is happening? Additionally, I would appreciate any guidance on how to resolve these discrepancies and ensure that I am using the API in the most efficient way possible.
I look forward to your prompt response and assistance in resolving these issues. Thank you.
Product Support | tkddh13230 -
Solved Payment Issue in Subscription Upgrade
Hi, I’m trying to upgrade from the Starter to the Standard plan, but I keep encountering an error: "Failed to send subscription update to Stripe." Could you please assist me in making the switch?
Product Support | Devesh_Chauhan -
Unsolved Crawling only the Home of my website
Hello,
I don't understand why Moz crawls only the homepage of our website https://www.modelos-de-curriculum.com. We added the website correctly and asked for all pages to be crawled, but the tool finds only the homepage. Why? We are testing the tool before subscribing, but we need to be sure that it works for our website. Please help us if you can.
Product Support | Azurius -
Website can't be crawled
Hi there, One of our websites can't be crawled. We did get the error emails from you (Moz), but we can't find the solution. Can you please help me? Thanks, Tamara
Product Support | Yenlo -
Site Crawl Status code 430
Hello, In the site crawl report we have a few pages with status code 430, but that's not a valid HTTP status code (https://en.wikipedia.org/wiki/List_of_HTTP_status_codes#4xx_Client_errors). What does this mean or refer to? If I visit the URL from the report I get a 404 response code. Is this a bug in the site crawl report? Thanks, Ian.
Product Support | ianatkins -
Crawl error robots.txt
Hello, when trying to access the site crawl to analyze our page, the following error appears: **Moz was unable to crawl your site on Nov 15, 2017.** Our crawler was banned by a page on your site, either through your robots.txt, the X-Robots-Tag HTTP header, or the meta robots tag. Update these tags to allow your page and the rest of your site to be crawled. If this error is found on any page on your site, it prevents our crawler (and some search engines) from crawling the rest of your site. Typically errors like this should be investigated and fixed by the site webmaster. Can you help us? Thanks!
Product Support | Mandiram -
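For errors like the one above, the usual fix is to make sure robots.txt does not disallow Moz's crawler (rogerbot), and that no X-Robots-Tag header or meta robots tag blocks it. A minimal robots.txt that lets rogerbot crawl everything might look like this (a sketch only; `/private/` is a placeholder for whatever the site genuinely needs to keep crawlers out of, and the right rules depend on the site):

```
User-agent: rogerbot
Disallow:

User-agent: *
Disallow: /private/
```

An empty `Disallow:` line means "nothing is disallowed" for that user-agent, which is the original robots.txt way of granting full access.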
How to block Rogerbot From Crawling UTM URLs
I am trying to block rogerbot from crawling some UTM URLs we have created, but I'm having no luck. My robots.txt file looks like:
User-agent: rogerbot
Disallow: /?utm_source*
This does not seem to be working. Any ideas?
Product Support | Firestarter-SEO -
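A likely reason the rule above does nothing useful: under the common Google-style wildcard semantics (which Moz's crawler is generally documented as following), patterns are matched as prefixes of the URL path, so `/?utm_source*` only matches query strings attached directly to the homepage, not UTM parameters on deeper pages. A small matcher sketching those semantics (an illustration of the standard wildcard rules, not rogerbot's actual code):

```python
import re

def robots_match(pattern: str, path: str) -> bool:
    """Google-style robots.txt matching: '*' matches any run of characters,
    and patterns are anchored at the start of the path. A sketch of the
    standard wildcard semantics, not rogerbot's implementation."""
    regex = "^" + ".*".join(re.escape(part) for part in pattern.split("*"))
    return re.search(regex, path) is not None

# The original rule only catches UTM parameters on the homepage:
print(robots_match("/?utm_source*", "/?utm_source=newsletter"))           # True
print(robots_match("/?utm_source*", "/blog/post?utm_source=newsletter"))  # False

# A broader pattern catches utm_source on any path. The trailing '*' in
# the original rule is redundant, since matching is prefix-based anyway:
print(robots_match("/*utm_source=", "/blog/post?utm_source=newsletter"))  # True
```

So `Disallow: /*utm_source=` under `User-agent: rogerbot` is the pattern to try for UTM URLs on any page, not just the root.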
I have removed a subdomain from my main domain and shut it down completely, but the crawl still shows errors for that subdomain. How can I remove them from crawl reports?
Earlier I had a forum as a subdomain, and it was linked from my main domain. However, I have now discontinued the forum and removed all links and mentions of it from my main domain. But the crawler still shows errors for that subdomain. How can I clean up or delete the irrelevant crawl issues? I no longer have the forum and there are no links to it on the main site, but the reports still show crawl errors for a forum that doesn't exist.
Product Support | potterharry