Cannot crawl website with redirect installed on subdomain url
-
Hi!
I want to crawl this website: http://www.car-moderne.ch.
I tried, and the crawl came back with just that one URL rather than all the pages of the website. The single-line CSV says the status of http://www.car-moderne.ch is 200, but it is actually a 301 redirect to http://www.car-moderne.ch/fr, where the live home page is (the MozBar sees the 301, not the 200 that the single-line crawl reports).
How can I proceed in this case (with a 301 redirect installed on the subdomain URL) so I can still get a full-fledged CSV with all the broken links, duplicate content, etc.?
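For reference, the mismatch comes down to whether the client follows redirects: a redirect-following crawl reports the final 200, while a single-hop check (like the MozBar) reports the 301 on the first hop. A minimal sketch in pure Python, using simulated responses rather than live requests (the exact server behavior is what's in question here):

```python
# Simulated HTTP responses keyed by URL: (status, Location header or None).
# These mirror the redirect described in this thread; no live requests are made.
RESPONSES = {
    "http://www.car-moderne.ch": (301, "http://www.car-moderne.ch/fr"),
    "http://www.car-moderne.ch/fr": (200, None),
}

def fetch_chain(url, follow_redirects=True, max_hops=5):
    """Return the list of (url, status) hops a client would observe."""
    chain = []
    for _ in range(max_hops):
        status, location = RESPONSES[url]
        chain.append((url, status))
        if not follow_redirects or location is None:
            return chain
        url = location
    raise RuntimeError("too many redirects")

# A redirect-following client ends on the final 200:
print(fetch_chain("http://www.car-moderne.ch")[-1])
# A single-hop check reports the 301 on the first hop:
print(fetch_chain("http://www.car-moderne.ch", follow_redirects=False)[0])
```

Both views are "correct"; they just describe different hops of the same chain, which is why two tools can report different status codes for the same URL.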
Thank you for your help!
Pascal Hämmerli
-
So glad to help, Pascal!
-
Dear Chiaryn,
Thank you for your very helpful reply.
This website is hosted by a partner agency that created it; I only act as an SEO consultant for them. What you say is very helpful, because it means their home-made CMS should be corrected to provide proper 301 redirection.
I wish you a good day,
Pascal
-
Hey Pascal,
Sorry for the confusion here! It looks like the subdomain, www.car-moderne.ch, returns a 200 HTTP status to our crawler and to other crawlers, such as the hurl.it tool. In the screenshot I attached from hurl.it, the only content in the response body is the number 404, so the site is essentially serving a page with no crawlable data. The page isn't redirecting and doesn't return any real source code, so there is no data for us to include in the crawl. I would recommend working with your webmaster to resolve this issue and get the page to correctly serve a 301 redirect to the /fr version of the site for all crawlers.
I can see that the site is correctly responding with a 301 redirect for some crawlers, such as a test I ran as Googlebot, but the response doesn't seem to be consistent. One thing to have your webmaster check is how the site responds to user agents hosted on Amazon Web Services, as some of our crawlers and the hurl.it crawl are both hosted on AWS.
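To illustrate the kind of inconsistency described above, here is a hypothetical simulation of a misconfigured server that 301s known search-engine user agents but serves a bare 200 page to everyone else. This is purely for illustration; it is not the site's actual code, and the Moz crawler user-agent string shown is an assumption:

```python
def respond(user_agent):
    """Return (status, location) the way a misconfigured server might,
    varying its answer by user agent instead of redirecting everyone."""
    if "Googlebot" in user_agent:
        return (301, "http://www.car-moderne.ch/fr")  # correct redirect
    return (200, None)  # bare 200 page with no crawlable content

# Hypothetical user-agent strings for the clients mentioned in this thread:
agents = [
    "Mozilla/5.0 (compatible; Googlebot/2.1)",
    "rogerbot/1.0 (AWS-hosted crawler)",
    "hurl.it",
]
for ua in agents:
    print(ua, "->", respond(ua))
```

A correctly configured server should return the same 301 to every client; any branching on user agent like the above is exactly what the webmaster needs to remove.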
Once the issue of the HTTP response is resolved, you should be able to get much better data from the crawl test tool.
I hope this helps! Please let me know if I can help you with anything else.
Chiaryn