"Moz encountered an error on one or more pages on your site" Error
-
I have been receiving this error for a while:
"Moz encountered an error on one or more pages on your site"
It's a multilingual WordPress website, the robots.txt is set to allow crawlers on all links, and I have followed the same process as for other websites I've done, yet I'm receiving this error for this site.
-
If you want to be doubly sure, you can try Fetch as Google (under Crawl) in Search Console.
You can also try crawling with a third-party tool such as Screaming Frog (https://www.screamingfrog.co.uk/seo-spider/).
If these respond appropriately, it may be worth trying a crawl again later (as the Crawl Test suggests) - it may just be a temporary glitch in the matrix.
Update us if things don't settle down or you get unexpected results following above.
Don't forget the HELP! button's there if you need it (it's right next to the PANIC! button on your keyboard): https://moz.com/help/contact
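Before re-running the crawl, it's also worth double-checking that your robots.txt really does allow Moz's crawler (rogerbot). A minimal sketch using Python's standard-library robots.txt parser - the file contents and URL below are placeholders, paste in your own:

```python
from urllib.robotparser import RobotFileParser

# Paste the contents of your live robots.txt here. This example assumes
# the allow-all file the poster describes (empty Disallow = allow everything).
robots_txt = """\
User-agent: *
Disallow:
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# rogerbot is Moz's crawler; check any URL Moz reported an error on.
print(rp.can_fetch("rogerbot", "https://example.com/fr/some-page/"))  # True
```

If this prints False for a URL Moz complained about, the problem is a robots.txt rule rather than a temporary glitch.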
-
Can you elaborate? What Moz tool were you using at the time (it sounds like a crawl test)?
Can you expand the message to reveal the error?
Is your site hosted on a CMS such as Squarespace, for example?
Is it crawlable using similar external tools?
Do you have any errors within Google Search Console? It's always a good place to start.
Please get back to us with some more details and we'll be better positioned to help.
Cheers.
Related Questions
-
Best way to handle URLs of the to-be-translated pages on a multilingual site
Dear Moz community, I have a multilingual site, and there are pages with content that is supposed to be translated but for now is English only. The structure of the site gives each language its own virtual subdirectory: domain.com/en/page1.html for English, domain.com/fr/page1.html for French, and so on. Obviously, if page1.html is not translated, the URLs point to the same content and I get warnings about duplicate content. I see two ways to handle this situation:
1. Break the naming scheme and link to the original English pages, i.e. instead of having domain.com/fr/index.html link to domain.com/fr/page1.html, have it link to domain.com/en/page1.html.
2. Leave the naming scheme intact and set up a 301 redirect so that /fr/page1.html redirects to /en/page1.html.
Is there any difference between the two methods from an SEO standpoint? Thanks.
Technical SEO | Lomar
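For the second option the poster describes (keep the naming scheme, 301 the untranslated URL to the English original), a minimal sketch - assuming an Apache server and .htaccess access; the paths are illustrative, repeated or pattern-matched per untranslated page:

```apache
# Hypothetical sketch: 301 an untranslated French URL to its English original.
# Remove the rule once the French translation goes live.
Redirect 301 /fr/page1.html /en/page1.html
```

Once a page is actually translated, the redirect comes out and the two URLs legitimately serve different content.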
Pages with a short lifetime... for example, flash-sale ecommerce sites?
Hello everyone, I am managing an ecommerce website and I am not sure what policy to set for many of the product pages. The product pages (for example, a Givenchy bag) go live on a specific date and come down a few days later, like Groupon. Please shed some light in this dark tunnel.
Thanks and regards.
Technical SEO | MTalhaImtiaz
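One common policy for pages that expire permanently is to return 410 Gone, which tells crawlers the page was removed deliberately and can be dropped from the index. A minimal Apache sketch - the path is hypothetical:

```apache
# Hypothetical: tell crawlers an expired flash-sale page is gone for good
Redirect gone /products/givenchy-bag
```

Alternatively, a 301 to the parent category page preserves any link equity the sale page earned.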
My site was not removed from Google, but my most visited page was. What does that mean?
Help. My most important page, http://hoodamath.com/games/, has disappeared from Google, while the rest of my site still remains. I can't find anything about this type of ban. Any help would be appreciated (I would like to sleep tonight).
Technical SEO | hoodamath
Error: Missing Meta Description Tag on pages I can't find in order to correct
This seems silly, but I have errors on blog URLs in our WordPress site that I don't know how to access, because they are not in our dashboard. We are using All in One SEO. The errors are for blog archive dates, authors, and simply 'blog'. Here are samples:
http://www.fateyes.com/2012/10/
http://www.fateyes.com/author/gina-fiedel/
http://www.fateyes.com/blog/
Does anyone know how to input descriptions for pages like these? Thanks!!
Technical SEO | gfiedel
Mega Menus - Site Links - Bottom of the Page
Here are the questions: if you replace your top menu with a mega menu (like rei.com, target.com, etc.) that has dramatically more links and lots of non-optimized testimonials and calls to action, and locate the actual code of the mega menu at the bottom of the HTML, how will this affect your sitelinks? Will this make your on-page content more visible and indexable? Or does Googlebot dismiss this as just navigation content? In the past, I've seen this technique work well, but that was before sitelinks were easier to obtain. Looking at sites with virtually no navigation on their home pages and good authority, I've seen sitelinks seemingly gleaned from alt attributes.
Technical SEO | Runner2009
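The technique described - mega-menu markup at the bottom of the HTML source, rendered at the top of the page via CSS - can be sketched as follows; all ids and styles here are hypothetical:

```html
<!DOCTYPE html>
<html>
<head>
  <style>
    /* Reserve room so the absolutely positioned menu doesn't cover content */
    body { position: relative; padding-top: 60px; }
    /* Menu is last in the source but rendered at the top of the viewport */
    #mega-menu { position: absolute; top: 0; left: 0; width: 100%; }
  </style>
</head>
<body>
  <main id="content">
    <!-- primary on-page copy appears first in source order -->
  </main>
  <nav id="mega-menu">
    <!-- mega-menu links, testimonials, calls to action -->
  </nav>
</body>
</html>
```

Whether reordering the source actually changes how Google weighs the content versus the navigation is the open question here.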
Is using a customer quote on multiple pages duplicate content?
Is there any risk with placing the same customer quote (3-4 sentences) on multiple pages on your site?
Technical SEO | Charlessipe
Why do I have one page showing as two URLs?
My SEOmoz stats show that I have duplicate titles for the following two URLs: http://www.rmtracking.com/products.php and http://www.rmtracking.com/products. I have checked my server files, and I don't see a live page without the .php. A while back, we converted our site from HTML to PHP; the HTML pages have 301s, and as you can see, the page without the .php properly redirects to the .php page. Any ideas why this would show as two separate URLs?
Technical SEO | BradBorst
Is robots.txt a must-have for a 150-page, well-structured site?
Looking at my logs, I see dozens of 404 errors each day from different bots trying to load robots.txt. I have a small site (150 pages) with clean navigation that allows the bots to index the whole site (which they are doing). There are no secret areas I don't want the bots to find (the secret areas are behind a login, so the bots won't see them). I have used rel=nofollow on internal links that point to my login page. Is there any reason to include a generic robots.txt file that contains "user-agent: *"? I have a minor reason: to stop getting 404 errors and clean up my error logs so I can find other issues that may exist. But I'm wondering: is not having a robots.txt file the same as having a default blank file (or a one-line file giving all bots full access)?
Technical SEO | scanlin
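For well-behaved crawlers, a missing robots.txt (a 404) is treated the same as an allow-all file, so the only practical gain is cleaner logs. A sketch of generating the minimal allow-all file:

```python
# Write a minimal allow-all robots.txt so bots stop logging 404s.
# An empty Disallow line means "nothing is disallowed" - i.e. full access,
# which is also how well-behaved bots treat a missing file.
robots = "User-agent: *\nDisallow:\n"

with open("robots.txt", "w") as f:
    f.write(robots)

print(open("robots.txt").read())
```

Upload the resulting file to the site root (e.g. example.com/robots.txt) and the 404s disappear without changing what bots are allowed to crawl.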