Sufficient Words in Content error, despite having more than 300 words
-
My client has just moved to a new website, and I now receive a "Sufficient Words in Content" error on all of the site's pages, although each page contains far more than 300 words. For example:
- https://www.assuta.co.il/category/assuta_sperm_bank/
- https://www.assuta.co.il/category/international_bank_sperm_donor/
I also see warnings for "Exact Keyword Used in Document at Least Once," although the keywords do appear on the pages.
The question is: why can't the Moz crawler see the pages' content?
-
Well, shoot. Apparently I'm not quite as familiar with UTF-8 characters and that distinction as I thought. Totally my mistake.
The line for our tools falls closer to English character sets. Even seemingly small modifiers like accents can throw off our ability to accurately count words or detect matches on a page. So when it comes to Hebrew characters, we simply don't have a good way to handle them.
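The counting problem described above can be shown in a minimal Python sketch (the Hebrew sentence below is a made-up sample, not taken from the pages in question): a tokenizer that only recognizes Latin letters finds zero words in Hebrew text, while a Unicode-aware pattern counts them correctly.

```python
import re

# Made-up Hebrew sample sentence (eight words).
hebrew = "בנק הזרע של אסותא מציע שירותי תרומת זרע"

latin_only = re.findall(r"[A-Za-z]+", hebrew)  # tokenizer limited to Latin letters
unicode_aware = re.findall(r"\w+", hebrew)     # \w matches Hebrew letters in Python 3

print(len(latin_only))     # 0 — a Latin-only counter sees an "empty" page
print(len(unicode_aware))  # 8 — the actual word count
```

A word counter built around the first pattern would report "not enough words" on any Hebrew page, no matter how long the page actually is.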
-
But the page is UTF-8. Take a look at the source code: view-source:https://www.assuta.co.il/category/assuta_sperm_bank/
Moz did a pretty good job with the old website before it was changed, but now all my grades are 50, although the site is UTF-8 (on the .NET Framework, if that matters).
Can you re-check that, please?
Michal
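For what it's worth, a page's declared encoding can be checked directly. Here is a minimal sketch using only Python's standard library; the HTML fragment is a hypothetical stand-in, not the actual page source:

```python
from html.parser import HTMLParser

class CharsetSniffer(HTMLParser):
    """Collect charset declarations from <meta> tags."""
    def __init__(self):
        super().__init__()
        self.charsets = []

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        if "charset" in attrs:  # <meta charset="utf-8">
            self.charsets.append(attrs["charset"].lower())
        elif attrs.get("http-equiv", "").lower() == "content-type":
            # <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
            content = attrs.get("content", "").lower()
            if "charset=" in content:
                self.charsets.append(content.split("charset=")[1].strip())

# Hypothetical fragment standing in for the fetched page source.
sample = '<html><head><meta charset="utf-8"><title>Sample</title></head><body>...</body></html>'
sniffer = CharsetSniffer()
sniffer.feed(sample)
print(sniffer.charsets)  # ['utf-8'] — the page declares UTF-8
```

On a real page you would also compare this against the `charset` in the `Content-Type` HTTP response header, which takes precedence over the meta tag.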
-
Hey Michal!
I'm really sorry you're running into this issue! Unfortunately, it looks to be cropping up because of the non-Latin characters used on the page. Our crawler has a very difficult time interpreting non-Latin character sets, and it often reports counts and matches inaccurately on pages composed of them.

I'm terribly sorry for the inconvenience! It's definitely something we're looking to address down the road, but I'm afraid we don't have the resources to improve that functionality at the moment.
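The accent point from earlier in the thread can be made concrete: the same visible word can be encoded in two different Unicode forms, so a naive string comparison misses an "exact keyword" match unless both sides are normalized first. A minimal sketch using Python's standard `unicodedata` module:

```python
import unicodedata

nfc = "caf\u00e9"                        # "café" with a precomposed é (NFC form)
nfd = unicodedata.normalize("NFD", nfc)  # "café" as plain e + combining accent (NFD form)

print(nfc == nfd)                                # False — visually identical, byte-different
print(unicodedata.normalize("NFC", nfd) == nfc)  # True once both sides are normalized
```

An exact-match checker that skips this normalization step would report the keyword as missing even though it is plainly visible on the page.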