Error after scanning with browseo.net
-
Good day!
I have done a scan on my site with browseo.net (and a few other similar scanners) and got the mess seen in the screenshot.
I've tried deleting all the files in the website folder and replacing them with a single image file, but it still shows the same error.
What could this mean, and should I be worried?
P.S.
Found my answer after contacting the helpful support at browseo.net:
It took me some time to figure out what was going on, but it seems as if you are mixing content types. Browsers are quite smart when it comes to interpreting the contents, so they are much more forgiving than we are.
Browseo crawls your website and detects that you are declaring utf-8 as part of the meta information. Based on that declaration, it decodes the content in a different character encoding than the one it is actually stored in. In a quick test, I tried to fetch the content type from the response object, but without any success. So I suspect that in reality your content is not utf-8 encoded when you load it into Joomla. The wrong character encoding is then carried over into the body (which explains why we can still read the header information). All of this explains the error.
In order for it to work in browseo, you'd have to set the content type correctly, or convert your content to utf-8 before loading it. It may be that you are storing this incorrectly in the database (check your db settings for a content type other than utf-8) or that other settings are a bit messed up. The good news is that Google is probably interpreting your website correctly, so you won't be penalized for this, but it's perhaps something to look into…
From Paul Piper
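The mismatch the support team describes can be reproduced with a short standalone sketch (this is a hypothetical check, not browseo's actual code, and the sample page bytes below are made up): extract the charset the page declares in its meta tag, then see whether the raw bytes actually decode with it.

```python
# Minimal sketch: does a page's body really decode as the charset its
# <meta> tag declares? (Hypothetical helper, not browseo's actual code.)
import re

def declared_charset(html_bytes):
    """Pull the charset out of a <meta charset=...> or http-equiv tag."""
    head = html_bytes[:2048].decode("ascii", errors="ignore")
    m = re.search(r'charset=["\']?([\w-]+)', head, re.IGNORECASE)
    return m.group(1).lower() if m else None

def charset_matches(html_bytes):
    """True only if the body actually decodes with the declared charset."""
    charset = declared_charset(html_bytes)
    if charset is None:
        return False
    try:
        html_bytes.decode(charset)
        return True
    except (UnicodeDecodeError, LookupError):
        return False

# A made-up page that declares utf-8 but whose body is latin-1 encoded,
# which is the kind of mismatch described above:
page = '<meta charset="utf-8"><p>café</p>'.encode("latin-1")
print(charset_matches(page))  # → False: the é byte (0xE9) is not valid utf-8
```

If the check fails, either fix the meta tag to declare the real encoding, or (better, as suggested above) convert the stored content to utf-8.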
-
Is the link to an image? If so, that is the cause. Try uploading a simple HTML page; that should work.
Related Questions
-
Getting a ton of "not found" errors in Webmaster tools stemming from /plugins/feedback.php
So recently Webmaster Tools showed a million "not found" errors with the URL "plugins/feedback.php/blah blah blah." A little googling helped me find that this comes from the Facebook comment box plugin; apparently some recent changes have made this start happening. The question is, what's the right fix? The thread I was reading suggested adding "Disallow: /plugins/feedback.php" to the robots.txt file and marking them all fixed. Any ideas?
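The robots.txt rule suggested in that thread would look like this (Disallow matches by prefix, so it also covers the /plugins/feedback.php/blah-blah-blah variants):

```text
User-agent: *
Disallow: /plugins/feedback.php
```

Note this only stops compliant crawlers from fetching those URLs going forward; it doesn't remove errors already reported, hence the "mark them all fixed" step.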
Technical SEO | cbrant777
-
4XX client error
I am a bit confused... my recent site crawl told me I had one 4XX client error (high priority). This is the page:
http://www.seadwellers.com/wp-content/uploads/2014/06/367679d2+0+277-SD.mp4
This link below is listed as the "linking page" - I guess that is where the link comes from?
http://www.seadwellers.com/category/dive-travel/
I'm just not getting this... where did the page in the first link come from, and what is the deal with the category/dive-travel/ page? And how do I fix it? Any guidance would be greatly appreciated.
Technical SEO | sdwellers
-
How to solve this merchant error?
Hello All, In my Google Merchant account, lots of warnings suddenly appeared, i.e. 1) Automatic item updates: missing schema.org microdata price information, and 2) missing microdata for condition. Can you please tell me how to solve these errors? Thanks!
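For reference, the price and condition microdata those warnings refer to looks roughly like this (a generic schema.org Offer sketch with placeholder product name and values, not your actual markup):

```html
<div itemscope itemtype="http://schema.org/Product">
  <span itemprop="name">Example product</span>
  <div itemprop="offers" itemscope itemtype="http://schema.org/Offer">
    <!-- price information the "automatic item updates" warning asks for -->
    <span itemprop="price" content="29.99">$29.99</span>
    <meta itemprop="priceCurrency" content="USD" />
    <!-- condition microdata for the second warning -->
    <link itemprop="itemCondition" href="http://schema.org/NewCondition" />
  </div>
</div>
```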
John
Technical SEO | varo
-
Weird, long URLS returning crawl error
Hi everyone, I'm getting a crawl error "URL too long" for some really strange URLs, and I'm not sure where they are being generated from or how to resolve it. It's all on one page, our request-info page. Here are some examples:
http://studyabroad.bridge.edu/request-info/?program=request info > ?program=request info > ?program=request info > ?program=request info > ?program=programs > ?country=country?type=internships&term=short%25
http://studyabroad.bridge.edu/request-info/?program=request info > ?program=blog > notes from the field tefl student elaina h in chile > ?utm_source=newsletter&utm_medium=article&utm_campaign=notes%2Bfrom%2Bthe%2Bf
Has anyone seen anything like this before, or have an idea of what may be causing it? Thanks so much!
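As a rough way to triage URLs like these in a crawl export, something like the following could flag them (the length limit and character list are assumptions for illustration, not what any particular crawler actually uses):

```python
# Minimal sketch: flag crawled URLs that are suspiciously long or that
# contain characters which should have been percent-encoded, like the
# literal spaces and ">" in the examples above. Threshold is an assumption.
MAX_URL_LENGTH = 255  # crawlers vary; pick a limit that fits your tool

def url_problems(url):
    problems = []
    if len(url) > MAX_URL_LENGTH:
        problems.append("too long (%d chars)" % len(url))
    if any(c in url for c in " <>"):
        problems.append("contains unencoded characters")
    return problems

url = ("http://studyabroad.bridge.edu/request-info/?program=request info > "
       "?program=programs > ?country=country")
print(url_problems(url))  # → ['contains unencoded characters']
```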
Technical SEO | Bridge_Education_Group
-
Website of only circa 20 pages drawing 1,000s of errors?
Hi, One of the websites I run is getting 1,000s of errors for duplicate titles/content even though there are only approximately 20 pages. SEOMoz seems to be finding pages that have duplicated themselves. For example, a blog page (/blog) is appearing as /blog/blog, then /blog/blog/blog, and so on. Can anyone shed some light on why this is occurring? Thanks.
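Loops like /blog/blog/blog usually come from a relative link (an href like "blog/…" without a leading slash), so each crawled page links one level deeper. A quick heuristic to spot such URLs in a crawl export (the example.com URLs below are made up):

```python
# Minimal sketch: detect crawl-loop URLs where the same path segment
# repeats consecutively, e.g. /blog/blog/blog, which typically comes
# from a relative link missing its leading slash.
from urllib.parse import urlsplit

def has_repeated_segments(url):
    segments = [s for s in urlsplit(url).path.split("/") if s]
    # any two adjacent segments being identical suggests a loop
    return any(a == b for a, b in zip(segments, segments[1:]))

print(has_repeated_segments("http://example.com/blog/blog/blog"))  # → True
print(has_repeated_segments("http://example.com/blog/post-1"))     # → False
```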
Technical SEO | TheCarnage
-
My page says it has 16 errors, need help
My page says it has 16 errors, all of them due to duplicate content. How do I fix this? I believe it's only due to my meta tag description.
Technical SEO | gaji
-
Issue with 'Crawl Errors' in Webmaster Tools
Have an issue with a large number of 'Not Found' webpages being listed in Webmaster Tools. In the 'Detected' column, the dates are recent (May 1st - 15th). However, clicking into the 'Linked From' column, all of the link sources are old, many from 2009-10. Furthermore, I have checked a large number of the source pages to double-check that the links don't still exist, and they don't, as I expected. Firstly, I am concerned that Google thinks there is a vast number of broken links on this site when in fact there is not. Secondly, if the errors do not actually exist (and never actually have), why do they remain listed in Webmaster Tools, which claims they were found again this month?! Thirdly, what's the best and quickest way of getting rid of these errors? Google advises that using the 'URL Removal Tool' will only remove the pages from the Google index, NOT from the crawl errors. The info is that if they keep returning 404s, they will automatically get removed. Well, I don't know how many times they need to get that 404 in order to get rid of a URL and link that haven't existed for 18-24 months! Thanks.
Technical SEO | RiceMedia
-
Link API returns Error 500
http://lsapi.seomoz.com/linkscape/links/nz.yahoo.com?SourceCols=4&Limit=100&Sort=domain_authority&Scope=domain_to_domain&Filter=external+follow&LinkCols=4
Hi folks, any idea why the above returns Error 500? It seems to pertain to the domain: it works on other sites, just not nz.yahoo.com. Thanks!
Technical SEO | jimbo_kemp