How much impact does bad HTML coding really have on SEO?
-
My client has a site that we are trying to optimise. However, the code is really pretty bad.
There are 205 errors showing in the W3C validator. The title, meta description, and meta keywords tags each appear twice. There is truly excessive JavaScript, and everything has been laid out in tables.
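To give a simplified picture, the head looks roughly like this (illustrative values, not the real markup):

    <head>
      <title>Home | Client Site</title>
      <meta name="description" content="First description...">
      <meta name="keywords" content="widgets, gadgets">
      <!-- ...then the same tags repeated further down -->
      <title>Home | Client Site</title>
      <meta name="description" content="A second, conflicting description...">
      <meta name="keywords" content="widgets, gadgets">
    </head>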
How much do you think this is really impacting the opportunity to rank? There has been quite a bit of discussion recently along the lines of whether on-page SEO still has much impact.
I just want to be sure before I recommend a whole heap of code changes that could cost her a lot - especially if the impact/return could be minuscule.
Should it all be cleaned up?
Many thanks
-
Hi Chammy,
I inherited a site that reported 3,184 crawl errors in Moz, and a significant number of them (nearly 600) were duplicate titles and content. I have that down to under 1,000 total errors and only 86 critical errors. I have seen my rankings grow pretty substantially, and in one week six pages each climbed more than 20 positions. I can share the Moz rank report if you would like to see it.
So yes, it does have an impact.
-
I'm sorry, I don't have any evidence from the user experience point of view, although I would also be interested to see the results of any studies.
I will say that from a site management/maintenance point of view it makes sense to keep the code as clean as possible. I've been involved in projects where a considerable chunk of the cost was incurred just unravelling the mess before any new changes could be made!
-
Thanks very much everyone - very helpful.
Good point re page speed - the pages are certainly slow to load, so this could well be down to the huge amount of JavaScript and bad code.
And yes, I think the duplicate tags should be sorted - that shouldn't be difficult.
Has anyone got any tangible results that they've seen as a result of cleaning up js and code?
-
If you've got things like duplicate title and meta description tags going on then I'd certainly take a look at fixing those. Being able to manage these two tags is vital to controlling the way your pages appear in the search results. (And your title tag is an important ranking factor.)
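For what it's worth, each page should end up with exactly one of each, along these lines (hypothetical values):

    <head>
      <title>Blue Widgets - Example Co</title>
      <meta name="description" content="Hand-made blue widgets with free UK delivery.">
    </head>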
Normally, if your page doesn't validate it's not a major problem, and search engines won't penalise you for it. If, however, your page is so badly crafted that the HTML errors and general page structure make it difficult for the search engines (and humans) to read your page, then you're going to suffer.
The key is to make sure that your site/page content is accessible. How accessible is your page to someone with disabilities, using a screen reader, etc.?
You've got to make sure that the search engines can understand what your page is about or your page won't be seen as a relevant page for any search terms...
How bad is it? How does Google render the page in its Instant Previews? (You can check this in Google Webmaster Tools.)
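As a rough illustration (a sketch, not a drop-in fix), moving from table layout towards semantic markup makes the structure explicit to screen readers and crawlers alike:

    <!-- Table-based layout: the structure carries no meaning -->
    <table>
      <tr><td>Site name</td></tr>
      <tr><td>Menu</td><td>Main content here</td></tr>
    </table>

    <!-- Semantic equivalent: each region is labelled by its element -->
    <header><h1>Site name</h1></header>
    <nav>Menu</nav>
    <main>
      <p>Main content here</p>
    </main>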
-
I personally don't worry about bad code unless it slows down my page or can possibly make things confusing for search engines or readers.
If the title and meta tags are appearing twice this could be confusing for search engines, so I would change that. But if you've got things like an unclosed tag here and there, I personally don't think that's going to be much of a factor.
-
Invalid code has a small effect on ranking. However, if the invalid code causes usability issues such as slow load times or a high bounce rate, it can lower your rankings and of course cut into conversions.
Some of it is a higher priority than the rest. I would say definitely remove the meta keywords tag and combine the JS files. The tables, while out of date, are not a big issue.
If you have the time and resources then yes, it should all be cleaned up. If not, clean up the major problems first.
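For instance, several render-blocking scripts in the head (hypothetical file names) could be merged into one file and deferred:

    <!-- Before: multiple render-blocking scripts -->
    <script src="/js/menu.js"></script>
    <script src="/js/slider.js"></script>
    <script src="/js/tracking.js"></script>

    <!-- After: one combined, minified file, deferred so it doesn't block rendering -->
    <script src="/js/site.min.js" defer></script>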