Using "Div's" to place content at top of HTML
-
Is it still good practice to use "divs" to place content at the top of the HTML code when that content actually appears at the bottom of the web page?
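For context, the trick being asked about is the old "content-first" layout: the main content is written first in the HTML source, and CSS repositions the other elements visually. A minimal sketch, with illustrative class names and sizes:

  <!-- Content-first trick (illustrative): the content comes first in
       the source; CSS pulls the sidebar back up to the top of the page. -->
  <style>
    .sidebar { position: absolute; top: 0; left: 0; width: 200px; }
    .content { margin-left: 220px; }
  </style>
  <div class="content">Main page content here...</div>
  <div class="sidebar">Navigation links here...</div>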
-
Thanks, guys. That was what I thought, but it is always good to get some confirmation.
-
Hey
I agree with Dirk: just don't do it. It's not good practice at all, and your HTML should already be structured in such a way that the content sits in the right place, so search engines can easily recognise where it belongs.
Martijn.
-
I don't think it's still good practice. Googlebot has evolved from a simple text browser into a kind of mini version of Chrome.
It is perfectly capable of assessing where the content is located on your page. Trying to trick Google into thinking that content at the bottom is in fact at the top, because it appears first in the HTML, is not going to help. No real hard evidence - just my opinion.
Dirk
Related Questions
-
Duplicate content: using the robots meta tag in conjunction with the canonical tag?
We have a WordPress instance on an Apache subdomain (let's say it's blog.website.com) alongside our main website, which is built in Angular. The tech team is using Akamai to do URL rewrites so that the blog posts appear under the main domain (website.com/more-keywords/here). However, due to the way they configured the WordPress install, they can't do a wildcard redirect under htaccess to force all the subdomain URLs to appear as subdirectories, so as you might have guessed, we're dealing with duplicate content issues.
They could in theory do manual 301s for each blog post, but that's laborious and a real hassle given our IT structure (we're a financial services firm, so lots of bureaucracy and regulation). In addition, due to internal limitations (they seem mostly political in nature), a robots.txt file is out of the question.
I'm thinking the next best alternative is the combined use of the robots meta tag (noindex, follow) alongside the canonical tag to try to point the bot to the subdirectory URLs. I don't think this would be unethical use of either feature, but I'm trying to figure out whether the two would conflict in some way. Or maybe there's a better approach with which we're unfamiliar or that we haven't considered?
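For clarity, the combination being described would put both tags in the head of each subdomain post, with the canonical pointing at the rewritten subdirectory URL. A minimal sketch, with placeholder URLs:

  <!-- On a blog.website.com post; URLs are placeholders -->
  <meta name="robots" content="noindex, follow">
  <link rel="canonical" href="http://website.com/more-keywords/here">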
Technical SEO | prasadpathapati
-
Did It Hurt My Rank? HELP!!!
Hi guys, John here. I just began using the Moz service several days ago. Recently I noticed that one of my keywords was ranking on the first Google search results page, but after I built some external links, it dropped from position 1 to 8. I think the low-quality external links may have caused the drop. So my question: should I remove the bad-quality links or build more high-quality links? Which is better for me? It's easy to remove the bad links but hard to build high-quality links. So what's your opinion, guys? Thanks, John
Technical SEO | smokstore
-
Keyword use in city-specific "homepages"
My company, RightFit Personal Training, is a marketplace for people to find independent personal trainers based on preference. I am currently in the process of expanding nationally, and each city essentially has its own homepage. Currently, the URL of each city page ends in the name of the city only. For example, the URL for the Houston page is www.rightfitpersonaltraining.com/houston/. The issue here is that I actually wanted my contracted developer to add the state abbreviation as well as the words "personal trainers" to the end of each city page URL. So what I really wanted to see out of the Houston personal training page was www.rightfitpersonaltraining.com/houston/tx/personal/trainers. Do you think it is worth it for me to have my developer go back and change the URL structure of the city homepages to reflect the latter? This should also benefit the structure of the personal trainer profiles, because they could all fall under their specific city homepages. For example, I think it would be to my benefit if each trainer profile URL ended in /city/state/personal/trainer/trainername. Thoughts?
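If the change does go ahead, the old city URLs would normally be 301-redirected to the new ones. A hypothetical Apache .htaccess sketch, assuming mod_rewrite is available (which the final setup may not use), with illustrative rules only:

  # Hypothetical sketch: 301-redirect an old city page to the proposed
  # structure; one rule per city. Paths are illustrative.
  RewriteEngine On
  RewriteRule ^houston/?$ /houston/tx/personal/trainers [R=301,L]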
Technical SEO | mkornbl2
-
Yoast's Magento Guide's "Nofollowing unnecessary links" - is that really a good idea?
I have been following Yoast's Magento guide here: https://yoast.com/articles/magento-seo/ Under section 3.2 it says: "Nofollowing unnecessary links: Another easy step to increase your Magento SEO is to stop linking to your login, checkout, wishlist, and all other non-content pages. The same goes for your RSS feeds, layered navigation, add to wishlist, add to compare, etc." I always thought that nofollowing internal links is a bad idea, as it just throws link juice out the window. Why would Yoast recommend doing this? To me they are suggesting link sculpting via nofollow, but that has not worked since 2009!
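For reference, the markup the guide is recommending looks like this; the URL is illustrative:

  <!-- Nofollowed internal link to a non-content page (illustrative URL) -->
  <a href="/checkout/cart/" rel="nofollow">View cart</a>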
Technical SEO | PaddyDisplays
-
No confirmation page on Google's Disavow links tool?
I've been going through and doing some spring cleaning on some spammy links to my site. I used Google's Disavow links tool, but after I submit my text file, nothing happens. Should I be getting some sort of confirmation page? After I upload my file, I don't get any notifications telling me Google has received my file or anything like that. It just takes me back to this page: http://cl.ly/image/0S320q46321R/Image 2013-04-26 at 11.15.25 AM.png Am I doing something wrong or is this what everyone else is seeing too?
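For anyone double-checking their upload, the disavow file itself is just plain text in this format; the entries below are placeholders:

  # Example disavow.txt - lines starting with # are comments
  domain:spammy-example.com
  http://www.example.com/spammy-page.html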
Technical SEO | shawn81
-
Duplicate content with "no results found" search result pages
We have a motorcycle classifieds section that lets users search for motorcycles for sale using various drop-down menus to pick year-make-type-model-trim, etc. These search results create URLs such as:
www.example.com/classifieds/search.php?vehicle_manufacturer=Triumph&vehicle_category=On-Off Road&vehicle_model=Tiger&vehicle_trim=800 XC ABS
We understand that all of these URL varieties are considered unique URLs by Google. The issue is that we are getting duplicate content errors on the pages that have no results, as they have no content to distinguish themselves from each other. URLs like:
www.example.com/classifieds/search.php?vehicle_manufacturer=Triumph&vehicle_category=Sportbike
and
www.example.com/classifieds/search.php?vehicle_manufacturer=Honda&vehicle_category=Streetbike
will each have a results page that says "0 results found". I'm wondering how we can distinguish these "unique" pages better? Some thoughts:
- make sure the <title> reflects what was searched
- add a heading that may say "0 results found for Triumph On-Off Road Tiger 800 XC ABS"
Can anyone please help out and lend some ideas in solving this? Thank you.
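As a sketch of those two suggestions, the head and heading of a zero-result page could be made distinct roughly like this; the noindex line is an optional extra for keeping thin zero-result pages out of the index, and all values are illustrative:

  <!-- Illustrative zero-result page for the Triumph Sportbike search -->
  <head>
    <title>0 results found for Triumph Sportbike classifieds</title>
    <!-- Optional: keep thin zero-result pages out of the index -->
    <meta name="robots" content="noindex, follow">
  </head>
  <body>
    <h1>0 results found for Triumph Sportbike</h1>
  </body>
Technical SEO | seoninjaz
-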
Using a Feedburner RSS link in your blog's header tag
It was suggested in Quick Sprout's Advanced SEO guide that it's good form to place your Feedburner RSS link into the header tag of your blog. Anyone know if this needs to be done for every page header of the blog, or just the home/main/index page? Thanks
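For reference, the tag being described is an RSS autodiscovery link in the page head, along these lines; the feed URL is a placeholder:

  <!-- RSS autodiscovery link; the FeedBurner URL is a placeholder -->
  <link rel="alternate" type="application/rss+xml"
        title="Blog RSS feed"
        href="http://feeds.feedburner.com/YourFeedName">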
Technical SEO | Martin_S
-
What's the best free tool for checking for broken links?
I'm trying to find the best tool to check for broken links on our site. We have over 11k pages and I'm looking for something fast and thorough! I've tried Xenu and LinkChecker. Any other ideas?
Technical SEO | CIEEwebTeam