Having content be the first thing the bots see
-
If all of your homepage content sits in a tab set at the bottom of the page, but you really want it to be the first thing Google reads when it crawls your site, is there something you can implement so that Google reads your content before the rest of the page? Does this cause any violations, or are there any red flags raised by doing this? The goal here is just to get Google to read the content first, not to hide any content.
-
It should only be the first line as an h1, not the whole content block. We styled it all the same so it didn't look silly. We did make the local cities h2s... not sure if that's good or bad... but it stinks to serve so many cities and only rank at your physical location, especially when there are 20 cities within 20 miles here in the DC metro area.
Not sure if local "city pages" will work, or how that changes the landing page experience versus a very interactive home page... Google didn't think about all of that!
-
Just checked how you have done it and I see what you mean - it's a bit tricky. One thing I noticed is that all of that text is wrapped in an h1. I would take it out and put it in as standard content.
Also, if you could take the text that is in your slideshow images and convert it to readable text, that would give you a bit more relevant content on the site, which may help.
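Something along these lines - a rough sketch with placeholder copy and class names, not your actual content:

    <!-- Keep only the short headline in the h1 -->
    <h1>Branding and Web Design Agency</h1>

    <!-- The rest of the text moves out of the h1 and into normal paragraphs -->
    <p>Short, clear messaging about customer pain points and value props,
       as ordinary crawlable paragraph text rather than heading markup.</p>

    <!-- Slide text as real HTML layered over the image, instead of baked into the JPEG -->
    <div class="slide">
      <img src="slide-1.jpg" alt="Brand strategy work for a client">
      <p class="slide-caption">The message that used to live inside the image goes here.</p>
    </div>

    <!-- Secondary headings, e.g. the local cities, can stay as h2s -->
    <h2>Serving City One, City Two and the wider metro area</h2>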
Best of luck with it!
-
Well... darn... it's in the footer, pretty much. Check out imageworksstudio.com
(about tab, lower left)
Thing is... you don't really want to spam up your home page with content. As a branding firm, we prefer short, clear messaging that is focused on customer pain points, value props, etc. Of course these are images and not really SEO relevant anyway. Grrr - double-edged sword.
Thanks again. I appreciate your comments.
-
It can be done using CSS, but it needs to be clarified whether the content sits low on the page because of other visible content above it, or simply because of HTML markup (perhaps a navigation block) that appears earlier in the source. The former might make a difference, but I think Google can detect that trick anyway. The latter is irrelevant in my opinion, as those tags will be discounted.
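To illustrate what I mean by the CSS approach - a minimal sketch with placeholder class names, where the content comes first in the HTML source and CSS displays the navigation above it:

    <style>
      /* The page is a flex column; the "order" property controls the visual
         order, so the content block can come first in the source while the
         navigation is still shown at the top of the page. */
      .page { display: flex; flex-direction: column; }
      .page nav  { order: 1; }  /* displayed first */
      .page main { order: 2; }  /* displayed second, but first in the source */
    </style>

    <div class="page">
      <main>
        <h1>Short headline</h1>
        <p>The homepage copy you want crawled early sits here, near the top of the source.</p>
      </main>
      <nav>
        <a href="/">Home</a>
        <a href="/about">About</a>
      </nav>
    </div>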
-
There's been a bit of discussion about this before, and I seem to remember that using CSS to push content up the page actually had a slightly beneficial effect on rankings.
It's mainly going to be an issue if your content is really low down on the page due to things like intrusive banner ads or lots of adverts.
-
That's what I thought too... but I'm old-school SEO and have no idea if this has changed! Thanks.
-
This can be done via CSS, but I'm not sure doing so has value anymore. It used to be a common practice a couple of years back, but I don't think it is necessary now.
Related Questions
-
301 Redirects... Redirect all content at once or in increments?
Hello, I have been reading a lot about site migration and 301s, and I sometimes get confused by conflicting suggestions from different sources... So, in a site migration, should I 301 redirect all old URLs to the new ones at once, or little by little? I've seen this Google hangout that suggests doing it all at once (minute 13): https://plus.google.com/u/0/events/cfco632lor7bl55j3tg1g8332l0 But I have also read the opposite in other forums...
Intermediate & Advanced SEO | Koki.Mourao
-
Many pages small unique content vs 1 page with big content
Dear all, I am redesigning some areas of our website, eurasmus.com, and it is not clear to us which is the best option to follow. On our site we have a city area, i.e. www.eurasmus.com/en/erasmus-sevilla, which we are going to redesign, and a guide area where we explain about the city, etc.: http://eurasmus.com/en/erasmus-sevilla/guide/ - all with unique content. The thing is that, at this point, due to lack of resources our guide is not really deep, and we believe it does not add extra value for users to create a page with 500 characters of text for every topic (transport, etc.). It is also not really user friendly. On the other hand, these pages are getting some long-tail results, though not for our target keyword (they rank for things like "transport in sevilla", while our target would be "erasmus sevilla"). When redesigning the city area, we have to choose between:
a) www.eurasmus.com/en/erasmus-sevilla - with all the content on one page, about 2,500 characters of unique text.
b) www.eurasmus.com/en/erasmus-sevilla - with a better amount of content and a nice redesign, but keeping the guide pages.
What would you choose? Let me know what you think. Thanks!
Intermediate & Advanced SEO | Eurasmus.com
-
Can Google read content/see links on subscription sites?
If an article is published on The Times (for example), can Google bypass the subscription sign-in to read the content and index the links in the article? Example: http://www.thetimes.co.uk/tto/life/property/overseas/article4245346.ece In the above article there is a link to the resort's website, but you can't see this unless you subscribe. I checked the source code of the page with the subscription prompt present and the link isn't there. Is there a way that these sites deal with search engines differently from other user agents, to allow the content to be crawled and indexed?
Intermediate & Advanced SEO | CustardOnlineMarketing
-
Faceted Navigation and Dupe Content
Hi, we have a Magento website using layered navigation, and it has created a lot of duplicate content. I did ask Google in GWT to "No URLs" most of the query strings, except "p", which is for pagination. After reading up on how to tackle this issue, I tried a combination of meta noindex, robots and canonical, but it was still a snowball I was trying to control. In the end I opted for using Ajax for the layered navigation - no matter what option is selected, no parameters are latched onto the URL, so no duplicate or near-duplicate URLs are created. So please correct me if I am wrong, but no new links flow to those extra URLs now, so presumably in due course Google will remove them from the index? Am I correct in thinking that? Plus, these extra URLs have meta noindex on them too - yet I still have tens of thousands of pages indexed in Google. How long will it take for Google to remove them from the index? Will having meta noindex on the pages that need to be removed help? Is there any other way of removing thousands of URLs from GWT? Thanks again, B
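P.S. For reference, the tag combination I had on those faceted URLs looked roughly like this (placeholder URLs, not the exact Magento output):

    <!-- In the <head> of a filtered URL such as /shoes?color=red&size=9 -->
    <head>
      <!-- Keep the filtered page out of the index but let crawlers follow its links -->
      <meta name="robots" content="noindex, follow">

      <!-- Point the canonical at the base category page -->
      <link rel="canonical" href="http://www.example.com/shoes">
    </head>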
Intermediate & Advanced SEO | bjs2010
-
Category Content Duplication
Does indexing the category archive pages of a blog cause duplicate content? http://www.seomoz.org/blog/setup-wordpress-for-seo-success After reading this article I am unsure.
Intermediate & Advanced SEO | SEODinosaur
-
Ajax Content Indexed
I used the following guide to implement endless scroll: https://developers.google.com/webmasters/ajax-crawling/docs/getting-started Crawlers correctly read all of the URLs, and a "site:" query shows me all of the indexed URLs with #!key=value. I want only the first URL to be indexed; the other URLs should still be crawled but not indexed, as if they carried the robots meta tag "noindex, follow". How can I do that?
Intermediate & Advanced SEO | wwmind
-
Keyword/Content Consistency
My question is: if you have a keyword that is searched more when it's spelled wrong than when it's spelled right, what do you do? Do you target the misspelled word, or keep true to the correct spelling and say "oh well" to SEO? Also, along the same lines: what if you have a keyword with a hyphen in the middle of it? For instance, "website" and "web-site" (this isn't the actual keyword, just an example), where "drupal website" is searched more than "drupal web-site", but "wordpress web-site" is searched more than "wordpress website". Technically "website" is the correct spelling and way to write it, but people type "web-site" (again, not the real case - just an example).
Intermediate & Advanced SEO | blackrino
-
Duplicate Content from Article Directories
I have a small client with a PR2 website, 268 links from 21 root domains, MozTrust 5.5 and MozRank 4.5. However, whenever I check the number of links in Google with the link: operator, Google always gives the response "none". My client has a blog with many articles on it, but they have submitted every blog article to article directories as well, plainly and simply creating duplicate content. Is this the reason why their link: count is coming up as none? Is there something that can be done to correct the situation?
Intermediate & Advanced SEO | danielkamen