Content too buried in source code?
-
Our team is working on a refresh/redesign, and I am wondering if there's a quantifiable way of determining how high our metadata, H1, and paragraph text should sit in the source code, or even whether I should be concerned with that at all. Our navigation will likely have dozens of links (we're going to keep it to under 100), and this doesn't even factor in the design elements. I am concerned about the content being buried.
Are these the kind of concerns I should be having? Is there a measurable way to avoid it?
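For example, would a rough check like this be a sensible way to measure it? It's just a sketch; the URL is a placeholder for our site, and it only reports how far into the raw HTML the first <h1>, <main>, and <p> tags appear.

```python
# Rough way to quantify how "deep" the main content sits in the source.
# Not an official metric, just a sanity check; the URL is a placeholder.
import requests

url = "https://www.example.com/"  # placeholder for the redesigned page
html = requests.get(url, timeout=10).text

for tag in ("<h1", "<main", "<p"):
    pos = html.lower().find(tag)
    if pos == -1:
        print(f"{tag:>6}: not found")
    else:
        print(f"{tag:>6}: first appears {pos:,} characters in ({pos / len(html):.0%} of the document)")
```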
-
Great thread, Chris! Thank you for posting.
-
Thank you, David. These tips are much appreciated!
-
Hey seosarah,
Here's a fun thread with 3 of Moz's heaviest hitters weighing in on your topic, each with a bit of a different take on it...
-
I wouldn't be too concerned; Google is clever enough to identify page elements like the header, nav, and content separately.
Once the redesign is live, do a Fetch as Googlebot within Google Webmaster Tools to make sure it can crawl the entire page.
The other consideration you might look into is site speed. Have a look at how the redesign impacts load time: you can compare within Google Analytics (Content > Site Speed report) and/or use an external tool like http://www.webpagetest.org.
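If you want a quick before/after number without waiting on GA data, a rough sketch like this can help spot a regression. The URL is a placeholder, and it only times the raw HTML response, not a full render the way webpagetest.org does.

```python
# Fetch the page with a Googlebot user agent and time the response.
# Rough sketch only: the URL is a placeholder, and this measures the raw
# HTML response, not full render time.
import requests

url = "https://www.example.com/"  # placeholder for the redesigned page
headers = {
    "User-Agent": "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
}

resp = requests.get(url, headers=headers, timeout=30)
print("Status:", resp.status_code)
print("Response time:", resp.elapsed.total_seconds(), "seconds")
print("HTML size:", len(resp.text), "characters")
```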
Related Questions
-
Our original content is being outranked on search engines by smaller sites republishing our content.
We are a media site, www.hope1032.com.au, that publishes daily content on the WordPress platform using the Yoast SEO plugin. We allow smaller media sites to republish some of our content with the canonical field set to our URL. We have discovered some of our content is now ranking below those copies, or not visible at all, on some search engines when searching for the article heading. Any thoughts as to why? Have we got an SEO problem? An interesting point is that the small amount of content we have republished from others is not ranking against the original author on search engines.
Technical SEO | | Hope-Media0 -
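One quick check for the cross-domain canonical question above is to confirm the republishing sites actually output the canonical tag in their HTML and that it points at the original URL. A rough sketch, with placeholder URLs rather than the actual sites:

```python
# Rough check that a republished copy exposes a cross-domain canonical tag
# pointing back at the original article. Both URLs are placeholders.
import requests
from bs4 import BeautifulSoup

original_url = "https://www.example.com/original-article/"         # placeholder
republished_url = "https://partner.example.net/republished-copy/"  # placeholder

html = requests.get(republished_url, timeout=10).text
link = BeautifulSoup(html, "html.parser").find("link", rel="canonical")

if link is None:
    print("No canonical tag found on the republished copy")
else:
    href = link.get("href", "")
    print("Canonical points to:", href)
    print("Matches original:", href.rstrip("/") == original_url.rstrip("/"))
```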
Purchasing duplicate content
Morning all, I have a client who is planning to expand their product range (online dictionary sites) to new markets and is considering the acquisition of data sets from low-ranked competitors to supplement their own original data. They are quite large content sets, which would mean a very high percentage of the site (hosted on a new subdomain) would be made up of duplicate content. Just to clarify, the competitor's content would stay online as well. I need to lay out the pros and cons of taking this approach so that they can move forward knowing the full facts. As I see it, this approach would mean forgoing rankings for most of the site and would need a heavy dose of original content as well as supplementary on-page content built around the data. My main concern is that launching with this level of duplicate data would end up damaging the authority of the site and subsequently the overall domain. I'd love to hear your thoughts!
Technical SEO | | BackPack851 -
Duplicate Content - Different URLs and Content on each
Seeing a lot of duplicate content instances of seemingly unrelated pages. For instance, http://www.rushimprint.com/custom-bluetooth-speakers.html?from=topnav3 is being tracked as a duplicate of http://www.rushimprint.com/custom-planners-diaries.html?resultsperpg=viewall. Does anyone else see this issue? Is there a solution anyone is aware of?
Technical SEO | | ClaytonKendall0 -
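For the question above, one way to see why a checker pairs two seemingly unrelated URLs is to compare their visible text; if the similarity is high, shared template and boilerplate text is probably what the duplicate check is picking up rather than the product content itself. A rough sketch using the two URLs from the question:

```python
# Compare the visible text of the two URLs from the question. A high ratio
# suggests shared template/boilerplate text is dominating the comparison.
import difflib
import requests
from bs4 import BeautifulSoup

urls = [
    "http://www.rushimprint.com/custom-bluetooth-speakers.html?from=topnav3",
    "http://www.rushimprint.com/custom-planners-diaries.html?resultsperpg=viewall",
]

texts = []
for url in urls:
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    for tag in soup(["script", "style", "header", "footer", "nav"]):
        tag.decompose()  # drop non-content elements before comparing
    texts.append(" ".join(soup.get_text().split()))

ratio = difflib.SequenceMatcher(None, texts[0], texts[1]).ratio()
print(f"Visible-text similarity: {ratio:.0%}")
```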
Duplicate Content Reports
Hi, Duplicate content reports for a new client are showing very high numbers (8,000+). Many of them seem to be for sign-in, register, and login type pages. Is this a scenario where the best course of action is likely to be the parameter handling tool in GWT? Cheers, Dan
Technical SEO | | Dan-Lawrence0 -
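Before reaching for the parameter handling tool in GWT, it can help to tally what is actually driving the 8,000+ flagged URLs. A rough sketch, assuming the flagged URLs can be exported to a CSV; the filename and the "URL" column name are assumptions:

```python
# Tally which query parameters and paths account for the flagged URLs
# before deciding on parameter handling in GWT.
import csv
from collections import Counter
from urllib.parse import urlparse, parse_qs

param_counts = Counter()
path_counts = Counter()

with open("duplicate_urls.csv", newline="") as f:  # hypothetical export file
    for row in csv.DictReader(f):
        parsed = urlparse(row["URL"])  # assumed column name
        path_counts[parsed.path] += 1
        for param in parse_qs(parsed.query):
            param_counts[param] += 1

print("Most common query parameters:", param_counts.most_common(10))
print("Most repeated paths:", path_counts.most_common(10))
```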
Duplicate Content - Reverse Phone Directory
Hi, Until a few months ago, my client's site had about 600 pages. He decided to implement what is essentially a reverse phone directory/lookup tool. There are now about 10,000 reverse directory/lookup pages (.html), all with short and duplicate content except for the phone number and the caller name. Needless to say, I'm getting thousands of duplicate content errors. Are there tricks of the trade to deal with this? In nosing around, I've discovered that the pages are showing up in Google search results (when searching for a specific phone number), usually in the first or second position. Ideally, each page would have unique content, but that's next to impossible with 10,000 pages. One potential solution I've come up with is incorporating user-generated content into each page (maybe via Disqus?), which over time would make each page unique. I've also thought about suggesting that he move those pages onto a different domain. I'd appreciate any advice/suggestions, as well as any insights into the long-term repercussions of having so many dupes on the ranking of the 600 solidly unique pages on the site. Thanks in advance for your help!
Technical SEO | | sally580 -
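For the directory question above, a useful first step is simply measuring how thin the generated pages are, so you know how many would need unique content (or a noindex) before deciding on an approach. A rough sketch, assuming the generated .html files are available locally; the directory path and the 150-word threshold are arbitrary placeholders:

```python
# Count the words of visible text in each generated lookup page to size up
# the thin-content problem. Paths and threshold are placeholders.
import glob
from bs4 import BeautifulSoup

thin_pages = 0
total_pages = 0

for path in glob.glob("directory_pages/*.html"):  # placeholder directory
    total_pages += 1
    with open(path, encoding="utf-8") as f:
        text = BeautifulSoup(f.read(), "html.parser").get_text()
    if len(text.split()) < 150:
        thin_pages += 1

print(f"{thin_pages} of {total_pages} pages have fewer than 150 words")
```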
Similar Content vs Duplicate Content
We have articles written on how to set up POP3 and IMAP. The topics are technically different, but the settings within them are very similar, and thus the initial content was similar. SEOmoz reports these pages as duplicate content. It's not optimal for our users to have them merged into one page. What is the best way to handle similar content while not getting flagged for duplicate content?
Technical SEO | | Izoox0 -
Duplicate Content Issue
Hi Everyone, I ran into a problem I didn't know I had (thanks to the SEOmoz tool) regarding duplicate content. My site is oxford ms homes.net, and when I built the site, the web developer used PHP. After he was done, I saw that the URLs looked like this: "/blake_listings.php?page=0", and I wanted them like this: "/blakes-listings". He changed them with no problem, and he did the same with all 300 or so pages that I have on the site. I just found, using the crawl diagnostics tool, that I have about 3,000 duplicate content issues. Is there an easy fix for this at all, or does he have to go in and 301 redirect EVERY SINGLE URL? Thanks for any help you can give.
Technical SEO | | blake-766240
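For the 301 question above, whatever redirect approach the developer takes, it is worth verifying that each old parameter-style URL returns a single 301 straight to its new clean URL rather than a 302 or a redirect chain. A rough sketch; the domain and the mapping are placeholders to fill in with the real old/new pairs:

```python
# Verify that each old parameter-style URL returns a single 301 directly to
# its new clean URL (not a 302 and not a redirect chain).
import requests

redirects = {
    # placeholder domain; old and new paths follow the pattern from the question
    "https://www.example.com/blake_listings.php?page=0": "https://www.example.com/blakes-listings",
    # ...the rest of the ~300 old URLs
}

for old, new in redirects.items():
    resp = requests.get(old, allow_redirects=True, timeout=10)
    hops = [r.status_code for r in resp.history]  # status of each redirect hop
    ok = hops == [301] and resp.url.rstrip("/") == new.rstrip("/")
    status = "OK " if ok else "FIX"
    print(f"{status} {old} -> {resp.url} via {hops}")
```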