Nesting tag within tables?
-
I am working with a site that uses tables to display data. My question is: should I nest heading tags within the table header (<th>) or <caption> tags? For example, the beginning of a new row could have a "row header" that is a heading tag, or the caption could be wrapped in a heading.
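For illustration, the two placements being asked about might look like this in markup (a minimal sketch; the table data and heading text are made up):

```html
<table>
  <!-- The caption element permits flow content, so a heading can sit inside it -->
  <caption><h2>Quarterly Sales</h2></caption>
  <thead>
    <tr>
      <th scope="col">Region</th>
      <th scope="col">Q1</th>
      <th scope="col">Q2</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <!-- A "row header" is conventionally a plain th with scope="row", not a heading tag -->
      <th scope="row">North</th>
      <td>1,200</td>
      <td>1,450</td>
    </tr>
  </tbody>
</table>
```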
-
Heading tags are not that big of a ranking factor, if at all. They are important only insofar as they help Google determine what a page is about, which table header tags should do as well.
If you really want to see whether they make a difference, try running your page with heading tags one month, then without them the next, and see if rankings change. Then write a blog post about it on YouMoz for a nice backlink.
Related Questions
-
NoIndex tag, canonical tag or automatically generated H1's for automatically generated enquiry pages?
What would be better for automatically generated accommodation enquiry pages for a travel company? NoIndex tag, canonical tag, automatically generated H1's or another solution? This is the homepage: https://www.discoverqueensland.com.au/ You would enquire from a page like this: https://www.discoverqueensland.com.au/accommodation/sunshine-coast/twin-waters/the-sebel-twin-waters This is the enquiry form: https://www.discoverqueensland.com.au/accommodation-enquiry.php?name=The+Sebel+Twin+Waters&region_name=Sunshine+Coast
Technical SEO | Kim_Lazaro
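For reference, the two head-level options mentioned in the question would look roughly like this on an enquiry page (a sketch only; the canonical target URL is an assumption, not a recommendation from the thread):

```html
<!-- Option 1: keep the auto-generated enquiry pages out of the index entirely -->
<meta name="robots" content="noindex, follow">

<!-- Option 2: point each parameterized enquiry URL at one canonical enquiry page -->
<link rel="canonical" href="https://www.discoverqueensland.com.au/accommodation-enquiry.php">
```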
No index tag robots.txt
Hi Mozzers, A client's website has a lot of internal directories defined as /node/*. I already added the rule 'Disallow: /node/*' to the robots.txt file to prevent bots from crawling these pages. However, the pages are already indexed and appear in the search results. In an article by Deepcrawl, they say you can simply add the rule 'Noindex: /node/*' to the robots.txt file, but other sources claim the only way is to add a noindex directive in the meta robots tag of every page. Can someone tell me which is the best way to prevent these pages from getting indexed? Small note: there are more than 100 pages. Thanks!
Technical SEO | WeAreDigital_BE
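As a side note, a Disallow rule only blocks crawling; it does not deindex pages that are already indexed. Python's standard-library robotparser can be used to sanity-check how such a rule is interpreted (example.com is a stand-in domain, and the prefix form /node/ is used here because the stdlib parser does plain prefix matching rather than Google-style wildcards):

```python
from urllib import robotparser

# Parse the same kind of rule the question describes
rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /node/",
])

# /node/ URLs are blocked from crawling...
print(rp.can_fetch("*", "https://example.com/node/123"))  # False
# ...but other paths remain crawlable
print(rp.can_fetch("*", "https://example.com/about"))     # True
```

This only answers "may this URL be crawled?"; whether an already-indexed page stays in the results is a separate question, which is why the meta robots noindex route is usually suggested.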
Removing a canonical tag from Pagination pages
Hello, Currently on our site we have the rel=prev/next markup for pagination, along with a self-pointing canonical via the Yoast plugin. However, on page 2 of our paginated series (there are only 2 pages currently), the canonical points to page one rather than page 2. My understanding is that if you use a canonical on paginated pages it should point to a view-all page as opposed to page one. I also believe that you don't need to use both a canonical and the rel=prev/next markup; one or the other will do. As we use the markup, I wanted to get rid of the canonical. Would this be correct? For those who use the Yoast plugin, have you managed to get that to work? Thanks!
Technical SEO | jessicarcf
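For context, dropping the canonical and keeping only the pagination markup would leave page 2's head looking something like this (URLs are placeholders, not from the thread):

```html
<!-- On page 2 of 2, only a prev link is needed, since there is no page 3 -->
<link rel="prev" href="https://www.example.com/category/">
<!-- If a self-referencing canonical were kept instead, it should point at page 2 itself: -->
<!-- <link rel="canonical" href="https://www.example.com/category/page/2/"> -->
```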
Is it a good idea to use the rel canonical tag to refer to the original source?
Sometimes we place our blog post on an external site as well. In this case the post is duplicated. Via the post we link to the original source, but is it also possible to use the rel canonical tag on the external site? For example: The original blog post is published on http://www.original.com/post. The same blog post is published on http://www.duplicate.com/post. In this case, is it wise to put a rel canonical on http://www.duplicate.com/post pointing back to the original? What do you think? Thanks for help! Robert
Technical SEO | Searchresult
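For reference, a cross-domain canonical of this sort is written in the head of the duplicate page, using the URLs from the question:

```html
<!-- Placed in the <head> of http://www.duplicate.com/post -->
<link rel="canonical" href="http://www.original.com/post" />
```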
Why am I seeing duplicate content errors for categories and tags on my WordPress blog?
When I look under "Crawl Diagnostics" I see I have 12 errors for duplicate content, and they are all from tags and categories. I am assuming that search engines are reading the content in the tags and categories as duplicate. Should I set my categories to "no-index"?
Technical SEO | brytewire
One H1 tag dead? Long live multiple H1 tags?
Good afternoon from 9 degrees C, mostly cloudy Wetherby, UK. I've been holding on to the mantra of one H1 tag per page, but a developer has challenged me on this by stating you can have multiple H1 tags on the condition that the page is HTML5 and each H1 tag is within its own section or article tag. So the question is: do I need to change my tune? Thanks in advance, David
Technical SEO | Nightwing
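For illustration, the pattern the developer describes relies on HTML5 sectioning elements, with each H1 scoped to its own section or article (headings and text here are made up):

```html
<article>
  <h1>First article title</h1>
  <p>Article body text.</p>
</article>
<section>
  <h1>Section title</h1>
  <p>Section body text.</p>
</section>
```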
Implementing Schema within Existing CSS tags
In implementing Schema on a site using CSS and containing existing SS tags, I want to be sure that we are (#1) using the tags effectively when used within a product detail template and (#2) not actually harming ourselves by telling Google that all products are named or described by the SS tag and not the actual product name or description (which obviously could be disastrous). An example of what we are looking at implementing is the following: Old: <ss:value source="$product.name"></ss:value> New: <ss:value source="$product.name"></ss:value> Old: <ss:value source="$product.description"></ss:value> New: <ss:value source="$product.description"></ss:value> Basically, is Schema at the point where the SS tag can be replaced (in the eyes of the search engines) with the actual text and not the tag itself?
Technical SEO | TechMama
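For comparison, plain schema.org microdata on a rendered product detail page (independent of the SS templating tags above, whose exact attributes appear to have been stripped from the question) looks roughly like this, with the visible text itself carrying the itemprop values; the names here are placeholders:

```html
<div itemscope itemtype="https://schema.org/Product">
  <span itemprop="name">Example Product Name</span>
  <p itemprop="description">Example product description text.</p>
</div>
```

Search engines read the rendered output, so what matters is the HTML the template produces, not the server-side tag syntax that produces it.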
Why do I get duplicate content errors just for tags I place on blog entries?
In the SEO Moz crawl diagnostics for my site, www.heartspm.com, I am getting over 100 duplicate content errors on links built from tags on blog entries. I do have the original base blog entry in my sitemap, not referencing the tags. Similarly, I am getting almost 200 duplicate meta description errors in Google Webmaster Tools associated with links automatically generated from tags on my blog. I can better understand getting these errors from my forum, since the forum entries are not in the sitemap, but the blog entries are there in the sitemap. I thought the tags were only there to help people search by category. I don't understand why every tag becomes its own link. I can see how this falsely creates the impression of a lot of duplicate data. As seen in GWT, pages with duplicate meta descriptions:
Technical SEO | GerryWeitz
Customer concerns about the use of home water by pest control companies.
/category/job-site-requirements
/tag/cost-of-water
/tag/irrigation-usage
/tag/save-water
/tag/standard-industry-practice
/tag/water-use
Pest control operator draws analogy between Children's Day and the state of the pest control industry
/tag/children-in-modern-world
/tag/children
/tag/childrens-day
/tag/conservation-medicine
/tag/ecowise-certified
/tag/estonia
/tag/extermination-service
/tag/exterminator
/tag/green-thumb
/tag/hearts-pest-management
/tag/higher-certification
/tag/higher-education
/tag/tartu
/tag/united-states