You may use more than one of any Hx tag (even H1). There has been some argument about risking penalties for using more than one H1, but with the way HTML5 sites are going, more people are starting to do it. Before this debate, I don't recall much conflict about using H2 tags more than once. I would just be careful about it and use them appropriately.
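For what it's worth, repeated headings have always been valid HTML. A minimal sketch (the content and element structure here are made up for illustration):

```html
<article>
  <h1>Guide to Coffee</h1>
  <h2>Grinding</h2>
  <p>...</p>
  <h2>Brewing</h2>
  <p>...</p>
</article>
```

Each `<h2>` marks a sibling subsection under the single `<h1>`, which is exactly the kind of appropriate use described above.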
Moz Q&A is closed.
After more than 13 years, and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content - many posts will still be possible to view - we have locked both new posts and new replies.

Best posts made by donford
-
RE: H2 Tags- Can you have more than 1 H2 tag
-
RE: Can white text over images hurt your SEO?
Hi Thomas,
The last I heard, Google / Bing / MSN / Yahoo have no automatic way to know if you are obfuscating text. The way sites are built now, layers on layers or divs inside divs, it would be pretty difficult to decipher all the code just to check whether there is hidden text. However, if a competitor catches you doing it, reports it, and the search engines then do a manual check, you're likely going to get dinged.
I haven't seen anything new on this subject in a year or more but looking at your site I don't think this is your problem. In fact our corporate site uses white text on an image on every single page and we have no issues.
-
RE: How does a search engine bot navigate past a .PDF link?
Hi Dana
I think your question has been dodged a tad. I was always led to understand that a .pdf, or any page that opens in a new tab and does not link back to the original site (a dangling page), is not a problem. The reason is that crawlers don't really care how a page is opened. Because the crawler forks at every link and crawls each new page from each fork, when it finds an orphan or dangling page it just stops - which is not an issue, since the crawler has already forked at every other link.
So the question is really how a search engine treats .pdfs, rather than how it treats an orphan page. Maybe somebody who works with crawlers can confirm or educate us both on how they work.
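The forking behaviour described above can be sketched as a simple breadth-first traversal. This is a hypothetical illustration of the idea, not how any real search engine bot is implemented:

```python
from collections import deque

def crawl(link_graph, start):
    """Breadth-first crawl over a pre-fetched link graph.

    link_graph maps each URL to the list of URLs it links to. A page
    with no outbound links (e.g. a PDF) is a dangling page - the
    crawler simply ends that branch; it does not get stuck, because
    it already forked at every other link it found.
    """
    seen = {start}
    queue = deque([start])
    order = []
    while queue:
        url = queue.popleft()
        order.append(url)
        for link in link_graph.get(url, []):
            if link not in seen:  # fork at every newly discovered link
                seen.add(link)
                queue.append(link)
    return order

# Tiny example: /guide.pdf links nowhere, yet every page is still found.
site = {
    "/": ["/about", "/guide.pdf"],
    "/about": ["/", "/guide.pdf"],
    "/guide.pdf": [],  # dangling page - this branch just stops
}
```

Running `crawl(site, "/")` visits all three pages, dangling PDF included.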
Don
-
RE: 804 HTTPS (SSL) error
Hello Happy,
Okay, so you have content being served as http on your https page. When you reference an image or script you need to make sure it is a relative reference or an https reference, otherwise you will get these types of warnings.
See Mozilla's documentation on mixed content for details.
Also, the SSL isn't misconfigured - it is missing. To configure one properly you need to contact your host and ask them to install an SSL cert (most hosts will not allow users to do this themselves). If you have not yet purchased an SSL cert you will need to do so. SSL certs also require dedicated IP addresses, which most hosts also charge for.
In summary, if you purchase a dedicated IP and an SSL cert, your problem should go away unless you specifically declare content as http.
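A quick way to find the offending references is to scan the page source for hard-coded `http://` URLs in `src` and `href` attributes. A minimal sketch using only the standard library (a real page may warrant a proper HTML parser):

```python
import re

def find_insecure_refs(html):
    """Return src/href values that hard-code http:// and would
    trigger mixed-content warnings when served on an https page."""
    pattern = re.compile(
        r"""(?:src|href)\s*=\s*["'](http://[^"']+)["']""",
        re.IGNORECASE,
    )
    return pattern.findall(html)

# Relative and https references are fine; the hard-coded http:// one is not.
page = (
    '<img src="http://example.com/logo.png">'
    '<script src="https://example.com/app.js"></script>'
    '<link href="/styles.css" rel="stylesheet">'
)
```

Here `find_insecure_refs(page)` flags only the `http://` image reference.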
Hope this helps,
Don
-
RE: 804 HTTPS (SSL) error
I just ran a crawl and did not see any 804s.
You may want to contact Moz directly to see if one of the Moz staff can help you further.
-
RE: Duplicate Content Issue: Mobile vs. Desktop View
Hi Dino,
Before I said too much I had to look at Visual Composer. I spent about 10 minutes there and didn't really see how the code turns out. Perhaps you'd like to post a link to the webpage, or just message me if you don't want it public. I'll be happy to review the source and offer a thumbs up or any suggestions I can.
Good luck,
Don
-
RE: 804 HTTPS (SSL) error
Yes, you need to work with Moz support to get the issue fixed.
-
RE: Block Domain in robots.txt
Hi Philipp,
I have not heard of Google going rogue like this before; however, I have seen it with other search engines (Baidu).
I would first verify that the robots.txt is configured correctly, and verify there are no links anywhere to the domain. The reason I mentioned this earlier was this official notification from Google: https://support.google.com/webmasters/answer/156449?rd=1
While Google won't crawl or index the content of pages blocked by robots.txt, we may still index the URLs if we find them on other pages on the web. As a result, the URL of the page and, potentially, other publicly available information such as anchor text in links to the site, or the title from the Open Directory Project (www.dmoz.org), can appear in Google search results.
My next thought would be: did Google start crawling the site before the robots.txt blocked them from doing so? That may have caused Google to start the indexing process, which is not instantaneous, so the new URLs appeared after the robots.txt went into effect. The solution is to add a noindex meta tag, or to put an explicit block on the server as I mentioned above.
If you are worried about duplicate content issues, you may at least be able to canonical the subdomain URLs to the correct URL.
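As a sketch of the two blocking mechanisms (the paths here are placeholders): robots.txt stops crawling but not indexing of URLs Google already knows about, while a noindex tag removes the page from results - but the page must remain crawlable, or Google never sees the tag.

```
# robots.txt on the subdomain - blocks crawling only;
# already-known URLs can still appear in results
User-agent: *
Disallow: /
```

```html
<!-- on each subdomain page - blocks indexing; for Google to obey it,
     the page must NOT also be disallowed in robots.txt -->
<meta name="robots" content="noindex">
```

So if the goal is de-indexing, the noindex route generally has to come first, with the robots.txt block added only after the URLs drop out.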
Hope that helps and good luck
-
RE: Duplicate Content Issue: Mobile vs. Desktop View
Hi Dino,
I don't see any issues. It is okay to use multiple H1 tags for reasons such as this; Google has confirmed multiple H1 tags are okay.
My example above was probably more alarming to you than I could have realized. My intent was to point out a simple case of how to use CSS for multiple device types. In your case, having different text is for the benefit of the user, which is exactly as it should be.
Good job,
Don
-
RE: Open site explorer is giving me strange redirect message.
Hello,
Sorry for not getting back to you sooner. Weekend and all..
Okay the problem is still there. You can check the header response codes yourself here:
http://tools.seobook.com/server-header-checker
The URL http://www.a-fotografy.co.uk/ 302 redirects to https://www.a-fotografy.co.uk/ which 301 redirects to https://a-fotografy.co.uk/
There are two possible problems I can think of: (1) the code to redirect http://www.a-fotografy.co.uk/ is still in the htaccess file, before the code I gave you; or (2) the host has a domain redirect in place that executes on the server before the htaccess is read.
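For reference, a single rule that 301s every http:// and www variant straight to the final https, non-www URL would avoid the chain entirely. This is a generic Apache mod_rewrite sketch, not necessarily the exact code from earlier in the thread:

```apacheconf
RewriteEngine On
# Redirect anything that is not already https://a-fotografy.co.uk
RewriteCond %{HTTPS} !=on [OR]
RewriteCond %{HTTP_HOST} !^a-fotografy\.co\.uk$ [NC]
RewriteRule ^(.*)$ https://a-fotografy.co.uk/$1 [R=301,L]
```

With this at the top of the htaccess file, http://www.a-fotografy.co.uk/ would go to https://a-fotografy.co.uk/ in one 301 hop instead of a 302 followed by a 301.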
For me to help you further please post the contents of your htaccess file and I'll see if there is something I can pick up on.
Don
-
RE: Duplicate Content Issue: Mobile vs. Desktop View
Hi Dino,
Is your code something (basic) like this?
<div class="desktop">I love lamp!</div>
<div class="mobile">I love lamp!</div>
Then you use a switch to determine which view to show?
If so, the correct way would be to use the switch to select which CSS to load instead. Thus you can use the same class, but it will show up differently based on the user's view:
<div class="content">I love lamp!</div>
Here is a nice article about switching CSS based on views.
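Along those lines, a minimal sketch of the CSS side (class name and breakpoint are illustrative): one stylesheet, with a media query switching the presentation per device width instead of duplicating the markup.

```css
/* desktop default */
.content {
  font-size: 18px;
}

/* mobile override - same markup, different presentation */
@media (max-width: 768px) {
  .content {
    font-size: 14px;
  }
}
```

Because both views share the same markup, there is no duplicate content for a crawler to see.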
Hope that helps,
Don
-
RE: Are pages not included in navigation given less "weight"
Great answer Dirk and I completely agree.
-
RE: Hyphens vs Underscores
Hi Logan,
I was faced with a similar question a couple of years ago when I started with my current company.
The short answer is no: do not change a URL that is currently using underscores to hyphens if it is well indexed.
If you're making a new page, then you should probably use hyphens instead of underscores.
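For new pages, generating slugs with hyphens from the start avoids the question entirely. A minimal sketch (the helper name is mine, not from any particular CMS); search engines treat hyphens as word separators, whereas underscores have historically joined words together:

```python
import re

def slugify(title):
    """Lowercase a page title and join words with hyphens,
    replacing underscores, punctuation, and spaces alike."""
    slug = title.lower()
    slug = re.sub(r"[^a-z0-9]+", "-", slug)  # any non-alphanumeric run -> hyphen
    return slug.strip("-")
```

For example, `slugify("Hyphens vs. Underscores!")` gives `hyphens-vs-underscores`.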
-
RE: ADA, WCAG, Section 508 Accessibility and hidden text
Wow, interesting question. I am with you - I would definitely worry about obfuscated text penalties (keyword stuffing) with that particular method. I have no experience with these guidelines, but I am interested in what others have to say about the matter.
My initial thought would be something like:
<img src="arrow.png" alt="Directions">
under the assumption that a screen reader would read the alt text, since users wouldn't see the image. And of course the image could be something completely simple, like an arrow or bullet point.
I will wait to see what others may say,
Good luck,
Don
-
RE: Is it reasonable to not give an SEO access to our CMS?
I kind of have the feeling that there is something missing in the story. This is one of the challenges that happen when there are multiple hands in the kitty.
I wouldn't really buy the SEO's excuse that what they do is secret. What they really mean is: we don't want to train another web company to do what we do.
As a web developer, I would understand why an SEO would want access; it could make things easier and faster. Having to submit through Company A to get Company C a change may not be exactly the service Company A purchased.
As an SEO, I understand why a web developer would be skittish about giving access to a company they had no hand in hiring. The web devs do a lot of hard work, some of which can actually be proprietary. From an SEO perspective, I would of course be willing to work within those constraints if need be.
In the end it becomes Company A's issue. They need to find a compromise between Company B and Company C, even if it takes extra money to get the SEO access; or they can decide which vendor they feel is the most valuable, fire the other, and find a replacement that is willing to work within the constraints they lay out.
My thoughts..