Can hidden backlinks ever be ok?
-
Hi all,
I'm very new to SEO and still learning a lot.
Is it considered a black hat tactic to wrap a link in a DIV tag, with display set to none (hidden div), and what can the repercussions be?
From what I've learnt so far, this is a very unethical thing to be doing, and the site hosting these links can end up being removed from the Google/Bing/etc. indexes completely. Is this true?
The site hosting these links is a group/parent site for a brand, and each hidden link points to one of the child sites (similar sites, but different companies in different areas).
Thanks in advance!
-
Hi Ryan,
Thanks for the quick feedback.
This clears things up for me a bit.
Thanks,
Stephen -
The separation between black hat and white hat tactics is generally a clear line. The simple question is, does the code exist for the benefit of your site's visitors or solely to manipulate search engines?
DIV tags are used to group pieces of a page so that CSS rules can be applied to them. If you have a link contained in a DIV with its display property set to none, that link will never be seen by the site's visitors. The link clearly exists solely to manipulate search engine results, and using it is therefore a black hat tactic.
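For illustration, here is a minimal, hypothetical sketch of the pattern being asked about; the URLs and anchor text are placeholders, not details from the original question:

<!-- Hidden-link pattern (black hat): crawlers parse these links,
     but a human visitor can never see them. -->
<div style="display: none;">
  <a href="https://child-site-one.example.com">Child company A</a>
  <a href="https://child-site-two.example.com">Child company B</a>
</div>

<!-- A visible alternative that serves visitors: an ordinary list of
     links to the related companies, for example in the site footer. -->
<ul>
  <li><a href="https://child-site-one.example.com">Child company A</a></li>
  <li><a href="https://child-site-two.example.com">Child company B</a></li>
</ul>

The mechanism used for hiding makes no difference: an inline style, a CSS class, visibility: hidden, or text coloured to match the background all fail the same test, because a visitor can never see the link.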
When Google and other search engines discover black hat tactics being used on a site, they will take action. The action can be relatively minor, such as ignoring the link. It can be mid-range, such as removing the page containing the link from the index. At the extreme end, they can remove the entire site from the index.
Each search engine has its own internal guidelines on how to handle these issues. Some issues are handled automatically via algorithms, while others are handled by manual review. There are no published standards on exactly which punishment will be handed out for a given violation. It is simply best to completely avoid anything black hat.
Related Questions
-
Hidden H1 Tag on Image
Hi, on the page I'm working on, I encountered an H1 tag on an image, rather than in text form. Do you think it's an issue when it comes to SEO? What do you suggest I should do if there is an issue? Keen to hear from you!
Technical SEO | nerdieb
-
Tools/Software that can crawl all image URLs in a site
Excluding Screaming Frog, what other tools/software can be used to crawl all image URLs on a site? Screaming Frog doesn't crawl image URLs that are not under the site domain. Example of an image URL outside the client site: http://cdn.shopify.com/images/this-is-just-a-sample.png. If the client is http://www.example.com, Screaming Frog only crawls images under it, like http://www.example.com/images/this-is-just-a-sample.png.
Technical SEO | jayoliverwright
-
Can too many pages hurt crawling and ranking?
Hi, I work for the local yellow pages in Belgium. Over the last few months we introduced a successful technique to boost SEO traffic: we created over 150k new pages, all targeting specific keywords and all containing unique content, with a site architecture that enables Google to find these pages through crawling, XML sitemaps, .... All signs (traffic, indexation of XML sitemaps, rankings, ...) are positive. So far so good. We are able to quickly build more unique pages, and I wonder how Google will react to this type of "large scale operation": can it hurt crawling and ranking if Google notices big volumes of (unique) content? Please advise.
Technical SEO | TruvoDirectories
-
Can you 301 redirect a page to an already existing/old page?
If you delete a page (say a sub-department/category page on an ecommerce store), should you 301 redirect its URL to the nearest equivalent page still on the site, or just delete it and forget about it? Generally, should you try to 301 redirect any old pages you're deleting if you can find a suitable page with similar content to redirect to? Won't Google consider it weird if you say a page has moved permanently to such and such an address if that page/address existed before? I presume it's fine, since, say, in the scenario of consolidating departments on your store, you want to redirect the department page you're going to delete to the existing page/department you are consolidating the old department's products into?
Technical SEO | Dan-Lawrence
-
Oh no, Googlebot cannot access my robots.txt file
I just received an error message from Google Webmaster Tools. I wonder if it has something to do with the Yoast plugin. Could somebody help me with troubleshooting this? Here's the original message:
Over the last 24 hours, Googlebot encountered 189 errors while attempting to access your robots.txt. To ensure that we didn't crawl any pages listed in that file, we postponed our crawl. Your site's overall robots.txt error rate is 100.0%.
Recommended action
If the site error rate is 100%:
- Using a web browser, attempt to access http://www.soobumimphotography.com//robots.txt. If you are able to access it from your browser, then your site may be configured to deny access to googlebot. Check the configuration of your firewall and site to ensure that you are not denying access to googlebot.
- If your robots.txt is a static page, verify that your web service has proper permissions to access the file.
- If your robots.txt is dynamically generated, verify that the scripts that generate the robots.txt are properly configured and have permission to run. Check the logs for your website to see if your scripts are failing, and if so attempt to diagnose the cause of the failure.
If the site error rate is less than 100%:
- Using Webmaster Tools, find a day with a high error rate and examine the logs for your web server for that day. Look for errors accessing robots.txt in the logs for that day and fix the causes of those errors.
- The most likely explanation is that your site is overloaded. Contact your hosting provider and discuss reconfiguring your web server or adding more resources to your website.
After you think you've fixed the problem, use Fetch as Google to fetch http://www.soobumimphotography.com//robots.txt to verify that Googlebot can properly access your site.
Technical SEO | BistosAmerica
-
Thoughts about stub pages - 200 & noindex ok, or 404?
Hi, With large database/template-driven websites it is often possible to end up with a lot of pages that have no content on them. What are the current thoughts regarding these pages with no content? Options:
- Return a 200 header code with a noindex meta tag
- Return a 404 page & header code
- Something else?
Thanks
Technical SEO | slingshot
-
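As an aside on the first option above, here is a minimal, hypothetical sketch of a stub page that returns an HTTP 200 status while asking search engines not to index it; the title and body content are placeholders:

<!-- The server responds with 200 OK; the robots meta tag tells
     crawlers not to add this page to their index. -->
<!DOCTYPE html>
<html>
<head>
  <meta name="robots" content="noindex">
  <title>No results found</title>
</head>
<body>
  <p>There is nothing to show here yet.</p>
</body>
</html>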
Is there such a thing as a good text/code ratio? Can it affect SERPs?
As it says on the tin: is there such a thing as a good text/code ratio? And can it affect SERPs? I'm currently looking at a 20% ratio, whereas some competitors are closer to 40%+. Best regards,
Sam.
Technical SEO | ARMofficial
-
How can I block archive pages in Blogger when I am not using the classic/default template?
Hi, I am trying to block all the archive pages of my blog, as Google is indexing them. This could lead to a duplicate content issue. I am not using the default Blogger theme or the classic theme, and therefore I cannot use the code for that template. Please suggest how I can instruct Google not to index the archive pages of my blog? Looking for a quick response.
Technical SEO | SoftzSolutions
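For illustration of the question above: one common approach for Blogger layout (non-classic) templates is a conditional robots meta tag in the template's head section. This is a hedged sketch that assumes Blogger's documented b:if conditional and data:blog.pageType variable; verify against current Blogger documentation before relying on it:

<!-- Placed inside the <head> of a Blogger layout template. On archive
     pages only, this emits a noindex robots meta tag so search engines
     skip those pages; every other page type is unaffected. -->
<b:if cond='data:blog.pageType == "archive"'>
  <meta content='noindex,follow' name='robots'/>
</b:if>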