If I use links in a div tag instead of an "a href" tag, can Google read links inside the div tag?
-
Hi All,
Need a suggestion on this. For buttons, I am using links in a div tag instead of an "a href" tag. Do you know whether Google can read links inside a "div" tag? Do they pass link juice?
It would be great if you could provide a reference if possible.
-
Thanks a lot, Alick300, for your kind response and reference.
-
Hi,
According to @JohnMu, Google might spot the URLs mentioned, but it doesn't treat them as links and doesn't forward any signals (anchor text, PageRank, etc.). See the post below for details.
http://www.thesempost.com/google-googlebot-processes-javascript-links-site/
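To illustrate the distinction, here is a minimal sketch (the URL and class name are made up for illustration). A real anchor is a crawlable link that passes anchor text and PageRank; a div styled as a button with a JavaScript handler is, at best, a URL mention that Google may discover but does not treat as a link:

```html
<!-- A real link: crawlable, passes anchor text and PageRank -->
<a href="https://example.com/pricing">See pricing</a>

<!-- A div "button": Google may spot the URL in the script,
     but does not treat it as a link or forward any signals -->
<div class="btn" onclick="window.location.href='https://example.com/pricing'">
  See pricing
</div>

<!-- A common compromise: keep the anchor and style it as a button -->
<a class="btn" href="https://example.com/pricing">See pricing</a>
```

The third form gives you the button appearance via CSS while keeping a crawlable "a href" link underneath.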
Thanks
Related Questions
-
Duplicate title tags due to lightbox use
I am looking at a site and am pulling up duplicate title tags because of their lightbox use. They have a page: http://www.website.com/page and then a duplicate of that page: http://www.website.com/page?width=500&height=600, on a huge number of pages (the site uses Drupal). What would be the best / cleanest solution?
Intermediate & Advanced SEO | McTaggart
-
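One common approach to the ?width/?height duplicates described above is a canonical tag on each lightbox variant pointing back at the clean URL. A sketch, reusing the example URLs from the question (whether it is the cleanest fix depends on the Drupal setup):

```html
<!-- Placed in the <head> of http://www.website.com/page?width=500&height=600 -->
<link rel="canonical" href="http://www.website.com/page" />
```

This tells search engines to consolidate indexing signals onto the clean URL rather than treating each query-string variant as a separate page.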
Can anyone see any issues with the canonical tags on this web site?
The main domain is: http://www.eumom.ie/ And these would be some of the core pages: http://www.eumom.ie/pregnancy/ http://www.eumom.ie/getting-pregnant/ Any help from the Moz community is much appreciated!
Intermediate & Advanced SEO | IcanAgency
-
Google lost 9,000 inbound links
I logged into GWT today. It typically reports that our page has about 9,000 inbound links, but today it said we have 225. There are no warnings or messages about any manual action. In fact, this site has never received any penalties or warnings. Since November I have used the disavow tool to remove porn links that a competitor has pointed at my site. Typically I go through Google's "new" links list and add any of the obviously bad links to the disavow list. That effort seemed to be going OK, and my site does not seem to be penalized for this negative SEO. The 225 links that remain are scattered back through 2010. This site has been online since 2005 and has accumulated lots of links over the years. Some great, some not so great. Queries are down about 1,500 per day since last week, about a 15% drop. There is a slight drop in some keyword rankings, but nothing huge, and most of the SERPs I track do not seem to be impacted. The PR is steady at 4, where it's been for years. So... what do you think I should do? Just watch it for a week and see if the links come back? Try removing the disavow list in case it somehow messed things up (it looks OK on the surface to me)? Has this ever happened to anyone else?
Intermediate & Advanced SEO | DarrenX
-
Google+ Pages on Google SERP
Do you think that a Google+ Page (not profile) could appear on the Google SERP as a Rich Snippet Author? Thanks
Intermediate & Advanced SEO | overalia
-
Meta description tag: how should it be used and what should it describe?
Hi, when using the meta description tag, how is it best used? What should it describe? Should it describe the topic or the services offered, i.e. should it be a sales message for the services? For example, on a page that promotes the benefits of acupuncture whilst pregnant, should it describe the page content or the service provided? I.e. "acupuncture for pregnancy in Chester" or "acupuncture can benefit pregnancy because..."? Thanks
Intermediate & Advanced SEO | Bristolweb
-
To "Rel canon" or not to "Rel canon", that is the question
Looking for some input on an SEO situation that I'm struggling with. I guess you could say it's a usability vs. Google situation. The situation is as follows: on a specific shop (let's say it's selling t-shirts), the products are organized so that each t-shirt has a master and x number of variants (a color). We have a product listing, and in this listing all the different colors (variants) are shown. When you click one of the t-shirts (e.g. blue) you get redirected to the product master, where some code on the page tells the master that it should change the color selectors to the blue color. The page gets this information from a query string in the URL. Now I could let Google index each URL for each color and sort it out that way, except for the fact that the text doesn't change at all. The only thing that changes is the product image, and that is changed with Ajax in such a way that Google, most likely, won't notice, ergo producing "duplicate content" problems. OK! So I could sort this problem with a "rel canon", but then we are in a situation where the only thing that tells Google we are talking about a blue t-shirt is the link to the master from the product listing. We end up in a situation where the master is the only one getting indexed. Not a problem, except when people come from Google directly to the product: I have no way of telling what color the customer is looking for and hence won't know what image to serve her. Now I could tell my client that they have to write a unique text for each variant, but with hundreds of thousands of variant combinations this is not realistic or a really good solution. I kinda need a new idea; any input, idea or brainwave would be very welcome. 🙂
Intermediate & Advanced SEO | ReneReinholdt
-
Maximum of 100 links on a page vs rel="nofollow"
All, I read within the SEOmoz blog that search engines consider 100 links on a page to be plenty, and we should try (where possible) to stay within the 100 limit. My question is: when a rel="nofollow" attribute is given to a link, does that link still count towards your maximum 100? Many thanks, Guy
Intermediate & Advanced SEO | Horizon
-
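For reference, a nofollowed link as discussed above looks like this (a minimal sketch; example.com is a placeholder URL):

```html
<!-- rel="nofollow" asks search engines not to pass
     ranking signals through this link -->
<a href="https://example.com/" rel="nofollow">Example</a>
```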
Why is my site "STILL" violating the Google quality guidelines?
Hello, I had a site with two topics: Fashion & Technology. Due to the Panda update I decided to change some things, and one of those things was the separation of these two topics. So, on June 21, I redirected (301) all the Fashion pages to a new domain. The new domain performed well the first three days, but the rankings dropped later. Now the site doesn't even rank for its own name. So, I thought the website was penalized for some reason, and I sent a reconsideration request to Google. In fact, five days later, Google confirmed that my site is "still violating the quality guidelines". I don't understand. My original site was never penalized and the content is the same. And now, when it is installed on the new domain, it becomes penalized just a few days later? Is this penalization only a sandbox for the new domain? Or will it last just until the old URLs disappear from the index (due to the 301 redirect)? Maybe Google thinks my new site is duplicating my old site? Or is it just a temporary precaution with new domains after a redirection, in order to deter spammers? Maybe this is not a real penalization and I only need a little patience? Or do you think my site is really violating the quality guidelines? (The domain is http://www.newclothing.co/) The original domain where the fashion section was installed before is http://www.myddnetwork.com/ (as you can see, it is now a tech blog without fashion sections). The 301 redirects are working well. One example of a redirected URL: http://www.myddnetwork.com/clothing-shoes-accessories/ (this is the homepage, but each page was redirected to its corresponding URL on the new domain). I appreciate any advice. Basically my fashion pages have dropped totally. Both the new and old URLs are not ranking. 😞
Intermediate & Advanced SEO | omarinho