We just can't figure out the right anchor text to use
-
We have been trying everything we can with anchor text. We have read here that we should use more naturalistic language, but our competitors who rank above us in Google search results don't do any of this; they use only their names or a single term like "austin web design". Is what we are doing hurting our listings? We don't have any black-hat links. Here's what we are doing now. We are going crazy trying to figure this out, and we are afraid to change anything for fear it will damage our position.
Bob
| Anchor text | | |
| --- | ---: | ---: |
| pallasart web design | 31 | 1,730 |
| website by pallasart a texas web design company in austin | 15 | 1,526 |
| website by the austin design company pallasart | 14 | 1,525 |
| created by pallasart a web design company in austin texas | 13 | 1,528 |
| created by an austin web design company pallasart | 12 | 1,499 |
| website by pallasart web design an austin web design company | 12 | 1,389 |
| website by pallasart an austin web design company | 11 | 1,463 |
| pallasart austin web design | 9 | 2,717 |
| website created by pallasart a web design company in austin texas | 9 | 1,369 |
| website by pallasart | 8 | 910 |
| austin web design | 5 | 63 |
| pallasart website design austin | - | |
Thank you both for helping us. We talked about what you wrote this morning and are making changes based on this advice.
-
What more can be said - nailed by EGOL
-Andy
-
website by pallasart a texas web design company in austin
I would keep it really, really short. Get the name of your company in there and leave it at that. Why?
Pallasart Web Design is easy to read.
Pallasart Web Design is more memorable.
Pallasart Web Design, used on all of your designs, is a consistent branding message (I hope that is your domain name)
Pallasart Web Design is your brand name, and, in my opinion, Google doesn't like keyword-rich anchor text.
People are going to click through based upon the quality of your work rather than where you are located. (In ten years of running many sites, everyone I have hired has been far from me, because I hire based upon whose work I respect.)
People who click through this type of link do so based upon how much they think you know about Google, and I personally think Google frowns on long, keyword-rich anchors in an attribution link.
People are also going to click through based upon how good you are at creating links that elicit clicks, and I think a short anchor, rather than a keyword-rich one, is more effective at eliciting them.
A lot of people really dislike these types of links (search here for the heated discussions about them), and they would allow "Pallasart Web Design" long before they would allow the long messages you provided as samples. Some will not want any attribution link at all.
Some people are going to check your code to see whether the link is nofollowed, and they will be more likely to allow it if it is.
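To make the last two points concrete, here is a minimal sketch of what a short, branded, nofollowed attribution link could look like in a site footer. The URL is an assumption for illustration only; use your actual domain.

```html
<!-- Hypothetical footer attribution: short brand-name anchor, nofollowed.
     The href is illustrative, not the company's confirmed domain. -->
<p class="attribution">
  Website by
  <a href="https://www.pallasart.com/" rel="nofollow">Pallasart Web Design</a>
</p>
```

The `rel="nofollow"` attribute tells search engines not to pass link equity through the link, which is exactly what makes some site owners more comfortable keeping it in place.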