Dex Landing Pages!
-
Hello Moz-ers!
I have a question for the community that I'd like some feedback on. Here's the situation:
- My client owns a local business, which has a website that we built for them last year.
- Rankings and traffic have been great, and they're doing well in a competitive market.
- This client also works with Dex Media, who manages some of the marketing for this business.
- As part of this effort, Dex has purchased a few variations of the branded domain name and has placed landing pages at these URLs with basic information, a phone number, etc. They seem like standard pages for these kinds of sites.
- Each of these sites has a different phone number, used for tracking leads from that particular page.
To me, these pages seem totally unnecessary, and they could well be pulling visitors away from the main website, which details all of the client's services. Four similar domain names, each with a different phone number for the same business, could also easily confuse or frustrate potential visitors.
I told the client that I thought this might be the case, and that at the very least they should ask Dex about acquiring ownership of those extra domains containing their brand, as well as whether Dex could simply redirect those URLs to the main domain.
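For what it's worth, the consolidation being recommended here usually means a permanent (301) redirect from each extra domain to the main one, so any link equity the extra domains have collected passes to the main site. A minimal sketch as a WSGI app, with hypothetical placeholder domain names (not the client's actual domains):

```python
# Sketch only: 301-redirect requests arriving on the extra branded domains
# to the same path on the main site. Domain names are made-up placeholders.
MAIN_DOMAIN = "www.example-main.com"
EXTRA_DOMAINS = {"examplebrand.net", "example-brand.com", "examplebrand.org"}

def redirect_app(environ, start_response):
    """WSGI app: 301 any request on an extra domain to the main domain,
    preserving the path so deep links keep working."""
    host = environ.get("HTTP_HOST", "").lower().split(":")[0]
    path = environ.get("PATH_INFO", "/")
    if host in EXTRA_DOMAINS:
        start_response("301 Moved Permanently",
                       [("Location", "https://" + MAIN_DOMAIN + path)])
        return [b""]
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"main site content"]
```

In practice this would more likely be a one-line rule in the registrar's control panel or the web server config; the point is just that the redirect should be a 301, not a 302 or a framed/cloaked forward.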
They responded by saying that Google doesn't care about any of this, citing this article:
https://support.google.com/adwordspolicy/answer/2643759?hl=en#
To me, that support link doesn't address the issue at hand; it focuses mainly on proxy URLs, giving the example proxy.example.com.
Can anyone weigh in here with an opinion? Is there any reason to think these extra domains would not negatively impact the client's traffic and rankings for the main domain?
Thanks!
Tyler
-
Hey Colin,
Thanks for your reply, glad to know I'm not totally alone on this. The client has been pretty frustrated with Dex in general. Dex set up all of their PPC ads to link to these extraneous domains instead of the one actually owned by my client, who wasn't even aware of those extra sites until she found one by accident.
Dex and similar companies seem desperate, saying pretty much anything to retain clients.
Thanks!
Tyler
-
Hey Tyler, I deal with this same type of thing quite often, and I'm with you: I wish companies like Dex would leave my clients alone. IMO you are absolutely right, and I always recommend a single domain unless there's a darn good reason to do otherwise.
It will hurt traffic and rankings for the main domain, or in the best-case scenario just be absolutely pointless. Your client is a local business: which site will they link to from all their listings, social profiles, and all that? Say someone writes an article about them on a local blog; which site should it link to? Multiple domains for this type of business will either dilute your SEO efforts or be a waste of resources, not to mention confuse visitors and instill a lack of trust, like you were saying.
A website is not inherently an owned asset; you have to own the domain name, the content, etc. for it to actually be valuable to a business. So you're definitely advising them correctly.
And yeah, that link they sent you has nothing to do with organic search.
Related Questions
-
My site got hacked and now I have thousands of 404 pages and backlinks. Should I transfer my site to a new domain name and start again?
My site was hacked, and thousands of pages that should not exist were created, with thousands of backlinks pointing at them. Now I have those same pages and backlinks redirecting to 404 pages. Is this why my site crashed out of Google, and why my SEO fixes since then have made no progress on the problem?
Industry News | KeithWarbyUK
Backlink query: unranked pages of high-ranking sites
Hi, I was just wondering if someone with more knowledge than myself could answer this question for me. I have a site currently sitting on page 2 of Google. On-site optimisation is done; however, I am struggling to get backlinks from high-ranking pages. I am new to SEO, so I need a hand. My understanding of backlinks is that the higher the PR of the site that links to your 'money' site, the better that link is, and that these links are very hard to come by (something that I am finding). Many times I have found sites that have a high rank and offer a free listing, only for me to fill in my details and get listed on a sub-page that has no ranking whatsoever. So my question is: are these kinds of links worth the effort? Do they actually have any effect on rankings? And generally, would anyone have any tips on the best sites to get links from? Thanks
Industry News | Chstphrjohn
My recent drop from the first page
Hello, I have a website design operation in Akron, Ohio. I had been ranking on the first page for the last year or so, but recently (within the last 5-6 weeks) I have fallen back to page 2 on Google. I changed up the content a little bit on my home page, adding more references to the "Web Design Akron Ohio" keyword. I have looked at the sites ahead of me and noticed that most of the highest-ranking sites are using meta keywords, which Moz suggests not to. I added meta keywords about 10 days ago just to test whether there would be any ranking change, and so far, no change at all. Would someone be kind enough to look at my site and throw out some suggestions for what I might do to get back to the first page? I'm trying to rank for "web design akron ohio" and my URL is http://www.uswebproducts.com. Any help would be appreciated.
Industry News | Scott-Jones
Effect of changes on a content page on search ranking
Hi, I have a question related to my ranking on Google Search. On the content pages of my website, there is a section where the content keeps changing: whenever a visitor enters new information, old information is removed from the page, so the content page is dynamic. Does this make it difficult for us to rank for a particular keyword, or overall?
Industry News | adiez1234
If I have a Google+ Business page, do I need a Google Places page as well?
It seems like the two are redundant. Any official word on this? I'm fairly OCD about things being tidy, and I don't want to split my reviews, shares, etc. between two profiles. Are they not the same thing? I searched for my company, and both my Plus business page and my Places page came up. I attached a screenshot of the situation: placesvplus.png
Industry News | jonnyholt
Google changes up the search results page
Hi guys, as you know, Google has made changes to the search results page. I have two points to discuss here: 1. Are we going to see more ads in the left sidebar in the future? 2. I think it will also affect the CTR of the top three ads in the SERP. Waiting for your opinions on it. Reference: http://www.webpronews.com/google-changes-up-the-search-results-page-2012-11
Industry News | SanketPatel
Google: 7 results on the first page now (good or bad)?
Hi, what are your thoughts on only 7 results on the first page of a search? It looks like it's only in play for brands at the moment, with increased sitelinks for the first result. I actually really like it; it looks a lot better, and it's easier to take in the overall information on the page. I hope they roll this out to more generic terms as well. It should mean a higher CTR for those who have 'made it' to the first page. I suppose Google will pick up on a higher CTR on the ads as well.
Industry News | activitysuper
What is the best method for getting pure JavaScript/AJAX pages indexed by Google for SEO?
I am in the process of researching this further and wanted to share some of what I have found below. Anyone who can confirm or deny these assumptions or add some insight would be appreciated.
Option 1:
If you're starting from scratch, a good approach is to build your site's structure and navigation using only HTML. Then, once you have the site's pages, links, and content in place, you can spice up the appearance and interface with AJAX. Googlebot will be happy looking at the HTML, while users with modern browsers can enjoy your AJAX bonuses. You can use Hijax to help AJAX and HTML links coexist, and you can use meta nofollow tags, etc. to prevent crawlers from accessing the JavaScript versions of pages. Currently, webmasters create a "parallel universe" of content: users of JavaScript-enabled browsers see content that is created dynamically, whereas users of non-JavaScript-enabled browsers, as well as crawlers, see content that is static and created offline. In current practice, "progressive enhancement" in the form of Hijax links is often used.
Option 2:
In order to make your AJAX application crawlable, your site needs to abide by a new agreement. This agreement rests on the following: the site adopts the AJAX crawling scheme, and for each URL that has dynamically produced content, your server provides an HTML snapshot, which is the content a user (with a browser) sees. Often, such URLs will be AJAX URLs, that is, URLs containing a hash fragment, for example www.example.com/index.html#key=value, where #key=value is the hash fragment. An HTML snapshot is all the content that appears on the page after the JavaScript has been executed. The search engine indexes the HTML snapshot and serves your original AJAX URLs in search results. In order to make this work, the application must use a specific syntax in the AJAX URLs (let's call them "pretty URLs"; you'll see why in the following sections). The search engine crawler will temporarily modify these "pretty URLs" into "ugly URLs" and request those from your server. This request of an "ugly URL" indicates to the server that it should not return the regular web page it would give to a browser, but instead an HTML snapshot. When the crawler has obtained the content for the modified ugly URL, it indexes that content, then displays the original pretty URL in the search results. In other words, end users will always see the pretty URL containing a hash fragment. The following diagram summarizes the agreement.
See more in the Getting Started Guide. Make sure you avoid this:
http://www.google.com/support/webmasters/bin/answer.py?answer=66355
Here are a few example pages that are mostly JavaScript/AJAX:
http://catchfree.com/listen-to-music#&tab=top-free-apps-tab
https://www.pivotaltracker.com/public_projects
This is what the spiders see:
view-source:http://catchfree.com/listen-to-music#&tab=top-free-apps-tab
These are the best resources I have found regarding Google and JavaScript:
http://code.google.com/web/ajaxcrawling/ (step-by-step instructions)
http://www.google.com/support/webmasters/bin/answer.py?answer=81766
http://www.seomoz.org/blog/how-to-allow-google-to-crawl-ajax-content
Some additional resources:
http://googlewebmastercentral.blogspot.com/2009/10/proposal-for-making-ajax-crawlable.html
http://www.google.com/support/webmasters/bin/answer.py?answer=357690
Industry News | webbroi
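The "pretty URL" to "ugly URL" rewriting that Option 2 describes can be sketched in a few lines. Under Google's AJAX crawling scheme (which Google has since deprecated), a crawlable AJAX URL uses the "#!" hashbang syntax, and the crawler re-requests the page with the fragment moved into an _escaped_fragment_ query parameter; the server is then expected to return the HTML snapshot. A minimal sketch of just the URL mapping:

```python
import urllib.parse

def pretty_to_ugly(url):
    """Map an AJAX 'pretty URL' (hashbang form) to the 'ugly URL' the
    crawler requests under the AJAX crawling scheme. URLs without '#!'
    are returned unchanged (they are not crawlable AJAX URLs)."""
    base, sep, fragment = url.partition("#!")
    if not sep:
        return url
    # Append to an existing query string if the base URL already has one.
    query_sep = "&" if "?" in base else "?"
    return base + query_sep + "_escaped_fragment_=" + \
        urllib.parse.quote(fragment, safe="=&")

print(pretty_to_ugly("http://www.example.com/index.html#!key=value"))
# http://www.example.com/index.html?_escaped_fragment_=key=value
```

Seeing an _escaped_fragment_ parameter in the request is the server's cue to return the pre-rendered HTML snapshot instead of the JavaScript-driven page.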