Posts made by Matthew_Edgar
-
Rel Sponsored on Internal Links
Hi all. Should you use rel="sponsored" on internal links? Here is the scenario: a company accepts money from one of their partners to place a prominent link on their home page. That link goes to an internal page on the company's website that contains information about that partner's service. If this were an external link that the partner was paying for, then you would obviously use rel="sponsored", but since this is a link that goes from awebsite.com to awebsite.com/some-page/, it seems odd to qualify that link in this way. Does this change if the link contains a "sponsored" label in the text (not in the rel qualifier)? Does this change if this link looks more like an ad (e.g. a banner image) vs. regular text (e.g. a link in a paragraph)? Thanks for any and all guidance or examples you can share!
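For reference, here is roughly what the two options look like in the markup (the URL and anchor text are just placeholders):
<!-- Plain internal link -->
<a href="https://awebsite.com/some-page/">Partner Service Info</a>
<!-- The same link flagged as a paid placement -->
<a href="https://awebsite.com/some-page/" rel="sponsored">Partner Service Info</a>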
-
RE: Does redirecting a duplicate page NOT in Google's index pass link juice? (External links not showing in search console)
@lewald1 Hey. A few questions for you...
Does the page Google is saying is a duplicate have a canonical pointing to the other page on your website? If there is a canonical in place, that may explain why you aren't seeing the duplicate page in the links report. Google groups those by the canonical URL. If that is the case, though, you should see backlinks pointing to the duplicate page under the other page in the links report.
As well, how well is the other page on the site performing? Is it performing better than you'd expect based on that page's specific backlinks? If so, that might be a sign that Google is already collapsing the page they've said is a duplicate into that other page. If that is what is happening, the redirect wouldn't be a problem to add but you may not see much impact by adding that redirect.
As for the redirect, I think it could make sense to add, and generally, redirects are the best solution for duplication. A redirect is a stronger signal than the canonical and, in this case, the redirect would reinforce the signals Google is already seeing about the duplication. That said, is there any reason you can't redirect this page into the one Google has selected as the canonical? Do you need that page for something else on your site (like as a landing page for ads)? If not, then adding the redirect would be a good way to resolve the duplication.
I hope that helps!
-
RE: Sitemap.xml strategy for site with thousands of pages
@jerrico1 Only including some pages in the sitemap won't hurt your SEO performance at all. I've done this on a number of sites for exactly the same reasons you are facing.
The XML sitemap simply gives Google one more way to find your pages. Ideally, you could use it to give Google a way to find all of your pages but you want to at least use it for the pages you want to be sure Google finds. However, there is no penalty if the page isn't in the sitemap.
That said - you may want to check if you need the XML sitemap at all as a point of discovery. If you have lots of links (internal or external) to the pages on your website, then odds are good that Google is already finding those pages. The XML sitemap wouldn't hurt to have, but if there are already links to these pages, you likely don't have a big problem to solve here.
The best way to check this is within your log file - pull a unique list of all the URLs that Google has crawled over the last few weeks. You may not be able to open up your log files (sometimes you can't do this easily on large sites if you aren't using an enterprise log analyzer). If that is the case, then you could check to see how many of your pages are Google organic landing pages in your analytics tool--if the page is getting traffic from Google, then Google clearly found the page.
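If you can open the raw log file, a short script can pull that unique list for you. Here is a rough sketch in PHP (the log path and combined log format are assumptions - adjust for your server):
<?php
// Rough sketch: list the unique URLs Googlebot has requested, based on a
// combined-format access log. The path and format are assumptions.
$handle = fopen('/var/log/apache2/access.log', 'r');
if ($handle === false) {
    die('Could not open the log file.');
}
$urls = array();
while (($line = fgets($handle)) !== false) {
    if (stripos($line, 'Googlebot') === false) {
        continue; // not a Googlebot request
    }
    // The request portion looks like: "GET /some-page/ HTTP/1.1"
    if (preg_match('/"(?:GET|HEAD) ([^ ]+) HTTP/', $line, $match)) {
        $urls[$match[1]] = true; // keying the array keeps the list unique
    }
}
fclose($handle);
foreach (array_keys($urls) as $url) {
    echo $url . "\n";
}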
Hope that helps!
-
Hiding ad code from bots
Hi. I have a client who is about to deploy ads on their site. To avoid bots clicking on those ads and skewing data, the company would like to prevent any bots from seeing any ads and, of course, that includes Googlebot. This seems like it could be cloaking and I'd rather not have a different version of the sites for bots. However, knowing that this will likely happen, I'm wondering how big of a problem it could be if they do this. This change isn't done to manipulate Googlebot's understanding of the page (ads don't affect rankings, etc.) and it will only be a very minimal impact on the page overall.
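For context, what they have in mind is essentially a user-agent check along these lines (a rough sketch - the bot list and the ad-slot include are made up):
<?php
// Rough sketch of the idea: only render the ad markup for visitors whose
// user agent does not look like a known bot.
function looks_like_bot() {
    $agent = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';
    foreach (array('Googlebot', 'Bingbot', 'Slurp', 'DuckDuckBot') as $bot) {
        if (stripos($agent, $bot) !== false) {
            return true;
        }
    }
    return false;
}
if (!looks_like_bot()) {
    include 'ad-slot.php'; // only human visitors get the ad code
}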
So, if they go down this road and hide ads from bots, I'm trying to determine how big of a risk this could be. I found some old articles discussing this with some suggesting it was a problem and others saying it might be okay in some cases (links below). But I couldn't find any recent articles about this. Wondering if anybody has seen anything new or has a new perspective to share on this issue? Is it a problem if all bots (including Googlebot) are unable to see ads?
https://moz.com/blog/white-hat-cloaking-it-exists-its-permitted-its-useful
https://www.webmasterworld.com/google/4535445.htm
https://www.youtube.com/watch?v=wBO-1ETf_dY
-
RE: Hreflang on non 1:1 websites
Thanks for the reply! Yep, it is the same company (though each country's website is managed by a different group). Good idea on just doing the home page for the brand terms. Could maybe extend that logic for top-level pages too - target similar topics even if the content is a bit different per site.
-
Hreflang on non 1:1 websites
Hi. I have a client with international websites targeting several different countries. Currently, the US (.com) website outranks the country-specific domain when conducting a search within that country (i.e. the US site outranks the UK website in the UK). This sounds like a classic case for hreflang. However, the websites are largely not 1:1. They offer different content with a different design and a different URL structure. Each country is on a country-specific domain (.com, .co.uk, .com.au, etc.). As well, the country-specific domains have lower domain authority than the US/.com website - fewer links, lower quality content, poorer UX, etc.
Would hreflang still help in this scenario if we were to map it the closest possible matching page? Do the websites not sharing content 1:1 add any risks? The client is worried the US/.com website will lose ranking in the country but the country-specific domain won't gain that ranking.
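For example, mapping the closest matching pages would mean annotations roughly like this on each version (the URLs here are made up):
<link rel="alternate" hreflang="en-us" href="https://www.example.com/services/" />
<link rel="alternate" hreflang="en-gb" href="https://www.example.co.uk/what-we-do/" />
<link rel="alternate" hreflang="en-au" href="https://www.example.com.au/our-services/" />
<link rel="alternate" hreflang="x-default" href="https://www.example.com/services/" />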
Thanks for any help or examples you can offer!
-
RE: Do you need Metadata on Password Protect Pages?
Hi Judd. You wouldn't need the title or description tags for Google's sake (or Bing's sake) on pages behind a login since they can't crawl those pages. That said, title tags aren't just used by search engines. That is also the title that shows for the page in the browser window and in the bookmark name (if people bookmark those pages), so it can be worthwhile to add a good title tag to help your visitors. Obviously, beyond the meta description tag, there are other meta tags that you might need (like meta viewport or meta refresh, etc.).
-
RE: Do I need to add the actual language for meta tags and description for different languages? cited for duplicate content for different language
Hi. If you are getting duplicate content warnings between the Mandarin version of your website and a website in a different language, the first thing I'd suggest looking into is to see if your code correctly uses the alternate/hreflang attributes to specify the different versions of the pages in different languages. This tag is intended for sites that have translated content from one language to another and gives you a way to tell Google that the Mandarin version of the page is at this URL, the Spanish version is this URL, the English site is at this other URL, and so on.
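As a rough illustration (the URLs here are made up - use your actual page URLs and language codes), each version of a page would reference all of its translations like this:
<link rel="alternate" hreflang="zh" href="https://example.com/zh/page.html" />
<link rel="alternate" hreflang="es" href="https://example.com/es/page.html" />
<link rel="alternate" hreflang="en" href="https://example.com/en/page.html" />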
In terms of title, descriptions, and all other content, you do want to make that specific to the language of the page. So, your Mandarin website would have titles, descriptions, etc. written in Mandarin and your English website would have titles, descriptions, etc. in English, and so on. After all, your Mandarin website is intended for people speaking (and searching in) Mandarin so using that language throughout will increase your chances of ranking for that audience.
-
RE: Do you outreach to blogs with no recent activity?
Hey. I agree with what Martijn said that generally, you want to request links from more active sites. Those sites tend to have better overall quality, search performance, and a more active audience. The one thing I'll add is that sometimes you can find blogs that have great evergreen content (good rankings, good links, seem to have decent traffic, etc.) even though they don't have a lot of recent content. In these cases, a link to a relevant page on your site could enhance that evergreen content and sometimes site runners are willing to update old articles (even if they aren't adding anything new). For example, maybe an old blog post on a currently inactive site has a great recap of the best ways to do X (where X is a subject that hasn't changed much in years). You have a page that offers another way to do X or is somehow tightly connected, so including it in that older blog post could be of use to the people who find that old article.
-
What is the impact of an off-topic page to other pages on the site?
We are working with a client who has one irrelevant, off-topic post ranking incredibly well and driving a lot of traffic. However, none of the other pages on the site that are relevant to this client's business are ranking. Links are good and in line with competitors for the various terms. Oddly, very few external links reference this off-topic post; most are to the home page. The local profile is also in line with competitors, including reviews, categorization, geo-targeting, pictures, etc. No spam issues exist and no warnings in Google Search Console. The only thing that seems weird is this off-topic post, but could that affect rankings on other pages of the site? Would removing that off-topic post potentially help increase traffic and rankings for the other, more relevant pages of the site? Appreciate any and all help or ideas of where to go from here. Thanks!
-
RE: Should I use sessions or unique visitors to work out my ecommerce conversion rate?
I'd argue there is value in looking at and benchmarking both numbers, though you might not get an accurate picture of both through Google Analytics. You want to know how many sessions ended up in an order, regardless of how many repeat customers there were that converted. As you said, every visit could end up in an order (you could get a little more detailed and segment to clarify just how many sessions qualify) and you want to know just how true that is. At the same time, you want to know how many unique people placed an order as well and repeat order rates.
Here is the tricky part. Google Analytics is pretty good at telling you how many sessions resulted in an order (the conversion rate you see in goal reports is goals per session). With the Time to Purchase report, you can get a fairly decent idea of how many sessions it took for those higher margin products. Now the other side: unique users. The users metric is wonky in how it is calculated (for instance, one customer uses different browsers/devices, or a customer deletes their cookies), so knowing how many users converted won't always give you the number you are after and, in my mind, it isn't reliable enough to benchmark.
What I do to get at the number of unique customers and orders per customer is use other tools (CRM, order system, etc.) to track that number--those systems are designed around people not sessions, so you are going to get a far more accurate picture of how many unique people placed an order. That is your benchmark, but you can map order dates and/or transaction IDs to GA so that you can understand traffic patterns for repeat customers and how they might differ.
Hope that helps.
-
RE: Duplicate URL's in Sitemap? Is that a problem?
Generally speaking, this isn't the worst problem you can have with your XML sitemap. In an ideal world, you'll be able to remove duplicate URLs from the sitemap and only submit a single URL for each page. In reality, most larger sites I've encountered have some amount of duplicate content in their XML sitemap with no real major problems.
Duplicate content is really only a major problem if it is "deceptive" in nature. So long as this is just a normal consequence of your CMS, or similar, vs. an attempt to game the rankings you are probably fine. For more about that check out this support article.
The other problem you may encounter is with your search results for those duplicate pages. That article mentions that Google will pick the URL they think is best (more about that here as well), and the URL they deem best will be the one that surfaces in the search results. That may or may not be the same URL you or your visitors would deem best. So, what you might find is that Google picked a less-than-great URL (like one with extra parameters), and with that URL appearing in the SERPs, your search result isn't as compelling to click on as some other version of the URL might be.
-
RE: Merging B2B site with B2C site
Yeah, that would probably make the most sense. For sites where I've done something like this before, it is usually a single page (or maybe a few pages, depending) that talks about the trade program, answers common questions, allows login, and encourages signup. You could then promote that page(s), optimize it for appropriate terms, get links going to it, etc.
-
RE: Merging B2B site with B2C site
Hey Lewis,
I would imagine it is possible to merge the two sites with limited implications. From a pure SEO standpoint, the biggest factor is whether there is any negative "baggage" associated with the B2B site--especially spammy backlinks or any wonky technology that would impact the B2C site (speed, redirect issues, etc.). Given that, I'd do a careful review of the B2B site first to evaluate backlinks to that site and any errors reported about that site (using something like Moz's crawl tool and/or Google Search Console).
The other potential risk is if there is any duplicate content that would be created as a result of the merger of the two sites. For instance, if you have a page on the B2B and B2C sites about Product X, after the merger you'd potentially end up with two pages about Product X. From what you said, it seems like the B2B site's content would be hidden behind a login and wouldn't be accessible to bots. If that is the case, then maybe this isn't much of an issue. But where this is an issue, I'd also work to figure out the sitemap of the merged site and map out any link changes or redirects that are required to implement that new sitemap.
Along with SEO considerations, there are of course non-SEO factors to consider that could have indirect implications on SEO. Would there be any harm in the customers of the B2C side knowing the B2B trade side exists? If so, that could potentially harm the visitor experience and the brand which could negatively impact clicks, social shares, link building, etc. in the future. Likely not a risk, but I've seen a handful of companies who've merged their two sites only to find that they should have kept their audiences more segmented.
Hope that helps!
-
RE: Breaking a Big Website into Multiple Unrelated Sites
Really great question and there really isn't any one right answer here. In general, from an SEO perspective, I lean toward one consolidated website that is really well organized. That way all of your SEO efforts benefit one domain. For example, instead of building links to three sites, you can concentrate on building links to one website. Even if the content for patients gets more links (let's say), if everything is on one domain those patient oriented links will still help the content for therapists earn rankings because they'll contribute to overall domain strength.
Along with the SEO though, my other question would be if there is any great harm in having one website serve multiple audiences? There are numerous examples of companies who are able to do exactly that with their content. Doing so requires a strong information architecture to clearly define what each section is, who it is for, how sections are labeled, how you navigate to various sections, etc. Totally doable, and good IA tends to also be good for SEO too. That said, in some cases one audience group might be distracted/offended/annoyed by content that is intended for another audience group or maybe there is just one set of content you'd rather one group not see. Do you have any situations like that? Have you surveyed users for their opinions about the content to identify these pitfalls?
Of course, the other question to ask here is if there is a strong business case for dividing the sites apart? It doesn't sound like it based on your question, but I want to throw that idea out there. I've worked with some organizations where they have one department focused on a certain audience group. To simplify dev and maintenance, the business case is pretty compelling to split the sites apart. Still though, in a lot of cases it is easier to have one website because then all dev, design, branding, etc. budgets (of time and money) can be focused on the one domain vs. divided across multiple domains.
-
RE: Clarification on Analytics Goals & Funnel Logic
Yeah, you could delete that home page from the funnel, if for no other reason than cleaner reporting. That said, if you already have data in there that you need to keep for historical purposes, it might be easier to create a new funnel/goal that excludes the home page and then just use that new goal moving forward.
-
RE: Clarification on Analytics Goals & Funnel Logic
-
Ah, gotcha. You could do that as a two step funnel. You can also look at total sessions on the first page and see how many people from there complete a goal.
-
Cool!
-
Then, in that case, you should be able to add GA on there vs worrying about cross domain (phew!). Looks like Dmitrii suggested the same thing.
-
In that case, I'd use event tracking on the buttons and links so that you know what people are clicking that leads them to the form. UTMs could lead to messy reports since these are internal pages. You'd just have to be careful how you set it up and walk through all the various edge cases to make sure it will work, but it should.
-
RE: Clarification on Analytics Goals & Funnel Logic
Hey,
-
It sounds like you don't have a funnel, you have a destination goal. A funnel would be something like a form that spans several pages - users go from page 1 to page 2 to page 3 and then the thank-you page (think of a cart). So, the rest of your site (in the funnel sense) is not really a step toward the thank-you page.
-
To know which pages are performing better at getting people to the thank-you page, you can use segments to look at converters and non-converters. That sounds like what you want to know - how do those groups compare? Here is a Google support article about this:
https://support.google.com/analytics/answer/3125360?hl=en
-
For the iframe issue, you should check out cross domain tracking. Without that, yes, your stats will be skewed weird.
https://support.google.com/analytics/answer/1034342?hl=en
-
For CTAs, are you talking about buttons or links? For instance, a linked image that says "Contact Us Now!"? Do the links lead to the form.html page? If it is something like that, event tracking can give you an idea of where people are clicking on the page. If, instead, you have the form itself on multiple locations of a page (instead of links) you can get creative - one way I've pulled this off is to have the forms send to a different thank you page so that you can track as multiple goals.
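If you go the event tracking route, a CTA link could fire an event like this (assuming analytics.js; the category and label names are made up):
<a href="/form.html" onclick="ga('send', 'event', 'CTA', 'click', 'homepage-hero-contact');">Contact Us Now!</a>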
Hope that helps!
-
RE: Is there a limit to Internal Redirect?
I've not seen any instances of a limit to how many redirects you can have pointing to your website. I have some clients who have thousands of redirects in place (lots of old pages being moved to a new version of that product). Those sites haven't had any issues with rankings at all. In fact, many of the links pointing to the sites still reference the URLs that are redirected and those pages that are redirected to are ranking perfectly fine.
The biggest limit I've seen is on chaining. I've seen issues where chained redirects simply aren't followed. However, if you can keep it to one or two steps, then things should be okay. It doesn't sound like that is what you are asking about though. More from Matt Cutts on this:
http://www.searchenginejournal.com/matt-cutts-discusses-301-permanent-redirects-limits-on-websites/46611/
In terms of managing those redirects, you can't usually keep this many in an htaccess file without going a little bit nuts (or risking some future dev deleting them in an effort to clean up the htaccess file - ugh). If you are using WordPress, the 301 Redirects plugin works quite well: https://wordpress.org/plugins/301-redirects/
Unfortunately, I've also run into sites that aren't in a CMS where you can use a plugin. In those cases, I usually put these redirects in a database table. In the 404 file, I then have the code check the would-be error URL to see if we need to redirect that URL somewhere else. If a redirect is in place, it redirects instead of throwing the 404 error. If no redirect is in place, the code then throws a 404 error.
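As a rough sketch of that 404-file logic (the database connection and the table/column names here are made up):
<?php
// Rough sketch of a 404 handler that checks a redirects table first.
$pdo = new PDO('mysql:host=localhost;dbname=site', 'db_user', 'db_pass');
$requested = $_SERVER['REQUEST_URI'];

$stmt = $pdo->prepare('SELECT new_url FROM redirects WHERE old_url = ? LIMIT 1');
$stmt->execute(array($requested));
$newUrl = $stmt->fetchColumn();

if ($newUrl) {
    // A redirect is on file, so send a 301 instead of the 404.
    header('Location: ' . $newUrl, true, 301);
    exit;
}

// No redirect found: serve the normal 404 response.
header('HTTP/1.1 404 Not Found');
include '404-template.php';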
Hope that helps.
-
RE: Search Analytics update in Google Webmasters Tools? Where can we find search queries bringing traffic to website?
For what it is worth, I'm in the US and seeing the 999 limit on all my accounts and domains. So, I guess that means the clock is ticking and if you don't have that limit yet, then you better get downloading fast!
-
RE: Search Analytics update in Google Webmasters Tools? Where can we find search queries bringing traffic to website?
You are correct that the Search Analytics queries report does limit you to 999 queries. However, when you load the Acquisition -> Search Engine Optimization -> Queries report in Google Analytics, you may be able to see more than 999 terms. I just double checked in a dozen or so accounts and all reports in Google Analytics went past 999 terms, though that may or may not be changing based on Google's new updates to the Analytics report. Also, keep in mind that there is a limit to how long the data sticks around - for clients I work with ongoing, I export the reports monthly just so I don't lose the limited data that is available.
As Charles said, though, this data about queries isn't always reliable and it is limited in what you can get out of this data. That said, if you are using it for a general idea of what terms people found you for and what pages were related to which term, it can be helpful. Certainly not as good as the pre-"not provided" days and you obviously can't rely on it to calculate ROI, but it at least gives you an idea of terms and a relative idea of performance.
As for other ways to get around not provided and find those queries, here are some articles with some good tips and tricks:
https://moz.com/blog/landing-pages-report-in-moz-analytics
https://blog.kissmetrics.com/unlock-keyword-not-provided/
http://searchenginewatch.com/sew/how-to/2297674/google-not-provided-keywords-10-ways-to-get-organic-search-data
-
RE: Condensing content for web site redesign
Hey Vanessa. I'd ask a few additional questions about the pages before making a decision...
-
If you were to implement redirects, would the redirect go from Treatment (the page) -> Treatment (the Ajax-loaded content)? Or, would it go from Treatment (the page) -> topic page (and people would have to click a link to view treatment content)? If the redirect goes from the page to the related content of the page, then maybe this isn't too terrible an idea. That would mean the Ajax-loaded page section for treatment would have some unique kind of URL associated with it (like /topic-name#treatment).
-
Next question, though, is how much traffic does this affect? Of the traffic those pages get individually right now, how much of that traffic enters the site on those pages (from any source - direct, referral, social, organic, paid)? If right now almost everybody comes into the site via an overview page and then clicks to Symptoms or Treatment, then it is probably okay to consolidate those into a single page. That said, if all three pages are landing pages for a reasonable number of visitors, I'd be reluctant to make this kind of change and disrupt that traffic...especially if the answer to question #1 is no.
-
What about links? Do you have a lot of links pointing to the individual pages within each section? Yes, redirects will help retain the link equity, but with any redirect you lose some. So, if a large percent of the links to your site are to these pages, I'd be hesitant to make any kind of change without further testing/research around the weight and importance of those links.
Along with those questions, I'm also wondering why the agency thinks this would help with load time. Why can't they improve load time on the individual pages? Are they talking about the load time from clicking to the Treatment page from Symptoms? If so, there are probably better ways to address that vs. removing pages from the site. When you run a speed test, what is slowing down the page load? Is it something with the server or content that can be tweaked? I'd start there before trying to consolidate pages and running the risk of disrupting any existing traffic.
I hope that helps as you work toward a decision.
-
RE: High Bounce Rate
Why don't you sell those products? Is it because those other products are inferior in quality, too expensive for the value, etc.? If there are specific reasons like that, it can sometimes be helpful to state that specifically as a means of helping to educate your customers and potentially sway them into seeing the value of what you do sell.
The other thing to look at is what keywords are driving traffic to the page. Granted, this data is hard to get, but you can get a good sense of it in Search Console. Figure out why you are ranking for the things you don't sell. For example, if you have a lot of links referring people your way that reference something you don't carry, see if there is a way to change those links to something more specific.
Alternatively, if you are getting enough people coming to the site interested in products you don't carry but there isn't a great reason your company doesn't carry them, maybe see if there is a way you can carry the products (or resell them or something like that). I've worked with more than one client who didn't realize so many people may be interested in Product X until they saw the search traffic and that traffic helped them decide to change products offered.
-
RE: High Bounce Rate
Without looking at the site or this one page, but based on what you are describing, I would suspect the bounce rate is high because of an expectation gap. The people coming to this article sound like they are in the early stages of the buying cycle--they want information about the types of products available and how to use this product. Maybe many of these people aren't quite ready to look at category pages or product pages because it is simply too early in the buying cycle for that to happen. Your links expect one thing (products/shop/buy) but visitors expect another (more info).
In these cases to lower the bounce rate, you need some kind of in-between content. Something that moves people to the next step of the buying cycle. Basically, you need to solve for X in this equation: early pre-purchase research -> X -> product page. To start solving for X, I want to know what questions people have after reading this article. There are several ways to get that, but user testing with targeted customers can usually be a good place to start.
Another place to look is in regards to trust factors. Do the people coming to that informational article trust your brand? It seems like they are because of how long they spend on the page. However, it could be people trust the company enough for information and insight, but maybe not enough to purchase. Again, user testing this page to gauge trust factors could be helpful to clarify what more you can do here to get people to stick around.
Final thought - it may simply be that this page will always have a high bounce rate. This is where re-targeting ads can be helpful. You know people are interested in your product but aren't quite ready to look at products just yet. So, after they visit this page you should re-target an ad to them as they continue their research building up to purchasing the product.
-
RE: Submitting an 'HTTPS' sitemap.xml to Bing
Hey Iain. If it were me, I'd probably just accept that Bing can't crawl the sitemap and let it go. XML sitemaps are important, but not something that will generally make a huge life altering difference for your website's performance.
Now, I say "probably" because I'm wondering if you are having indexing problems with Bing. Are there pages you want Bing to index that maybe they can't reach easily (or at all) without an XML sitemap? If that is the case, then maybe it is worth the 3 hours of dev time to get the XML sitemap in place. Alternatively, you could find other ways to link to those pages Bing isn't currently indexing (on your site or others) to get those pages noticed.
-
RE: Would you consider this to be thin content
Well good, I'm glad you've not gotten a manual action.
When you say feedback, do you mean user feedback or marketer/designer/developer feedback? If it were me, I'd pay more attention to user feedback. If it is as you said in your initial question - users are getting what they want (just the words, and they are clicking from the Letter H page to the HA, HAE, HAAF, etc. pages) - then it would seem to me the page is valuable and useful. I wouldn't worry about Google's view of the page unless I started to see a dip in rankings, traffic, etc.
Speaking of feedback, have you surveyed your users to ask about alternative content for these pages? You could ask your users what other content they may want here to make the page more valuable or unique or authentic for those users during their visit. But I wouldn't put in words or content blocks just to try to make Google happy for fear of the page being "thin" because that could create new problems on its own.
-
RE: Would you consider this to be thin content
The "thin content" question can be tricky. Google's support article about this says that thin content is a page that doesn't provide users with "substantially unique or valuable content". Their support article about original content talks about the need for "authentic content".
Together, I take those to mean you should err on the side of what is good for your users. Content is important, but what is really important is useful content. In your case, it sounds like you are giving visitors what they want - get in, get what you need, get out. That seems like there is value and authenticity there for your users. So long as you continue to see higher rankings and steady or growing traffic from Google, I wouldn't think you should worry.
As well, the other question to ask here is if you have received any manual actions about thin content in Search Console? I'm assuming not since you didn't mention that. But, I just wanted to double-check that you were looking for that.
-
RE: Lost Rankings Late April Even Though We Have A Mobile Site
If it is a drop in organic desktop traffic, then that won't be caused by the mobile change (as Andy said). Out of curiosity, what are your percentages of organic traffic by device? Does mobile account for a sizable amount of your organic traffic? I'm wondering if mobile is being mis-reported as desktop?
While I'm doubtful that mobile usability issues would cause a drop in desktop traffic (assuming it is legitimately desktop traffic not being mis-reported), one easy way to test this would be to take a page that gets lots of organic traffic and has mobile usability issues reported, and fix those mobile usability issues. This ranking factor is real time and page by page (see Search Engine Land), so you should see some results quickly if that is the factor.
Another question for you - what exactly did you change about your product descriptions? Meta description, on-page text, both? Are those the pages that lost traffic? Maybe try reverting back and see what impact that has.
I'm guessing you don't want to share your domain here, but feel free to connect with me privately with the website URL and I'd be happy to take a look at your domain more specifically to see if I can spot anything else that might be causing this issue.
-
RE: Different content on different mobile browsers
Google talks about this as part of Dynamic Serving of content. Here is their article, which also includes tips on how to distinguish user agents (including how to signal this to Google):
https://developers.google.com/webmasters/mobile-sites/mobile-seo/configurations/dynamic-serving
This is an okay practice and I've not had any Google penalties when I've used it. The big concern with doing this type of dynamic content shift for different devices is to avoid cloaking, which Google mentions in that article as well (be sure to click on the link in that Dynamic Serving article for more about cloaking). So long as you avoid cloaking, you should be in okay territory.
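As a very rough sketch of what that can look like server-side, including the Vary header that article discusses so caches and crawlers know the response changes by user agent (the browser check and template names here are just examples):
<?php
// Rough sketch of dynamic serving: pick a template based on the user agent.
header('Vary: User-Agent');

$agent = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';

if (stripos($agent, 'UCBrowser') !== false) {
    include 'templates/page-lite.php'; // trimmed-down version for this browser
} else {
    include 'templates/page-full.php'; // standard version for everything else
}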
My advice is to test out your user agent matches thoroughly - I'd even go so far as to try this on one or two pages with some simple changes for each user agent and then make sure Google indexes those pages correctly before rolling this out to your entire site.
Hope that helps.
-
RE: Lost Rankings Late April Even Though We Have A Mobile Site
Sorry to hear about the drop in traffic. Like Hector said, it is really tough to know what caused the drop without more information. A few questions to clarify on this...
-
Like Andy said, are you referring to mobile organic traffic or overall organic traffic? What is the decrease in organic traffic per device (desktop, tablet, mobile)?
-
Let's go with it maybe having something to do with Google's mobile changes. Do you pass Google's mobile friendly test? I have seen a number of sites that had mobile sites, but they didn't fully pass Google's criteria. Do you see any errors listed in Google Search Console (Search Traffic -> Mobile Usability)?
-
If it isn't something with mobile, what other changes have you made to your site lately? Removed any pages? Removed any redirects? Blocked pages from Google's crawlers due to a code upgrade? Have you run a crawl test on your site - it can sometimes be the most basic of things that happened accidentally. Have you also checked for errors in Search Console?
-
Looking outside of your site, have any of your competitors made changes and/or gained rankings? Maybe the decrease in traffic doesn't have anything to do with you, just a competitor moving up quickly. I'd check the rankings report in Moz as well as the Search Analytics report in Google Search Console to see if you notice any changes.
Hope those questions give you some help in figuring out what is up.
-
RE: How to approach SEO for a national website that has multiple chapter/location websites all under different URLs
Hey Kat. I'd tend to agree with Andy's response that moving this into one site makes sense. You could then redirect the local pages into the appropriate page on the main site, creating a single authoritative domain. You'd want to make sure as you do this that each local page is truly unique...if the Chicago and Detroit local sites both contain similar pages about a dog safety campaign, you'd want to consolidate those (probably) into a single dog safety campaign page that could work for both locations (or find a way to distinguish the content for each location).
Now, having gone through a few projects like these, I know full well that what is best for SEO or UX isn't always the most popular solution. Internal politics play a role, and I'm guessing you might be in a spot where some of the local chapters don't want to relinquish control over their site. So, as an alternative solution you could look for some other ways to link these sites together. For instance, you could have a dog safety page on the main site that all the local chapters can link to and, possibly, let the local chapters adjust the content slightly via the link. As in, if the URL includes a query string referencing Chicago (maincompanysite.net/dog-safety.php?location=Chicago), the content on that dog safety page could be programmed to show Chicago's phone number and address. That way you have a definitive page that can rank, but also a way for local chapters to share that content through their domain.
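A very rough sketch of how that shared page could read the query string (the chapter details here are made up):
<?php
// Hypothetical sketch of the ?location= idea on the shared dog-safety page.
$chapters = array(
    'Chicago' => array('phone' => '(312) 555-0134', 'address' => '123 W Example St, Chicago, IL'),
    'Detroit' => array('phone' => '(313) 555-0187', 'address' => '456 Example Ave, Detroit, MI'),
);

$location = isset($_GET['location']) ? $_GET['location'] : '';

if (isset($chapters[$location])) {
    $chapter = $chapters[$location];
    echo '<p>Contact your local chapter: ' . $chapter['phone'] . ', ' . $chapter['address'] . '</p>';
} else {
    echo '<p>Contact the national office to find a chapter near you.</p>';
}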
A lot of that alternative comes down to content governance rules, communication about who creates what page, a clear understanding of how people link to sites, and probably some clear design/brand standards. It is a bit of a mess, but not unworkable if you can't convince all the stakeholders of the value of consolidating to a single domain.
Hope that helps!
-
RE: Traffic on Keywords
I'm not sure how or if you can get traffic data for specific keywords in Moz Analytics. However, if you are looking for traffic on keywords you are already getting, you can find this in Google Search Console (google.com/webmasters). This report also shows you impressions so that you can find areas where you have lots of searchers, but few clicks. I hope that helps.
-
RE: Subdomain vs Subdirectory - does the content make a difference?
I agree with what John Cross said here - multiple domains means more work. If there is a business case to justify that increase in work, then that is an easier decision. If there isn't enough business case to justify the work, then maybe from an SEO standpoint you should keep it on the same domain to get the new content ranking more quickly.
Along with SEO considerations, though, there are a few other ways to break down this question...
First, what are the user expectations? Yes, the products are different and not highly related but are the customers different? In the Tesco example, would people who are interested in groceries also be interested in banking? Or, put another way, would people who are interested in groceries (but not in banking) be offended to see that this company also offers banking services? If the users are interconnected or are (at minimum) not put off by the variety of products, then why not have everything on one domain? That way you get the strong SEO benefit of using sub-directories. This isn't always a cheap investment though, as it requires a strong architecture to keep the directories and content types/voices distinct, but totally doable and a good solution from an SEO standpoint.
Second, I'd look at this from a brand perspective. Is this all the same company delivering these goods? Is it all Tesco or Sainsburys? If it is the same brand name, then why not have everything live on one authoritative domain name (assuming you aren't going to chase away customers by showing the breadth of products offered)? Google is an example of this - look at the wide variety of services they offer: mail, analytics, drive, G+, search, etc. It is all Google, even though they offer a wide range of products to a diverse range of customers. Now, if New Product A is a different brand and a really different thing from anything else being done by the company (in Google's case - Android), then that likely justifies a separate domain and a larger business investment (not just for SEO, but for design and other types of marketing too).
Finally, you do need to look at this technically, I think. Chances are that Tesco Bank has to live on a different domain just because of security considerations. Sometimes the technology limitations have to dictate what we do with SEO. If those are great enough, then we may have to do the work to create two distinct domains and get those domains earning rankings/traffic. In that case, the business/technical needs justify the work required.
Hope that helps!
-
RE: Feedback needed on possible solutions to resolve indexing on ecommerce site
Hey,
It looks like you might have a duplicate content problem contributing here. For instance, you linked to: http://www.getpaper.com/find-paper/engineering-paper/bond-20-lb/430-20-lb-laser-bond-22-x-650-1-roll.html
And there is this duplicated page, which doesn't have the category directory structure in the URL:
http://www.getpaper.com/430-20-lb-laser-bond-22-x-650-1-roll.html
That duplicated page is indexed by Google. It also looks like the duplicated page is what is listed in your XML sitemap, not the page you have linked to from the paginated pages.
In spot checking some of the other product pages, it looks like there is a similar issue going on. I'd recommend altering your XML sitemap to reference the URL you want indexed. Or, since it looks like Google has already indexed the pages on your XML sitemap (some of them, at least), you may want to use the URLs that have been indexed (the ones without the category structure) instead of the URLs with the category structure.
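For example, if you settle on the category-structured version, the sitemap entry for that product would reference that exact URL:
<url>
  <loc>http://www.getpaper.com/find-paper/engineering-paper/bond-20-lb/430-20-lb-laser-bond-22-x-650-1-roll.html</loc>
</url>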
In terms of your possible fixes, I think fix one makes more sense. The more direct links you can add to deeper pages of your site, the better. On fix two, moving the sidebar and header to the bottom of the code and controlling the design with CSS can present some problems in various browsers...in my experience, it usually is more pain than gain.
I hope that helps. Thanks!
Matthew
-
RE: Avoiding Duplicate Content with Used Car Listings Database: Robots.txt vs Noindex vs Hash URLs (Help!)
I think the JavaScript implementation might still be able to be crawled by Google. These days, I'm becoming convinced that Google can crawl just about anything. But, I'll be curious to see what the results are. Definitely update this thread with what ends up happening from that approach.
As for the robots.txt message, that would indicate that they are finding the link to the page but not crawling the page to get any content.
As for duplicated content concerns, just to take a step back, are the pages 100% the same or are you making alterations to the text? If you can do easy things that make that page different from the other sites (even if it is functionality), then the page isn't a true duplicate and there might be some good reasons why people could want to find those pages in the search results.
Ultimately, you have the same page, but you are making the page better than those other websites. If that is the case, then you should be safe letting those pages rank. Where having the same content as your competitor really hurts (in my experience, anyway) is when you aren't offering anything different than any other sites.
Hope that helps.
-
RE: Avoiding Duplicate Content with Used Car Listings Database: Robots.txt vs Noindex vs Hash URLs (Help!)
Hey,
This is definitely a complicated issue, and there is some risk in making a move in the wrong direction.
Here are my thoughts which might help you out. Feel free to private message me or shoot me an email (see my profile) and I'd be happy to talk more.
On the hash solution, would that require JavaScript be enabled in order to access those pages or would you have a fallback solution for those without JavaScript?
If you don't have a fallback solution for those without JavaScript, you might negatively affect visitors with disabilities. For instance, some types of Ajax are challenging for people with disabilities to access (see here to start digging into that: http://webaim.org/techniques/javascript/).
Thing is, if you have a fallback solution, Google could still access those pages. Even without a fallback, Google may still be able to access those pages since Google can execute some forms of JavaScript. Given that, the more appropriate solution would be to use the robots.txt file. You mentioned, though, that the command you put in didn't seem to work since Google kept indexing those pages. A couple of questions:
First, did Google index those pages after the change or had those pages been indexed prior to the robots.txt change? Things take time, so I'm wondering if you didn't give them enough time to adjust.
The other question would be whether or not you tested the robots.txt file in Google Webmaster Tools? That just gives you an extra verification that it should work.
Also, you mentioned something interesting about the Vehicle Detail pages: "these pages are not meant for visitors to navigate to directly!" Given that is the case, is it possible for your developers to add some sort of server-side check to see if people are accessing the detail pages from the listing pages?
For instance, on some sites I've worked on, a cookie is set when you've reached the listing page that says "this person is okay to reach the detail page" and then the visitor can only reach the detail page if that cookie is set. Without that cookie, the visitor is redirected back to a listing page. Not sure how exactly that would work on your site, but it might be a way to keep visitors who find those pages in a Google search result from seeing the incorrectly styled page.
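A bare-bones sketch of that cookie gate (the cookie name and listing URL are made up):
<?php
// On the listing page, mark the visitor as having come through the
// intended path:
setcookie('seen_listing', '1', 0, '/');

// Then, at the top of a vehicle detail page, bounce anyone who landed there
// directly (e.g. from a search result) back to the listing page:
if (empty($_COOKIE['seen_listing'])) {
    header('Location: /used-cars/', true, 302);
    exit;
}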
I hope that helps. Like I said, feel free to email me or private message me if you'd like me to take a look at your site or chat with you about more particulars.
Thanks!
-
RE: How complicated would it be to optimize our current site for the Safari browser?
Hey Dana,
I've had this problem with Safari being slower as well. In fact, I just checked a handful of sites in Google Analytics and Safari is almost consistently the slowest browser. One question to look into is whether or not mobile traffic on Safari is slow. For the sites I'm looking at and for the sites I've worked on in the past, the culprit is usually Safari's desktop browser. Phones are usually within normal loading parameters (by phone standards).
Unfortunately, in most cases, the sluggish performance on Safari is due to Safari's DNS prefetching. Generally, prefetching speeds things up, but apparently it can slow you down in Safari. You can read more about that here: http://macs.about.com/od/MacTroubleshootingTips/qt/Troubleshooting-Safari-Slow-Page-Loads-Caused-By-Dns-Prefetching.htm and this is another good one: http://computers.tutsplus.com/tutorials/how-to-fix-slow-and-non-loading-webpages-in-safari--mac-51338 (Lots more if you Google Safari prefetching.)
I do think it is worth it to try to fix it. Here's what I would do, if I were you. Feel free to PM me if you want more help...
The way you fix this is on the client's machine, at least according to those articles (and others I've found while Googling around just now). However, you can try these steps out on your computer (if you've got a Mac) and see if you get improved performance in Safari. It shouldn't take that long to test, and it is probably worth it given the huge loss you are seeing due to this issue.
The next question is, well, how do I do this for everybody? The answer is to ask your visitors to use Chrome! No, seriously, one approach that worked for me in the past was to detect the Safari user agent and load a slightly different version of the page that required fewer requests, meaning there will be less for Safari to prefetch. For instance, you might get rid of JavaScript, images, etc. that aren't essential. Obviously, keep the core content so that the pages are basically the same (kind of like you would for mobile detection).
This is obviously much more complicated to setup as it requires adjustments to the design as well as the code structure. But, generally simpler than trying to redo your entire website.
Before you make any such changes, it might be worth running a simple test on a key landing page that gets a lot of visits from Safari. Remove what elements you can for all user agents and see if this changes anything substantially with the site speed in Safari.
I hope that helps. Thanks,
Matthew
-
RE: Images Sitemap GWT - not indexed?
I've had issues getting images indexed as well. A few thoughts:
--> It could just take time. It has been a week since you posted, so if you are still having this problem, then clearly something isn't right.
--> Along with checking GWT, have you been searching for the image in a Google search? Sometimes GWT doesn't show the most accurate number.
--> Have you verified that your image sitemap is valid? Sometimes a simple mistake can prevent indexation. If you want to send me the link via private message, I'd be happy to take a look.
--> Obvious question, but have you checked things like robots.txt to ensure images can be accessed by Google's bots?
-
RE: What structured data would you recommend marking up for a companies 'service'?
Hey Mark,
I think using the Service one probably is the most accurate. I'd add in an image (maybe a picture of the physician or their office or maybe even a logo) because if Google ever starts using that (like with rel author) that could be a good enhancement to your result. The serviceAudience and produces properties would add some more depth to the snippet's information.
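As a rough illustration of what that could look like (all the values here are made up - run whatever you build through Google's structured data testing tool):
<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "Service",
  "name": "Physical Therapy Evaluation",
  "description": "One-on-one evaluation with a licensed physical therapist.",
  "image": "http://www.example.com/images/dr-smith.jpg",
  "provider": {
    "@type": "Physician",
    "name": "Dr. Jane Smith"
  },
  "serviceAudience": {
    "@type": "Audience",
    "audienceType": "Adults recovering from sports injuries"
  },
  "produces": {
    "@type": "Thing",
    "name": "Personalized treatment plan"
  }
}
</script>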
You may also want to take a look at this discussion on Stack Overflow about using Good Relations markup for a service. It is a different approach and gets slightly different information out there.
http://stackoverflow.com/questions/19121886/schema-org-businessfunction-goodrelations
I hope that helps. I'd be curious what markup you end up with and how it affects things for your client's sites.
Thanks,
Matthew
-
RE: Potential sexaual harrassement issues in adding home address to website
Hi Catherine,
I'm with you on not wanting to publish my home address. Like you, my business is home-based. I opted to get a UPS store address instead of publishing my home address. However, despite my using that address for the last 11 years, this is apparently no longer acceptable practice. The same goes for PO Box addresses. (See http://www.billhartzer.com/pages/google-bans-ups-store-locations-for-google-maps-listings/ for example.)
The other route some people have taken is to get a shared office space, somewhere where you do physically work (at least some of the time) and where you can receive mail. That way you can use that address instead of your home address. This can be expensive depending on what shared office spaces are available in your city. But, if getting into Google Local is important, it might be worth considering so that you have an address to use that isn't your home.
Good luck!
Matthew
-
RE: Where is Schema, Twitter cards and OpenGraph code need?
Hi Adrian,
You want OG tags and Twitter card tags to be specific to each page on your site. For instance, pagea.html's tags should relate to that page's content and pageb.html's tags should contain information related to pageb.html's content.
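For example, pagea.html's head might include something like this (the values are placeholders), and pageb.html would get its own set:
<meta property="og:title" content="Page A Title" />
<meta property="og:description" content="A short summary of what Page A covers." />
<meta property="og:url" content="http://www.example.com/pagea.html" />
<meta property="og:image" content="http://www.example.com/images/pagea.jpg" />
<meta name="twitter:card" content="summary" />
<meta name="twitter:title" content="Page A Title" />
<meta name="twitter:description" content="A short summary of what Page A covers." />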
On Schema, that is going to be specific to the content...so if you've got a review on your home page and the same review is on a second page, then the same Schema markup might be contained on the home page and on the second page.
Hope that helps. Let me know if you need more clarification.
Thanks,
Matthew
-
RE: Show wordpress "archive links" on blog?
Hi Charles,
I think this is definitely one of those areas where there really isn't a solid yes/no answer. My personal preference for archive links is to block those directories via robots.txt so that they do not get indexed. Archive pages can lead to duplicate content issues so blocking those for SEO purposes can be helpful. However, I would leave the links on the page for users so long as your visitors are using those pages. I'd recommend monitoring traffic to those pages. If none of your visitors are accessing those archive pages then I'd remove them completely from your site.
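For example, on a typical WordPress setup the date and author archives live under paths like these, so the robots.txt rules might look something like this (only do this if your individual posts don't also live under those date-based paths):
User-agent: *
Disallow: /2014/
Disallow: /2015/
Disallow: /author/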
That is my two cents anyway. Hope that helps!
Matthew
-
RE: .htaccess newby
Yeah, that looks to be the right file. So, in this case, I'd put the new redirects right after the current redirect.
redirect permanent /index.html http://www.global-lingo.com/index.php
PUT NEW REDIRECTS HERE
# BEGIN WordPress
Let me know if that works!
-
RE: .htaccess newby
Ah, I see what you are saying. You want to download the one from the site root for redirects. Typically, that is going to be something like a /public_html or /httpdocs folder. Do you have an htaccess file there?
-
RE: .htaccess newby
Hey -
Glad you got everything in the right format!
Can you download your current .htaccess file via FTP? I'd download that and then add the new redirects at the bottom of the file and then upload the file. Be sure to save a backup of the htaccess file before you make changes.
Let me know if that works? Thanks,
Matthew
-
RE: .htaccess newby
Hey Richard,
You are really close! In column A, do a find for "http://www.global-lingo.com" and replace with nothing. That way you are left with:
A = /news/2009/12/news-internet-translations/
B = http://www.global-lingo.com/blog/
C = redirect 301 /news/2009/12/news-internet-translations/ http://www.global-lingo.com/blog/
Thanks,
Matthew
-
RE: .htaccess newby
Hi Richard,
Updating htaccess files can be tricky, especially with that risk of wrecking your site if you do something wrong. So, not a dumb question at all.
WordPress is a great way to add redirects, but if you are dealing with hundreds that can equal a lot of time.
So, if you are looking for a way to quickly build the htaccess file vs. doing everything manually in WordPress, my trick is use to a concatenation function in Excel. That way I can write hundreds of redirects at once and then copy the resulting text into the htaccess file.
The general idea is you have your old URL path in column A, the new URL (full URL) in column B, and then in column C, you'd have a formula that looks something like this (obviously the row numbers might change):
="redirect 301 " & A2 & " " & B2
You could then copy column C and add that to your .htaccess file. Of course, back up the htaccess file first. The only word of caution would be to make sure you have valid paths in column A (no special characters, no full URLs, etc.).
Hope that helps.
-
RE: Links in body text
Hi Rahul,
I'll be curious to hear what others think about this. My reaction is that there is zero benefit to looking exclusively at a link on your site from an SEO/link-juice perspective. Taking that approach is going to get you into trouble.
The reason I say that is any link that falls in the category of "I have this link for SEO purposes!" and any tricks used to make links unique (like using hash tags or query strings) for link juice purposes, will probably be ignored or penalized by Google. If it isn't ignored or penalized today, chances are it will be in the future as Google moves more in the direction of quality-related metrics. Any link that is not put in specifically for great UX really isn't worth having from any perspective, including link-juice/SEO perspectives.
In regards to your other question, you would put your primary links in the top navigation and the body text because that is what users expect. If all of my users want to see my "Cool New Widgets" page, then I better have that link in the navigation. I probably should have that in the body of the page as well and maybe even in the footer. Will all those locations pass link juice? Probably not. But what is my main concern? Getting link juice or getting customers to the right page? I'll err on the side of getting customers to my page and having less than perfect link juice optimization because, in general, that is going to have bigger SEO wins.
So, that is my two cents anyway. I'll be curious to see other replies about this question.
Matthew
-
RE: Suggestion for Link Directory Script?
Not sure what your problems are with PHPLD and why you want to change, but for an alternative to PHPLD (but staying within PHP), you may want to check out phpmydirectory.com. I've only worked a little with both, but from what I've read and heard from other developers, phpMyDirectory has better options when it comes to plugins. Hope that helps. Thanks.