Posts made by AlanBleiweiss
-
RE: How to handle brand description on product pages?
If they insist on having brand information, then yes, the alternative is to have a small portion of brand information, with a link to the full brand text on its own page.
-
RE: IP Change
If the domain name and URLs are identical to what they were, and if there are no security issues with the server the site is now located on, changing an IP alone should never cause a loss in rankings. Something else is going on.
-
RE: How to handle brand description on product pages?
Nitin
200 words - what is the point / value of having that repeated on thousands of pages? It's not unique, regardless of what some people think about it being okay because "lots of sites do it" or because "a major brand that's able to get away with lots of bad SEO because they own a market" can do it.
If there are two hundred words of non-product-specific information, that is not a best practice. Instead, that information should be contained on just one page, and if you believe, from a user experience perspective, that providing a link to it from each product page is helpful, that's what I recommend.
-
RE: Competitor outranking us despite all SEO metrics in our favour
EGOL,
Thank you for emphasizing the quality (helpfulness/human value). I only briefly mentioned it in my response, yet it really does need to be a top priority.
-
RE: Competitor outranking us despite all SEO metrics in our favour
Lou,
"I just wanted to throw a few factors out there in order to encourage a response like yours - packed full of useful next steps for me to evalaute this further."
THAT is priceless
Pagination:
Loading all content on one page and using a "more" button to "reveal" it is not a best practice. Individual pages need to exist for individual sub-topic based content. This is especially true since it now appears that Google, while indexing content initially hidden to users, is likely giving less value to that hidden content than to content that is immediately visible.
Pagination is important IF it is executed properly. If you have tens of thousands of results in paginated lists, is that one paginated group, or are they split out into separate groups based on similarity of content? If it's all just one massive group, that's likely another problem to look into, since pagination is meant to be used to say "these pages all contain links to other content where the entire group comprises very similar content around one primary topic".
Internal linking should always point more to main category page destinations than to individual pieces of content. It would be unnatural from a usability perspective to link more to individual pieces of content, and thus it would be bad for SEO.
5,000 or so average crawl errors - what is causing those? Are they 404s? Were they previously valid pages? If so, those typically should not return a 404 but instead 301 directly to a highly relevant live page (with internal links within the site updated accordingly).
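To illustrate that triage, here's a minimal sketch (assuming the Python `requests` library and a hypothetical CSV export of those crawl errors, with a proposed redirect target per row) that separates true 404s with a live, relevant destination ready from URLs that still need manual review:

```python
# Minimal triage sketch for crawl-error URLs (assumes the third-party
# "requests" library; the file name and column names below are hypothetical).
import csv
import requests

REDIRECT_MAP = "redirect-candidates.csv"  # columns: old_url, proposed_301_target

def check(url: str) -> int:
    """Return the HTTP status code for a URL without following redirects."""
    try:
        return requests.head(url, allow_redirects=False, timeout=10).status_code
    except requests.RequestException:
        return 0  # connection-level failure

with open(REDIRECT_MAP, newline="") as handle:
    for row in csv.DictReader(handle):
        old_status = check(row["old_url"])
        target_status = check(row["proposed_301_target"])
        if old_status == 404 and target_status == 200:
            print(f"301 candidate: {row['old_url']} -> {row['proposed_301_target']}")
        else:
            print(f"review manually: {row['old_url']} ({old_status} -> {target_status})")
```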
So many more issues to consider...
-
RE: Competitor outranking us despite all SEO metrics in our favour
You are asking some very challenging questions, and using some very limited metric comparisons to try to figure it all out. SEO is not so easy. If it was, many sites would be in a continual state of leap-frog as they out-do each other in similar ways.
Here are just a few questions / considerations to add to your process:
1. Regardless of the number of instances of one or more keywords on a page, what is the total volume of highly relevant content on a given page? How helpful is that information in answering questions your specific target visitors are needing to have answered? How helpful is it in being able to allow visitors to achieve a goal they came to your site to achieve?
2. How well organized is your content in regard to very similar pages being grouped together in both navigation and URL structure? Since my reading of your question implies the competitor site is much more "tight" in its singular focus, this is a critical factor for your site to evaluate.
3. If their site is much more 'tight' in its singular focus, how much is dilution a factor on the other pages of your site regarding topical focus and goal intent? If there is any serious dilution happening, you'd likely need even more content within that section you are comparing, to overcome that site's strength in refined singular focus.
4. What technical issues may exist on your site that you may not have considered? Crawl efficiency, page processing speed, canonical or duplicate content confusion? There are many other questions I could list with just this one consideration. Even if the competitor site has some worse signals among these, if any of yours are problematic enough, that alone can be a contributing factor.
5. How much higher is the quality of the inbound link footprint for your competitor in comparison to your inbound link footprint? Just having more links isn't at all a valid consideration if you don't dig deep into the quality issue. If they have 10% of the inbound link volume, yet half or most of their inbound links are from very highly authoritative sites and you have fewer of those, that is another massive consideration.
Those are just starting point considerations.
-
RE: Old school SEO tools / software / websites
Wordtracker for keyword volume and Overture PPC for keyword value were my two go-to resources. And WebTrends for the painful process of attempting to figure out what was happening on-site.
-
RE: Getting links on old blog posts
EGOL,
As always, you infuse wisdom into this discussion. I have always been an advocate of "content first, content last". Yet in 2015, search engines are only one piece of the puzzle, and until and unless other efforts for brand visibility / authority / trust are made, the overwhelming majority of sites on the web will leave way too much money on the table.
I happen to believe links need to be generated through our own efforts yet it's not the "traditional" link building. Instead, it's more about advocacy of brand, community service, and participation in the community in which our prospective/existing clients/customers live.
If we are not active in those ways, we build a house on sand.
Just my take on it.
-
RE: Getting links on old blog posts
The reason for the skepticism is the scale of spam out there, and the volume of ways spam efforts attempt to trick search algorithms. Google, even now, all these years into it, still does a very poor job of trapping some of that noise, and so the index remains polluted.
Of course, just building great content is never enough and won't ever be enough. So we just need to check the boxes regarding the potential for Google to think "this isn't legitimate".
-
RE: Delay between being indexed and ranking for new pages.
This is one of the million questions we face dealing with a less than clear message from Google on what they do.
Generally speaking, one scenario is that they need to get confirmation signals when new content is discovered. Unfortunately that delay isn't something that always happens, yet it does happen. The stronger a site is long-term, the less likely it becomes, yet even then it can occur.
-
RE: Getting links on old blog posts
Are they legitimate placements? Meaning - are the posts you seek links from real, quality, and relevant posts, and not on sites that are created for spam purposes?
Are you asking for a link, and NOT specifying the anchor text, and NOT the wording they would use?
If the above scenario is what's happening, it's valid to reach out this way. As long as you leave it up to them to decide whether to include your content or not, and decide what they write, and what anchor text to use, and there is no reciprocal exchange, and no paid aspect, you "should" be fine.
Of course, it's impossible to know what some poorly trained manual reviewer might think about them, however that's the only scenario where I'd be concerned in this situation.
And if all of the above criteria are met, then those links would be helpful to readers of those sites, and thus have a chance of bringing actual human users to your site. Which makes them valuable for many reasons, one of which is SEO.
-
RE: New Website Old Domain - Still Poor Rankings after 1 Year - Tagging & Content the culprit?
SEO has become much more complex over the years, especially given how aggressive Google has gotten.
Unfortunately, it MAY be at least PARTLY the case that the bad links were weakening the overall trust of the site in a way that, until the next Penguin update, keeps you from seeing value from that clean-up work. And even then, if other on-site issues exist, or if truly high quality and highly relevant off-site link and citation trust doesn't exist on a large enough scale, you may still be stuck in the weeds.
I poked around and here's my very initial take:
1. Critical page processing inefficiency issues. Even though my one-time quick-check speed test showed your home page loading rapidly, Google's Page Speed Insights tool came back with your home page scoring a dismal 43 out of 100 points for desktop users, and 53 out of 100 points for mobile users. A one-time, actual time-based speed test is not enough to trust for speed considerations. Scores below 85 in GPSI are a big red flag that you may very well have intermittent speed problems. And speed problems are a proven Panda contributing factor.
So I ran a SECOND speed test with a different tool, and in THAT test, your home page took 29 seconds to process in a DSL emulator. Any time a page takes 20 or more seconds, that is an absolute ranking killer, confirmed by Matt Cutts.
The fact that the home page alone adds up to over 5 megabytes of content, resources and files combined is only one of potentially several reasons why that is a very bad problem.
2. You don't have a traditional "services" silo (funnel) in your main navigation. You offer services, and yet that information is buried on pages that are not dedicated to any specific service type, such as "Boston Wedding Band" or "Boston cover band". So even though your page titles on main nav pages use those words, the pages themselves are not refined enough in focus for those phrases - they're broader in content focus.
3. Your blog posts are fully included in the main blog index page view, so that causes duplication of content between that page and the actual individual post pages.
4. You have the business address in your page footers, but that info is not wrapped in Schema.org markup for local business info (see the sketch after this list for what that markup can look like). Schema is now critical as one part of overall SEO - this was confirmed just this week by Duane Forrester from Bing during Pubcon, when he said "you need to use Schema, you do NOT want us having to figure it out".
5. Have you checked your local listings consistency? Moz Local is very good at that. It's yet one more piece of the puzzle.
6. Regarding the old content - generally speaking, yes, very old, thin or low value content is also another consideration from a Panda perspective. Does it make sense to just kill those entirely? Maybe. Maybe not. Maybe there's a way to salvage those - through consolidation and 301 redirects, perhaps. It's not a simple, absolute process to just kill them off without understanding the complete picture.
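Regarding point 4 above: as a minimal sketch (with placeholder business details, not your actual NAP data), the footer markup could be generated as a JSON-LD block along these lines:

```python
# Minimal sketch: build a Schema.org LocalBusiness JSON-LD block for the footer.
# All business details below are placeholders - swap in the real NAP data.
import json

local_business = {
    "@context": "https://schema.org",
    "@type": "LocalBusiness",
    "name": "Example Boston Wedding Band",
    "url": "https://www.example.com/",
    "telephone": "+1-617-555-0100",
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "123 Example Street",
        "addressLocality": "Boston",
        "addressRegion": "MA",
        "postalCode": "02101",
        "addressCountry": "US",
    },
}

# Paste the printed <script> element into the site-wide footer template.
print('<script type="application/ld+json">')
print(json.dumps(local_business, indent=2))
print("</script>")
```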
SO...
I've only scratched the surface here, so while your specific initial question may be a factor, you have many other critical flaws in the site specific to the main pages of the site itself, and those are the most important pages.
As painful as it is to have been burned/disappointed by past SEO "professional" services, and even though you may be able to muddle through getting back on track on your own, I'm very happy to see you are reaching out here at Moz. It's a great community, and several people are very willing to help where and when we can, right here.
-
RE: Google index new data from my website page
If you have your sitemap xml file(s) set up properly, you can resubmit them each time you update that specific content. If the site is very large, I would suggest having a separate sitemap file just for those review pages within the site, and resubmit that one specifically. That can help motivate Google to recrawl that content sooner.
Also, do you have "last-modified" meta tags set up? That can help as well.
Depending on how high the quality of the content is, it can also help to send other signals:
- Update the home page with a link to the newly updated review right in the upper portion of the home page's main content area.
- Consider a quality, not over-optimized, press release you distribute through a trustworthy release site - where you issue a press release describing the full review and only linking ONCE in the body of the release, directly to that review.
- Tweet a link to the review page on the day you post the review as well. Now that Google is integrating Twitter more, that can further help visibility.
-
RE: Is using outbrains legal in Googles eyes?
It's not necessarily a matter of whether Outbrain is "legal" according to Google as a single consideration.
If the code is implemented in a way that doesn't redirect, and if that linking is not nofollowed, then that is in violation of Google policies. That shouldn't happen though.
Where the problem becomes more complex is in how Google's algorithms might process a site that uses Outbrain or Taboola or other similar services, and where the end result is that site's ranking signals decline.
Several scenarios exist that can cause this.
1. 3rd party "hey, here's a bunch of links to other places" widgets can often add heavy page processing delays - especially when there's code bloat, or when at the code level, several server calls go out to that 3rd party server network (and often to multiple different servers in that network), and where bottlenecks can come up over the web eco-system.
2. 3rd party widgets of this type can make it that much more difficult for search algorithms to separate out the on-site content (both visible and within code that isn't seen) from 3rd party, irrelevant, and often absolutely garbage-quality content contained in those widgets. This doesn't always happen, yet it can - and sometimes does cause topical focus confusion, leading to misunderstood topical dilution.
3. Users often click on 3rd party widget links of this type, yet many other users hate it - find it insulting, and downright obnoxious when the quality of those links, and the images they stick in the user's face are grotesque or near-porn in quality. That can sometimes then impact overall User Experience and weaken site quality and trust signals.
It's Outbrain and Taboola who are among the leading causes of ad-blocking now being a major problem for publishers and revenue. The lowest quality ads, especially those disguised as "related content" get geeks and nerds and intellectual site visitors boiling mad. In some ways they aren't as obnoxious as auto-play video ads, or fly-over ads that block reading, yet in quality terms, they are much worse. If the advertising industry doesn't clean up its act with quality, and if publishers don't do the same thing, the battle is only going to grow.
-
RE: Is there a way to map your on-page SEO changes with the organic growth?
I'm only going to add to all of these great responses by saying this:
1. Even if you make a change today, it does NOT mean you will be able to know EXACTLY when that change is acknowledged by Google. This is especially true on larger sites. It can take days, weeks, even months for Google to properly recrawl the entire site (even when they crawl every day, some of those URIs were just crawled the day before or three days ago, while only a portion of today's crawl will be other, not as recently crawled URIs). And then it can take weeks for all of Google's algorithms to catch up. Along the way, those algorithms may even evaluate only a PARTIAL understanding of the change (while waiting for Googlebot to get to all the other pages).
2. One additional suggestion is to look at in-page analytics within Google Analytics, or a 3rd party click tracking tool to get a better idea of whether people are even clicking on a given link on-page. Just be careful in setting up 3rd party click tracking - do it poorly, and you can cause massive duplicate URL problems. And in-page analytics in GA often aggregates all clicks on all of the individual links on a single page where several point to one common destination URI.
-
RE: Our web site lost ranking on google a couple of years ago. We have done lots of work on it but still can not improve our search ranking. Can anyone give us some advice
"On the category page - we are wondering whether we should also remove the right side menus ?"
Do you mean the left side menus?
If so, I can give you a simple answer and I can give you a more complex answer.
The simple answer is "if you link to categories and sub-categories that are not directly related to the category you are on at that time, it is at least somewhat of a distraction and dilution issue".
The more complex answer is "it depends, and without a full audit, I can't answer that because there are many other factors to consider, some of which are purely User Experience, some are SEO and User Experience, some are crawl allocation related, and some are pure technical considerations".
-
RE: Our web site lost ranking on google a couple of years ago. We have done lots of work on it but still can not improve our search ranking. Can anyone give us some advice
I am glad to hear you will work through the issues. Be aware that there is no guarantee that these things alone will do the job, however each is an important step in the correct direction.
-
RE: Panda penalty removal advice
I'm also curious to know whether you've monitored Bing/Yahoo value over the course of your work. While it's rarely anywhere near Google's potential volume, I've seen good value gained from those as clients have implemented recommendations, even when Panda was a prime issue (and the subsequent panda refresh was a problem).
Overall it does sound like you're on the right track though.
-
RE: SEO and dynamic content
Yeah, their tests were a big help for my own confidence level. Just understand that it's still wise to test as soon after live-launch as possible. I hate nasty surprises that come from Google's systems...
-
RE: SEO and dynamic content
Ah yes - that's a challenge. From experience, I know that Google is "mostly" good at seeing and factoring in dynamically generated content when a page is coded to ensure the dynamic content is inserted into the DOM (Document Object Model). There's even a good article about this concept over at SearchEngineLand - written by Adam Audette - someone I respect as an advanced expert in the industry.
However, even though that's the case in my experience and based on what Adam's team found, the only ultimate test I personally trust really is Google's Fetch and Render system.
-
RE: SEO and dynamic content
Have you run Google's Fetch and Render tool to test and see what Google actually sees through the render process? That's a quick way to often determine what's happening.
-
RE: Panda penalty removal advice
Regarding 404/301 issues: the numbers I gave were for a small partial crawl of a hundred URLs. So a full Screaming Frog crawl would help to determine if it's worse. Even if it's not, think of the concept where a site might have a dozen core problems, and twenty problems that by themselves might seem insignificant. At a certain point, something becomes the straw that breaks the camel's back.
Regarding content - how many of the courses offered are actually up against competitors that have entire sections devoted to the topic a single course page covers on your site? How many are up against entire sites devoted to that topic? Understanding content depth requires understanding the scale of real and perceived competition. And if it's a course page, it may not be a "main" landing page, yet it's important in its own right.
Regarding panda timing - the site took the big hit three years ago. Waiting for, and hoping that the next update is the one that will magically reflect whatever you've done to that point isn't, in my experience, a wise perspective.
While it's true that Google locks a data set to then be applied to a specific algorithmic update, not taking action at a high enough level, and with enough consistency, is gambling. Since true best-practices marketing as a whole needs to be ongoing, efforts to strengthen on-site signals and signal relationships also need to be ongoing. Because even if Panda weren't a factor, the competitive landscape is ever marching forward.
-
RE: Our web site lost ranking on google a couple of years ago. We have done lots of work on it but still can not improve our search ranking. Can anyone give us some advice
I'm going to go out on a limb here and say you have problems that you either aren't aware of, or don't realize the impact of.
1. The bottom of the main content area of your home page, and your footer, are spam central for keyword stuffing. The range of phrases you use that are obviously intended to boost rankings for specific phrase variations is off-the-charts severe.
2. The scale of topical dilution on the site is also a concern. Individual product descriptions are almost non-existent. Consider the combined impact of the left side "link to all the things" content, the right side "related" and "recently viewed" widgets, the lower page "Also viewed" content, and (again) that scary keyword-stuffed footer: the "uniqueness" of any such page is seriously questionable, and you're forcing Google to struggle to trust its uniqueness.
3. The sheer number of "categories", plus brand funnels, plus costume "Ideas" funnels, just reinforces to me that it looks like the site is trying to rank for "all the variations" (which that footer screams is actually what you are trying to do).
4. Your "Search" subdomain is typical of the now-proven-to-have-failed "enhanced automated content" approach that was supposedly a way to get more search visibility a few years back. All it does is create more duplicate content mess, since it just links back to the main site, and it is not actual, best-practices SEO.
Those are only the first few things I found in a quick check. There's almost certainly a full range of issues you will likely need to get cleaned up if you really hope to ever see a strong, sustainable organic presence.
-
RE: Panda penalty removal advice
Damon,
To start, let's be clear - Panda isn't a "penalty" - it's an algorithmic adjustment based on quality, uniqueness, relevance and trust signals.
Having audited many sites hit by the range of Panda updates, I have a pretty good understanding of what it usually takes. So, having said that, I took a quick look at the site. While Andy may be correct that you may only need to wait and hope the next or some future Panda update acknowledges the changes you've made to this point, that very well may not be enough.
First obvious problem: your site's response times are toxic. A crawl using Screaming Frog shows many of the pages have a response time of between 3 and 7 seconds. That's a major red flag - response time is the amount of time it takes just to get a response from each URL. If it takes more than 2 seconds, that's typically an indicator that crawl efficiency is very weak. Crawl efficiency is a cornerstone of Panda because it reflects what is almost certainly a larger overall page processing time problem. Since Google sets a standard "ideal" page processing time of between one and three seconds, if it takes more than that just to ping the URL, the total processing time is likely going to be significantly worse.
While it's not required to always get a one to three second total process time, if too many pages are too slow across enough connection types for your visitors, that will definitely harm your site from a quality perspective.
And if too many pages have severely slow response times, Google will often abandon site crawl, which is another problem.
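If you want to put your own numbers on that before running a full crawl, here's a minimal sketch (assuming the Python `requests` library; the URL list is a placeholder) that approximates server response time for each URL and flags anything over the 2 second mark:

```python
# Minimal sketch: flag URLs whose server response time exceeds ~2 seconds.
# Assumes the third-party "requests" library; the URL list is a placeholder.
import requests

urls = [
    "https://www.example.com/",
    "https://www.example.com/courses/",
    "https://www.example.com/about/",
]

for url in urls:
    try:
        # stream=True returns once headers arrive, so r.elapsed approximates
        # the server response time rather than the full download time.
        with requests.get(url, stream=True, timeout=30) as r:
            seconds = r.elapsed.total_seconds()
    except requests.RequestException as exc:
        print(f"ERROR  {url}: {exc}")
        continue
    flag = "SLOW" if seconds > 2 else "ok"
    print(f"{flag:5} {url}: {seconds:.2f}s")
```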
Next, I checked Google Page Speed Insights. Your home page scored a dismal 68 out of a possible 100 points for desktop users (85 is generally considered a good passing grade). That reinforces my concern about crawl inefficiency and poor page processing. It was even worse for mobile - scoring only 53 out of 100 points. In my second test, I got 63/100 for desktop and 49 for mobile. The different results between the two tests are due to the fact that speeds vary - worse at some times than at others.
Just one of the issues GPSI lists is server response time (which confirms the very poor response times I saw in Screaming Frog).
Next, a partial crawl using Screaming Frog found 20 URLs that returned a 404 (not found) status, which means you have internal links on your site pointing to dead ends - another quality hit. And SF found 25 internal URLs that redirect via 301 - further reinforcing crawl inefficiency. Since this was a partial crawl, those problems could be even bigger scaled across the whole site.
Then I poked around the site itself. http://www.workingvoices.com/courses/presentation-skills-training/keynote-speaker/ is indexed in Google, as it's one of your courses. That page is possibly problematic due to the fact that there is hardly any content on it overall. So while you may think you've dealt with thin content already, I don't think you fully grasp the need for strong, robust depth of content specific to each topic you consider important.
That's nowhere near a full audit, however the above are all examples of issues that absolutely relate to working toward a highly trusted site from Google's algorithmic perspective.
-
RE: In the U.S., how can I stop the European version of my site from outranking the U.S. version?
Gianluca
Thanks for jumping in on this one. So if I'm reading your answer correctly, the bottom line here is that there really should be one site per country, regardless of language spoken, correct?
-
RE: In the U.S., how can I stop the European version of my site from outranking the U.S. version?
Yeah inheriting previous work can be a challenge.
Since you are already planning on rolling out content in different languages, you will have not only the opportunity to set the hreflang tags for each, but also it will be important to ensure all of the content within each section is actually in that section's primary language for consistency. That too will help address the confusion Google has.
-
RE: Improvement in Page Speed worth Compromise on HTML Validation?
Yeah, sequence of load is also important when it's time to go granular to find the true opportunities. The up-front evaluation time spent identifying issues can often result in faster, easier, more template-driven ways to speed up everything on a larger scale with less effort.
That doesn't mean it's okay to ignore other bottlenecks. Just that the more clarity of understanding, the more likely real, sustainable success can be achieved.
-
RE: Improvement in Page Speed worth Compromise on HTML Validation?
Kingalan1
I'm not a programmer by trade - the way I begin even considering these things is by running tests on various tool platforms.
For example, put a page you think is slow into URIValet.com - test as Googlebot. The resulting report has a block of information in it regarding the total size of files processed. It breaks that data down by file type. Look at the CSS/JS lines - if they are more than 50k to 100k total for either CSS or JS, there is almost certainly inefficiency in there, and likely unnecessary bloat.
Go to WebPageTest.org and do the same - put in the URL you want to check - choose a server location and DSL (which gives a fair mid-range speed evaluation), and Chrome as the browser emulator. The resulting report gives you a lot of information, however the one page in that report that may be most helpful in this situation is the "Details" report - if you go there, and scroll down, you'll get to the section that lists, line by line, every single file, script, image and asset processed for that page, and all of the data on speed of processing each step of the way (such as First Byte Time, DNS lookup, SSL lookup, and more). Those can reveal several individual bottleneck points.
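For a rough, scriptable version of the CSS/JS weight check mentioned above, here's a minimal sketch (assuming the Python `requests` library; the page URL is a placeholder, and the regex extraction is only an approximation of what those tools do):

```python
# Minimal sketch: total up CSS and JS weight for a page and compare against
# the rough 50-100 KB guideline above. Assumes the third-party "requests"
# library; regex-based extraction is a rough approximation, not a real parser.
import re
import requests
from urllib.parse import urljoin

page_url = "https://www.example.com/"  # placeholder
html = requests.get(page_url, timeout=30).text

css_refs = re.findall(r'<link[^>]+rel=["\']stylesheet["\'][^>]*href=["\']([^"\']+)', html, re.I)
js_refs = re.findall(r'<script[^>]+src=["\']([^"\']+)', html, re.I)

def total_bytes(refs):
    # Download each referenced file and sum the byte sizes.
    return sum(len(requests.get(urljoin(page_url, ref), timeout=30).content) for ref in refs)

css_kb = total_bytes(css_refs) / 1024
js_kb = total_bytes(js_refs) / 1024
print(f"CSS: {css_kb:.0f} KB across {len(css_refs)} files")
print(f"JS:  {js_kb:.0f} KB across {len(js_refs)} files")
print("Likely bloat worth investigating" if css_kb > 100 or js_kb > 100 else "Within the rough guideline")
```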
-
RE: Home Page Deindexed Only at Google after Recovering from Hack Attack
The DomainTools domain report shows no more info that could be helpful, leaving me at a complete loss as to what else to check.
-
RE: Home Page Deindexed Only at Google after Recovering from Hack Attack
More info.
Because Nitin was able to run a ping and traceroute without problem, I went to DomainTools.com - the world's leading resource for forensic digital investigative research. I use it whenever I am doing investigations for the expert witness work I do.
When I ran the domain there, it had a screen-capture of the home page from June. So I submitted a refresh, and it came back as not being able to provide a screen-shot of the home page.
While not a smoking gun, it further clouds my trust in regard to whether the domain is actually functioning properly in the hosting environment, as I originally suspected it might not be.
I will run a deeper test to see if I can get more information, however I wanted to post this update because I believe it's relevant.
-
RE: Home Page Deindexed Only at Google after Recovering from Hack Attack
Nitin
Thanks for doing that - Now I'm stumped - I've never had Pingdom fail before with both ping and traceroute. And I now wonder if it's a non-issue, or part of the confused mess that Ankit referenced somehow.
-
RE: Should I use strong tags or h1/h2 tags for article titles on my homepage
Having a clearer understanding of the concept of multiple "titles" on a single page (an H1 headline is the in-content "title" for that page), David is correct - while HTML5 allows multiple H1 tags on a single page, doing so is bad because the H1 communicates "this is the primary topical focus of this unique page".
Because of that, if you have headlines within the content area for content that lives elsewhere on the site, and link to that other content, those are absolutely best served with H2 headline tags - or, if not, then at the very least "strong" tags - if the topic of each target page is significantly different from the primary topic of the page they're all listed on.
-
RE: Improvement in Page Speed worth Compromise on HTML Validation?
1. Taking shortcuts that are not sound, sustainable methods in order to gain value somewhere else is almost certainly going to become a problem when you least expect it at some future date, and this is a great example. Moving CSS and/or JS below their proper location is a recipe for complete page display failure on any number of devices that may or may not currently exist.
Have you tested your pages with Google's Fetch and Render to ensure they properly load, or whether they may get a "partial" result? If they get a "partial" result, that's a red flag warning that you ignore at your own peril.
2. You haven't provided numbers - is the page speed improvement a case of going from 20 seconds to down to 5 seconds? Or is it going from 8 seconds to 6 seconds? Or what? This matters when evaluating what to care about and expend resources on.
3. If just moving those to their proper place in the page header section is causing speeds to slow down dramatically, you have bigger problems. The first one that comes to mind is "why do those scripts / CSS files cause so much slowdown?" It's likely they're bloated and need to be reduced in size, or they're housed on a pathetic cloud server that is itself doing you more harm than good.
-
RE: In the U.S., how can I stop the European version of my site from outranking the U.S. version?
Have you set the different hreflang tags appropriately across your content?
You said "US" and "European" - so does that mean you have just one set of content for all of Europe? If so, that can be more difficult to deal with, however if you set all of the US pages with an hreflang of "en-us" and the European pages with an hreflang of en-gb, you can at least help Google understand "this set is for the U.S. and this set is not".
What I always recommend if you're not targeting individual countries with your content (the "Europe" reference you made says you are not, for that content) is to at the very least split the content out onto two different domains. Have a .com domain for US content, and a separate .eu or .co.uk or .de or whatever other domain for your European content. That, combined with hreflang tagging, is really more helpful in communicating what should show up higher in which country's search results.
You'll also need to accumulate inbound geo-relevant links to point to the appropriate content set to help reinforce this.
And if you split out domains, you can set country targeting more readily in Google Search Console.
For more info:
-
RE: Home Page Deindexed Only at Google after Recovering from Hack Attack
UPDATE TO MY ORIGINAL COMMENT
I initially found a problem doing a ping and traceroute test using Pingdom.com - both returned an "invalid host name" error, something I have not seen previously for both ping and traceroute simultaneously.
Nitin (see his comment below) did a similar test locally and found both to be okay. Though he has other thoughts.
I just wanted to clarify here now, that my original finding may not be a key to this issue, though I want to understand why my test came back that way...
-
RE: What's the best possible URL structure for a local search engine?
In regard to shorter URLs:
The goal is to find a proper balance for your needs. You want to group things into sub-groups based on proper hierarchy, however you also don't want to go too deep if you don't have enough pages/individual listings deep down the chain.
So the Moz post you point to refers to that - at a certain point, having too many layers can be a problem. However, there is no one single correct answer.
The most important thing to be aware of and consider is your own research and evaluation process for your situation in your market.
However, as far as what you found most people search for, be aware that with location based search, many people don't actually type in a location when they are doing a search. Except Google DOES factor in the location when deciding what to present in results. So the location matters even though people don't always include it themselves.
The issue is not to become completely lost in making a decision either though - consider all the factors, make a business decision to move forward with what you come up with, and be consistent in applying that plan across the board.
What I mean in regard to URLs and Breadcrumbs:
If the URL is www.askme.com/delhi/saket/pizza/pizza-hut/ the breadcrumb should be:
Home > Delhi > Saket > Pizza > Pizza Hut
If the URL is www.askme.com/pizza-huts/saket-delhi/ the breadcrumb should be
Home > Pizza Hut > Saket-Delhi
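To show what I mean about keeping the two in sync, here's a minimal sketch that derives the breadcrumb trail directly from the URL path, so the two sequences can never drift apart (the slug-to-label handling is simplified):

```python
# Minimal sketch: derive the breadcrumb trail from the URL path so the two
# always share the same sequence. The slug-to-label handling is simplified.
from urllib.parse import urlparse

def breadcrumb_from_url(url: str) -> str:
    segments = [s for s in urlparse(url).path.split("/") if s]
    labels = [segment.replace("-", " ").title() for segment in segments]
    return " > ".join(["Home"] + labels)

print(breadcrumb_from_url("http://www.askme.com/delhi/saket/pizza/pizza-hut/"))
# Home > Delhi > Saket > Pizza > Pizza Hut
```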
-
RE: What's the best possible URL structure for a local search engine?
Proximity to root is not a valid best practice, especially in this instance.
Here's why:
More people search based on geo-location than actual business name when looking for location based businesses. So by putting "Pizza Hut" first, that contradicts this notion. It implies "more people look for Pizza Hut than the number of people looking for all the different businesses in this geo-location".
Also, by using the URL you suggest, that's blatant over-optimization - attempting to stuff exact match keywords into the URL. In reality, people use a very wide range of keyword variations, so that's another conflict that harms your overall focus needs.
All of the individual factors need to reinforce each other as much as is reasonable for human readability. So the URL and the breadcrumb should both follow the same sequence. If one has one sequence, and the other has a different sequence, that confuses search algorithms.
-
RE: What's the best possible URL structure for a local search engine?
Local pack exists, yet is far from complete or consistently helpful. Business directories thrive even in an age of local packs. It's all about finding the best way to provide value, and the internet is large enough that many players can play in the game.
-
RE: What's the best possible URL structure for a local search engine?
Business listing directory environments have a big challenge when it comes to URL structure / information architecture and content organization because:
- Many businesses are searched for based on geo-location
- Many of those require hyper-local referencing, while many others can be "in the general vicinity"
- Many other businesses are not as relevant to geo-location
So what is a site to do?
The best path is to recognize that as mobile becomes more and more critical to searcher needs, hyper-local optimization becomes more critical. It becomes the most important focus for SEO.
As a result, URL structure needs to reflect hyper-local first and foremost. So:
- www.askme.com/delhi/
- www.askme.com/delhi/saket/
- www.askme.com/delhi/saket/pizza/
- www.askme.com/delhi/saket/pizza/pizza-hut/
This way, if someone searches for "Pizza Hut Delhi", all of the Delhi Pizza Huts will show up, regardless of neighborhood, while anyone searching for "Pizza Hut Saket" will get more micro-locally relevant results.
And for those businesses that serve a wider geo-area, even though they too will be assigned a hyper-local final destination page, they will still be related to their broader geo-area as well. So someone searching "plumbers in Delhi" will get the right results, and then they can choose any of the plumbers in Delhi regardless of what neighborhood they are in.
Note how I removed /search/ from the URL structure as well. It's an irrelevant level.
-
RE: Is there any examples of comparing a competitors product to your own .
I like the examples Kristina offered.
It's really important to understand that integrating competitor product information into your site needs to be done artfully as much as scientifically.
Get the message wrong, and you expose the company to DMCA takedown vulnerability, as well as potential lawsuits for business-damaging misinformation, or worse, having governmental agencies come after you.
In other words, there are several "right" ways to go about it, and several "wrong" ways.
Please be sure a professional is involved in this process, whatever the "method" or "style" of your approach might be - someone who understands the legal issues involved enough to avoid those vulnerabilities.
-
RE: Page Authority or Domain Authority? Which one is more important when having an external link pointing to your page?
I'm not going to go into the DA vs. PA issue. What I am going to do is focus on the issue Umar brings up. That's much more of a concern than the "science" of DA vs. PA, since those are really only a tiny consideration across a much broader set of ranking, authority, quality, relevance and trust factors for links from one site to another - and not something you would even be able to figure out precisely if you thought it mattered.
The risk of cross-linking the way you described on the other hand, is something that could risk both sites being penalized. How much is the deal worth when considering what the harm would be if either, or both sites were penalized?
Gamble if you want. Just recognize the price paid for gambling that gets snagged in Google's spam trap.
-
RE: Problems with analytics, conversion data and assisted conversions
Jasmine is correct in that GA is not a proper data source for 100% accurate information.
Unfortunately there are many reasons for this.
The free version of GA often bases its data on a sampling of visits. If you go to the Conversions/ECommerce/Overview report, in the upper right corner you can see whether it's a sample and, if so, what percentage was sampled.
Another issue is that if the Google Analytics code didn't fire properly during a session or part of a session, that information will not be tracked properly, either in part or in its entirety. Sometimes that's because site visitors have browser types or browser plugins that block scripts, and other times it's because of a breakdown in data transmitted between the visitor's browser, your site and Google's analytics servers.
She's also spot on with recommending that you dig into the Multi-Channel Funnels reports.
Bottom line - it's important to understand that only your actual internal site server system can provide you with accurate sales data, though that might lack certain deeper insights regarding origination and demographics. Any other analytics program can help provide insights, however it should only be used for trend analysis purposes.
-
RE: How authentic is a dynamic footer from bots' perspective?
Nitin
You're dealing with multiple considerations and multiple issues in this setup.
First, it's a matter of link distribution. When you link to x pages from page 1, this informs search engines "we think these are important destination pages". If you change those links every day, or on every refresh, and if crawlers also encounter those changes, it's going to strain that communication.
This is something that happens naturally on news sites - news changes on a regular basis. So it's not completely invalid and alien to search algorithms to see or deal with. And thus it's not likely their systems would consider this black hat.
The scale and frequency of the changes is more of a concern because of that constantly changing link value distribution issue.
Either X cities are really "top" cities, or they are not.
Next, that link value distribution is further weakened by the volume of links. 25 links per section, three sections - that's 75 links. Add those to the links at the top of the page, the "scrolling" links in the main content area of the home page, and the actual "footer" links (black background), and it dilutes link equity even further. (Think "spreading it too thin" with too many links.)
On category pages it's "only" 50 links in two sub-footer sections. Yet the total number of links even on a category page is a concern.
And on category pages, all those links dilute the primary focus of any main category page. If a category page is "Cell Phone Accessories in Bangalore", then all of those links in the "Top Cities" section dilute the location. All the links in the "Trending Searches" section dilute the non-geo focus.
What we end up with here then is an attempt to "link to all the things". This is never a best practice strategy.
Best practice strategies require a refined experience across the board. Consistency of signals, combined with not over-straining link equity distribution, and combined with refined, non-diluted topical focus are the best path to the most success long-term.
So, in the example I gave initially of news sites changing the actual links shown when new news comes along: the best news sites do that while not constantly changing the primary categories featured, and without diluting the overwhelming majority of links on a single category page with lots of links to other categories. Consistency is critical.
SO - where any one or a handful of these issues might not themselves be a critical-flaw-scale problem, the cumulative negative impact harms the site's ability to communicate a consistent quality message.
The combined problem here then needs to be recognized as exponentially more problematic because of the scale of what you are doing across the entire site.
-
RE: How have your SEO Audits evolved over time?
Moosa thank you for linking to me about this stuff!
-
RE: How have your SEO Audits evolved over time?
Personally, my audits have not changed dramatically in all the years I've been performing them as far as the overwhelming majority of signals / factors to look at, because sustainable SEO has always been consistent.
User experience has always been the core of what real, sustainable SEO is all about. That has not ever changed. Real, quality and relevant links have always been part of true sustainable SEO as well because that's an off-site consideration to the comprehensive understanding of UX across the web.
Where they have changed is in regard to other realities:
1. As Google has gotten more and more honest about the unrealistic "let us figure it all out" message, they've consistently come out with new ranking signal points including, but in no way limited to Canonical tags, Breadcrumbs, Schema, URL Parameters, and a host of other similar "aids" they either invented, or got on board with to help their algorithms understand what's really going on.
2. That can be extended to the attempt search engines have made to better integrate real user experience signals including page processing speed, crawl efficiency (search bots are users, so user experience for bots matters more now), and also where they've worked to better integrate social signals over the years, all of these have needed to be integrated into the audit process. Now we have mobile and what real UX means to that platform, so of course, we need to look at what the full range of signals are for mobile UX.
3. As everything has gotten more complex across multiple algorithms run by two different teams at Google, their own decisions have caused new problems. Robots.txt files used to be a hard directive. Now, they're only a hint. Canonical tags are supposed to be a directive (hence the word "canonical") yet now they often ignore those. Things like that have thus become an integral part of my audit work in that cross-signal relationships are now more critical than ever.
4. I have always done my best to reevaluate how I present my findings and recommendations in my audits, and routinely have refined that as well - a stronger, clearer and more education-centric audit doc results each time.
Another crucial concept I try to help clients become educated about is that Google is, for the foreseeable future, going to work more and more at refining and building on all of these concepts. Because of that, we as auditors need to realize that the writing-on-the-wall type issues are almost certainly going to become ranking factors in the future, and if we can anticipate those well enough, our audits today need to address them so our clients get ahead of the Google roller coaster.
At the moment, one example of this is page speed specific to mobile - I've been advocating that to audit clients for years, and then not long ago Google Page Speed Insights came out and included speed data for mobile. Then, this year, Google's people stated that they will be upgrading their new mobile testing tool in the not-too-distant future to show exactly how page speed is, in fact, now a direct ranking factor.
Another one that's about to pop is interstitial pop-ups. The buzz is heavily growing around user complaints about them. Google even just this week came out with a post about the harm interstitial popups did in their own testing on their own properties. And it's been hinted by their people that this will be an upcoming ranking factor.
For someone who has been laser focused on UX, this was an obvious issue to have been advocating against for at least the past year as site owners have gotten more and more aggressive in their use, harming UX.
So audits always have, and always will need to also integrate forward-thinking recommendations.
Disclaimer: I do around 80 audits a year - it's my primary business.
-
RE: GA Event: to use this feature visit: EVENT-TRACKING.COM
I've been seeing many others in the industry complain lately about referrer spam getting out of control, and I found it in my own site's analytics today. A quick search brought me here. So thank you Analia for posting the question, and thank you Envoy for the link on how to deal with it, since .htaccess won't work in this case!
-
RE: Does subdomain hurt SEO on main site
It happens all too often when site owners take a path they or someone advising them thinks makes sense at the time. Until it goes bad. Then it just becomes a beast...
-
RE: Does subdomain hurt SEO on main site
Without seeing the site and example subdomains I can only speak from experience with other sites that have similar problems.
While a subdomain is "technically" a separate site from Google's perspective, one factor that can change that is interlinking - how interwoven are the subdomains to the main site from a linking perspective? If interlinking is heavy, this clouds the "stand-alone" site notion.
However, even then, what is the status of analytics and webmaster tools account assignments? Are all the subdomains tracking with the same account IDs as the main site? If so, it would be important to split them out.
Ultimately, the PROPER, best practice recommendation regardless of any of that, would be to have those subdomains migrated to an entirely different root domain. The topical focus is radically different.
The bottom line factor is that subdomains DO impact a main domain because they are subordinate to that root domain.
-
RE: Two websites (Domains) with same content for more than 4 years which one to choose now?
Teresita,
The first thing I looked at was the approximate volume of content indexed in Google for each domain.
site:http://radiocolombiainternacional.com/web/ shows 403 results.
site:http://radiocolombiainternacional.com/ shows 470 results.
site:http://radiocolombia.com.co/ shows 127 results.
While the "site:domain" check only offers an approximation, its a starting point to evaluate how Google sees each site.
An important question to ask then is "What is the real, actual count of pages for each domain?"
If it's on the higher end (close to 400 pages) for both radiocolombiainternacional and radiocolombia then Google APPEARS to be giving much more trust to content on radiocolombiainternacional.
If that is true, that's what I would go with - it's an indicator that radiocolombiainternacional has a stronger foundation from Google's perspective.
If you have site analytics available for both, that's another path to dive into to get even more information and data that may or may not support that concept though. Just because a site has more pages indexed in Google doesn't itself mean that site is getting more quality traffic.
Another consideration is inbound "follow" links. What is the total count pointing to each?
Using OpenSiteExplorer, radiocolombia.com.co has more inbound links, and what appears to be a stronger internal linking structure as well. So that conflicts with the Google page indexed count. Except neither site has very many inbound links at all (a serious challenge that needs to be addressed whichever site you end up with).
Because of these factors, I'd need to see more information about each site regarding inbound links, actual page count, etc.
Since you say they've got the exact same content, I assume they're on the same programming framework, so the technical factor is less of a comparison issue. When I visit the home page for each, they do look identical, which reinforces the notion that technical factors are not the differentiator here.
-
RE: Am I better off buying a .com with a stopword or a .net / .org without?
As I've communicated in other Q&A posts here, and elsewhere, it comes down to the cumulative impact of the entire SEO factor evaluation process.
Any single factor is only as impactful as the entire picture paints. So if a site uses a .net or .org or .whatever domain, and/or the domain root is exact match, partial match or even gobbledegook, as long as the entire SEO picture is strong enough across other factors, the domain name and TLD can be whatever you want, within reason.
If it's an exact match root, regardless of TLD type, it will more likely be evaluated with close scrutiny to determine if it's a spammy over-optimized site, given that Google has a specific Exact Match Domain algorithmic factor for spam. Yet plenty of sites have EMDs and do perfectly well in search because enough other signals are strong and authoritative.
Having said that, .org was originally meant for non-profits, so it's not ideally a best practice to use .org for a commercial business web site. And .net can be a challenge if there's already a .com with that same domain name, due to user confusion and marketing challenges. So it's ideal to have the .com in those cases. Yet there's a similar issue when you're using a non-exact match domain name and an exact match domain name exists where the exact match version is strong in its overall SEO.
Bottom line - I always recommend that site owners not become completely lost in worrying as much about the domain naming factor because it can become paralyzing - and instead, to go with what you're comfortable with enough to move on. Then focus all your energy on making sure the overall SEO signal strength is strong regarding quality, uniqueness, authority, relevance and trust.