Oh God. Slightly worse? That's nearly Wix level of bad
Posts made by effectdigital
-
RE: Is BigCommerce a good CMS for Improving Search Visibility for our E-Commerce Business?
-
RE: Software "card" carousel results
The next trick will be trying to ascertain if these results are a categorised search entity (e.g: news results, shopping results, image results, movie listings carousel) or whether Google has just noticed that certain search entities are thematically related and is front-ending them
When knowledge graph pulls through a slot of information or you get quite a generic looking insertion of universal search elements, often it's just Google crawling the web and putting 2 and 2 together by itself (without user / webmaster input). For some things though (shopping results, news results etc) there are specific methodologies for getting 'entered' into the results (for example unless you pay Google, you aren't getting in the shopping results period)
Next things I advise you to do:
-
see if you can work out whether it's a pre-determined listing type (or if it's something new)
-
take the top 5 sites linked to by this new carousel. Put them all into this tool: https://search.google.com/structured-data/testing-tool - see if they share any common schema features. If they do, it might be something you can add to your own site
-
-
RE: Is BigCommerce a good CMS for Improving Search Visibility for our E-Commerce Business?
The trick is usually not to migrate from one CMS to another, but instead to combine the best elements of each. For example I think Magento is a pretty big steaming pile from an SEO POV, but the way it handles eCommerce data is very efficient and regimented. So you'd want the back end on Magento and the front-end on WordPress. I imagine that with BigCommerce it would be much the same thing. WP is so established now from an SEO POV that it's hard to beat, so you'd at least want to retain it as your shallow front-end, even if you powered the commerce system(s) a different way
-
RE: Ensuring that Google Display my Meta Descriptions
Delete all the other content on the page
Nah, just kidding. There's really no way to guarantee that, other than writing a Meta Description which Google would prefer to render over the rest of the page's content
-
RE: How do internal search results get indexed by Google?
Actually quite often there are links to pages of search results. Sometimes webmasters link to them when there's no decent, official page available for a series of products which they wish to promote internally (so they just write a query that captures what they want and link to that instead, from CTA buttons and promotional pop-outs and stuff)
Even when that's not the case, users often share search results with each other on forums and stuff like that. Quite often, even when you think there are 'no links' (internally or externally) to a search results page, you can end up being wrong
Sometimes you also have stuff like related search results hidden in the coding of a web-page, which don't 'activate' until a user begins typing (instant search facilities and the like). If coded badly, sometimes even when the user has entered nothing, a cloaked default list of related searches will appear in the source code or modified source code (after scripts have run) and occasionally Google can get caught up there too
Another problem that can occur is certain search results pages accidentally ending up in the XML sitemap, but that's another kettle of fish entirely
Sometimes you can have lateral indexation tags (canonical tags, hreflangs) going rogue too. Sometimes if a page exists in one language but not another, the site is programmed to 'do something clever' to find relevant content. In some cases these tags can be re-pointed to search result URLs to 'mask' the error of non-uniform multilingual deployment. Custom 404 pages can sometimes try to 'be helpful' by attempting to find similar content for end users and, in some cases, end up linking to search results (which means that if Googlebot follows a link to a 404 and lands on the custom 404 URL, it can sometimes enter the /search area of a website)
You'd be surprised at the number of search results URLs which are linked to on the web, internally or externally
Remember: robots.txt doesn't control indexation, it only controls crawl accessibility. If Google believes a URL is popular (link signals) then they may index the URL anyway without ever crawling it (typically showing it with no description). Robots.txt isn't really the type of defence which you can '100% rely upon'
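As a rough sketch of the difference (the /search path and the rules here are just illustrative examples, not something pulled from your site):
User-agent: *
Disallow: /search
... only stops compliant bots from crawling /search URLs; it does not stop those URLs being indexed from link signals alone. To actually keep a URL out of the index you'd instead let it be crawled and serve a robots meta tag in its head:
<meta name="robots" content="noindex">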
-
RE: Software "card" carousel results
You might need to share a screenshot for people to help you!
For example, on your exact linked search results I do not see any card-carousel of products:
Personalised search means that, despite people going to the same exact search results URL - often users see completely different things
-
RE: After 301 redirection non-English keyword points to English language pages
What you need to know is whether it was regional and multilingual at the same time. Many multilingual sites still do use regional redirects for their different site sections
-
RE: How can I avoid duplicate brand name in the title serp?
I also looked! Not only did I look at the source code, I also rendered the whole page (after all script executions) and checked the modified source code. And STILL there was nothing referencing " - LATAM.com"
All I could find was that "LATAM.com" was mentioned 2-3 times in the body text, but not in areas which I'd assume Google would count as synonymous with title-content (e.g: H1)
This could actually be a genuine Google bug
-
RE: Our Domains DA has dropped from 52 to 17
This is an absolutely key question, as Google do not use Moz's DA metric in their ranking algorithm (it is a useful indicator, but a 'shadow metric' only)
-
RE: How to deal with parameter URLs as primary internal links and not canonicals? Weird situation inside...
Unfortunately I think that this setup sounds too complex and archaic to really give any recommendations without seeing an example of each URL type and what you want to happen with it (and why)
I know you're trying your best to explain the situation, but the archaic nature and complexity of what you are explaining mean that, without an actual example - no one is really likely to interpret the question correctly. It's not a bad question, it's not your fault - it's clearly just a complicated situation
You should know that the canonical directive is 'just a directive' and not an order to Google. If Google feels that listing another, different URL is more beneficial for its users then it will do that and ignore you. Even if you use canonical tags successfully, there is NEVER ANY GUARANTEE that the canonical URL will inherit all rankings from the previously ranking URL (so quite often, people shoot themselves in the foot by over-using canonical tags. They get 10% more control but lose 30% rankings, bad trade - think bigger)
It sounds like the architecture of the site is so archaic that in reality, any recommendations will "help the site to lose the least rankings over time until it is replaced", so it's more of a damage limiting exercise until the client decides to be reasonable
-
RE: Our Domains DA has dropped from 52 to 17
Moz's Link Explorer (which succeeds OSE / Open Site Explorer) is still very new. It's still discovering backlinks (for its new MLE index) which other tools have logged long ago. This means that when Moz does discover large chunks of links which it previously didn't know about, DA scores can shift quite radically in a matter of seconds (not because anything has actually changed, rather because Moz's view has changed!)
My suspicion is Moz has just uncovered a load of links pointing to the site which have existed for a long time, yet which Moz Link Explorer (MLE) hadn't previously noticed
I think it's probably these ones:
https://d.pr/i/QsznTV.png (Ahrefs screenshot)
They seemed to mostly appear (or at least, were mostly detected by Ahrefs) between 3rd January 2016 and 10th February 2016
Maybe the issue is something different, but to me it 'sounds like' MLE has uncovered a 'stash' of spammy legacy links which it was previously unaware of. This would correlate with what Joe has found
-
RE: How to deal with filter pages - Shopify
As Joe said canonical is fine. No need to play with all the other tags, leave them alone!
-
RE: What is the reason my ranking down?
It looks as if your (estimated SEO) traffic has actually been declining since January 2017:
https://d.pr/i/QcDriP.png (SEMRush screenshot)
https://d.pr/i/SpwvlF.png (Ahrefs screenshot)
... so this isn't a sudden thing, it looks like your site has been in decline for over 2 years (around 29 months, or 894 days). This usually means that you have a lot of things that are going wrong (not just one thing) and that your investment in good SEO and 10x content has been poor for the last couple of years (which you are probably now feeling pains over)
In late August 2018 you had a gigantic spike in referring pages:
https://d.pr/i/4Fgjjv.png (Ahrefs screenshot)
... I reckon that would have looked SUPER obvious to Google and maybe Penguin kicked off, making your situation even worse. Majestic SEO shows exactly the same thing:
https://d.pr/i/5QbQP3.png (Majestic SEO screenshot)
https://d.pr/i/PDrPg8.png (second screenshot from Majestic)
Moz's "Link Explorer" tool might show similar, but can't navigate back far enough in time (d'oh! Still it's running new tech so I guess that creates a time barrier of data-linking from OSE to MLE)
This is how your site looked in October 2016 (just before the serious drops began in January 2017):
https://web.archive.org/web/20161027015942/http://www.rakhibazaar.com/
This is how your site looks now:
... it looks basically the same! Even the page title hasn't changed, except for swapping "Rakhi 2016" to "Rakhi 2019". The site gives off all the signals of a website which someone once made, never really invested in again or pushed further - and thus which is destined to die in the future
Mobile UX has improved a little bit, but that's basically it. Nothing revolutionary has happened, the competition has caught up with you and your site is becoming old and obsolete with no 'modern' USPs, no modern value-proposition or value-add for end users. 2016 SEO is not fit for 2019 (not even close!)
In this video, Rand talks about creating web-content (which is ANYTHING a user digests digitally, which exists on a web-page - NOT limited to 'just text'): https://www.youtube.com/watch?v=ZLrKFRYGM5M
^ you should really listen to the ways in which SEO has changed over time
What your site lacks is a unique and credible "value proposition" (which is a proven, qualified business-level integration - NOT a simple quick tag you can optimise). Miley from Google talks about that here: https://www.youtube.com/watch?v=6AmRg3p79pM (you only need to watch her talk about "Issue #1", after that the rest of the video is not relevant)
In the future you need to think about how 'good unique content' is not good enough in 2019: https://moz.com/blog/why-good-unique-content-needs-to-die-whiteboard-friday
... and then you can think about creating 10x content: https://moz.com/blog/how-to-create-10x-content-whiteboard-friday - or experiment with something like the "Skyscraper Content Technique": https://www.youtube.com/watch?v=J9h-9BIC1VU
Think about the story of your brand and what you're bringing to your audience. Think about the unique value-add which your site should 'add' to the internet (which doesn't exist now). Think about the value propositions of your business and website and how you can unify those in a digital format
-
RE: Site moved. Unable to index page : Noindex detected in robots meta tag?!
That's hugely likely to have had an impact. No-indexing pages before they were ready was a mistake, but the much bigger mistake was releasing the site early before it was 'ready'. The site should only have been set live and released once ALL pages were ported to the new staging environment
Also, if all pages weren't yet live on the staging environment - how could the person looking at staging / the old site have done all the 301 redirects properly?
When you no-index URLs you kill their SEO authority (dead). Often it never fully recovers and has to be restarted from scratch. In essence, a 301 to a no-indexed URL is moving the SEO authority from the old page into 'nowhere' (cyber oblivion)
The key learning is, don't set a half ready site live and finish development there. WAIT until you are ready, then perform your SEO / architectural / redirect maneuvering
Even if you hadn't no-indexed those new URLs, Google checks to see if the content on the old and new URLs is similar (think string similarity, in machine terms) before 'allowing' the SEO authority from the old URL to flow to the new one. If the content isn't basically the same, Google expects the pages to 'start over' and 're-prove themselves'. Why? Well, you tell me why a new page with different content should benefit from the links of an old URL which was different - when the webmasters who linked to that old URL may well not choose to link to the new one
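Nobody outside Google knows exactly how that comparison works, but as a rough illustration of the idea, here's a quick Python sketch you could use yourself to sanity-check how similar an old page's text is to its redirect target before relying on a 301 (the URLs are placeholders, and stripping HTML this crudely is only good enough for a ballpark figure):
import re
from difflib import SequenceMatcher
from urllib.request import urlopen

def page_text(url):
    # fetch the page and crudely strip tags / whitespace to approximate its visible text
    html = urlopen(url).read().decode("utf-8", errors="ignore")
    return re.sub(r"\s+", " ", re.sub(r"<[^>]+>", " ", html)).strip()

old = page_text("https://www.example.com/old-page")
new = page_text("https://www.example.com/new-page")

# 0.0 = nothing in common, 1.0 = identical text
print("similarity:", round(SequenceMatcher(None, old, new).ratio(), 2))
If the ratio comes back very low, that's a hint Google may treat the destination as a 'different' page rather than a moved one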
Even if you hadn't no-indexed those new URLs, because they were incomplete their content was probably placeholder content (radically different from the content of the old URLs, on the old site) - it's extremely likely that even without the no-index tags, it still would have fallen flat on its face
In the end, your best course of action is to finish all the content, make sure the 301s are actually accurate (which by the sounds of it many of them won't be), lift the no-index tags and request re-indexation. If you are very, very lucky some of the SEO juice from the old URLs will still exist and the new URLs will get some shreds of authority through (which is better than nothing). In reality though, the pooch is already screwed by this point
-
RE: Does anyone experienced to rank a KOREAN Keyword here?
"루비카지노" translates into Ruby Casino according to Google:
Your main difficulty won't be the language, it will be that you are part of the gambling neighborhood, which is one of Google's defined bad neighborhoods
That being said, most of your competitors will be in that same situation, so in a way that's not really a massive problem
There aren't really many people searching for this right now:
https://trends.google.com/trends/explore?date=all&q=%EB%A3%A8%EB%B9%84%EC%B9%B4%EC%A7%80%EB%85%B8
... so you'd have to do something to inspire people to search for the term (as of now, no one really seems to care)
Google doesn't reject the keyword, it's just not really worth anything right now. Google should at least be able to interpret the keyword
-
RE: After 301 redirection non-English keyword points to English language pages
Absolutely not (sorry); I mean exempting the Googlebot user-agent from your redirects - which will involve re-writing the flexible redirect rules that handle the regional 301s. It has nothing to do with no-follows or any other 'normal' SEO stuff, you will need a developer
-
RE: After 301 redirection non-English keyword points to English language pages
If you have regional redirects in place (e.g: detect when users are from UK vs from PL, and then redirect incorrect queries to the right site) - then you may have failed to exempt Googlebot (user-agent) from those regional redirects
Google crawls from one data centre (location) at a time. If Google follows a link on a cached version of your old site to PL content and then regional redirects intercept Googlebot and divert its crawl path, you can get stuff like this happening
Although regional redirects are often needed for users, it's best to exempt Googlebot from them so it doesn't get bounced around thus gaining an incomplete view of the site
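As a very rough sketch of what that exemption might look like (this assumes nginx, and that your existing GeoIP logic sets a variable like $target_region - all the names here are made up, your developer's real rules will differ):
# skip the regional redirect whenever the visitor is a known crawler
if ($http_user_agent ~* "googlebot|bingbot") {
    set $target_region "";
}
# only real users from Poland get bounced to the /pl section
if ($target_region = "pl") {
    return 301 https://www.example.com/pl$request_uri;
}
The exact mechanics depend entirely on how the current redirects are implemented, which is why this is a developer job rather than a tag you can just drop in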
-
RE: How to rank a transactional query
Well a topic sentence with a supporting sentence is a format. You might be able to demonstrate some value proposition through that format. In a way what you're asking is similar to "what is the difference between winning a race, and a high performance rally car?"
A high performance rally car could be a vehicle (format / medium) through which you deliver race results. But if you used the rally car in an F1 race, it probably wouldn't win (horses for courses)
A value proposition is something about your business which you get clearly across in a cited, qualified manner. Could you do that through a topic sentence with a supporting sentence? Maybe - it depends upon the USP which you are promoting. For example if you are trying to get across your great reviews, then obviously sentences are a bad vehicle whereas TrustPilot embedded reviews (unbiased, from a 3rd party which you cannot manipulate) would be a much better vehicle
One is a format for delivering information, the other is the type of qualified, un-biased and 'proven' information which you are supplying
If your site's value-proposition was a uniquely designed and developed calculator to deliver information to users (which your competitors couldn't deliver) then again sentences of text would be a bad vehicle. A custom designed and developed calculator would be a good vehicle
Look at how sites like Money Supermarket, Confused and Compare the Market got ahead. Their unique aggregation systems ARE their value propositions. Could they have been created using just sentences of text? No
-
RE: Site moved. Unable to index page : Noindex detected in robots meta tag?!
I love these kinds of questions. You have shared a moved page URL, can you give us the URL it resided at before it was moved, which 'should' be redirecting now? That would massively help
Edit: found this one:
https://www.bluegreenrentals.com/searchresults.aspx?s=CO&sl=COLORADO
(this is what the page apparently used to look like before it was redirected, but the image is a little old from 2017 - OP can you confirm if it did look like this directly prior to redirect?)
... which 301 redirects to:
https://www.bluegreenvacations.com/rentals/resorts/colorado/innsbruck-aspen
... gonna carry on looking but this example of the full chain may help any other Mozzers looking to answer this Q
Suspected issue at this juncture, which could be wrong (not loads to go on right now) - content dissimilarity between URLs leading Google to deny the 301s
FYI: info to help OP - the no-index stuff may relate more to this:
https://developers.google.com/search/reference/robots_meta_tag (may be deployed in the HTML as a tag, but can also be fired through the HTTP header which is another kettle of fish...)
-
RE: Our site dropped by April 2018 Google update about content relevance: How to recover?
You kind of dropped a bit but not in a way which affects you very much (apparently, according to Ahrefs)
https://d.pr/i/HBzKpj.png (screenshot of estimated SEO keywords and traffic according to Ahrefs)
You did lose a lot of keywords, but many seem to have since recovered and it didn't seem to actually impact your SEO traffic estimates much at all
SEMRush has a neat (relatively) new tool which looks at more accurate traffic estimates across the board (not just limited to SEO):
Again it does show a bit of a dent around April 2018. If I was going to use SEMRush data to look at this, I'd use the traffic analytics tool not the 'normal' SEO estimate charts from SEMRush (which IMO aren't very good, hence using the Ahrefs one in place of that)
This is what your site looked like in Feb 2018 before the keyword drops:
https://web.archive.org/web/20180224042824/https://www.vtiger.com/
This is what your site looked like later in June 2018:
https://web.archive.org/web/20180606021616/https://www.vtiger.com/
Completely different!
This is what your site looks like now: https://www.vtiger.com/
Again radically different. Maybe you just have a bad case of 'disruptive behavior' where Google is unwilling to rank you well, because the site keeps radically changing in terms of design and content. Sometimes doing too many changes too fast can really put Google off! 3 different designs inside of 1 year is pretty crazy
After each change, your home-page's Page Title was completely different:
Feb 2018 version: Customer Relationship Management | CRM Software - Vtiger
June 2018 version: Vtiger CRM | Customer Relationship Management Software
Current version: CRM | Customer Relationship Management System - Vtiger CRM
In my opinion everything that was done around June 2018 was a huge mistake that you are suffering for now and recovering from gradually. The June 2018 design was horrible, way worse than the Feb 2018 or current one (both were better). If a designer doesn't do a good job, don't just 'go ahead' with a terrible site design just because you paid for it
In addition to that, in June 2018 your page title didn't 'begin' with the term (or a synonym of the term) "CRM". In Feb 2018 and on the current version, you either opened with "CRM" or a synonym of "CRM", which is better for SEO. The June 2018 version of the site was really bad and also less well optimised (that seems really obvious to me)
Part of me actually feels that the Feb 2018 version of the site was best for SEO. It did a better job of making your USPs (value propositions) stand out to the user and search engines. It blended nice, app-styled UX with functionality that was more than just 'button links'
The current version isn't bad, it certainly looks nicer visually - but the June 2018 version was a bit of a house of horrors. It makes sense that it was live within the window when you got dented, because it's just a bit shocking to be honest. In the Feb 2018 version of your site, more of the individual product links were listed in the top-line nav. Now they are still there but 'hidden' in drop-downs, and that could be affecting things too
If I look at the technical SEO of the Feb 2018 site I can see it was relatively streamlined in terms of resource deployment:
... but by June 2018, there were way too many resources for one homepage to be pulling in. Not only did it look plainer and uglier than before (and less helpful, with worse SEO) it was probably also laggier to boot:
Ugh! 89 HTTP requests!? Get outta' here
Now things seem a lot better on that front:
So I think this is more evidence that the short-lived June 2018 site was pretty sucky and you guys bailed on it at light-speed (rightly so it was terrible!)
The question: did you see ranking drops for "CRM" related keywords in the period surrounding April 2018? Say for example, in April, May, June and July of 2018?
I'd say that you did, according to an (extremely rough) ranking movements export from Ahrefs:
Actual data export (formatted) here: https://d.pr/f/pwnrIF.xlsx
So which CRM related URL, was responsible for the most CRM related ranking losses which Ahrefs happened to pick up on?
https://d.pr/i/rCQ8LF.png (table image)
https://d.pr/i/7SJPbt.png (ugly bar chart)
Clearly the URL most responsible for all the drops was this one:
https://www.vtiger.com/open-source-crm/
... so how has this URL changed?
Infuriatingly, the Wayback Machine has barely any records of this URL, so the closest snapshot I can get to just before the end of April 2018 is actually from December 2017:
https://web.archive.org/web/20171226021957/https://www.vtiger.com/open-source-crm/
... it looks basically the same as it looks now. No major changes. But wait! On the old version of your homepage, the footer links to the open source CRM were bigger and more prominent than they are now. Another thing, those footer links used to be marked up with itemprop=url, now they are not (could that be making a difference? All I can say is that the coding is different)
Another question would be, between April and July 2018 - did you lose any CRM related links that were worth a lot?
Actually, apparently you did lose a few. Check some of these out:
https://d.pr/i/Zg5XER.png (MEGA screenshot, but first page of results only)
https://d.pr/f/NetqVM.png (full export, lost links which may be about 'CRM', April through July 2018 - raw and unformatted export, open the CSV file in Excel!)
Losing a CRM related link from Capterra, online peer review software experts? Yeah that could hit you hard. Most of the Mashable ones are still there, they are just redirected - but the Capterra one:
https://blog.capterra.com/free-and-open-source-crm/
... that could sting. You used to have a link with anchor text like this:
"for a price starting at about $700" - but now it's gone!
You might be thinking, aha Effect - you silly sausage! Clearly it was a comment link that got pushed down or removed by admins / mods, not a 'real' link which Google would have counted. But no I say, and I have proof to back up that denial:
https://web.archive.org/web/20170930101939/http://blog.capterra.com/free-and-open-source-crm/
That is the same post as archived back in September 2017 (before the drop); if you Ctrl+F for "for a price starting at about $700" you will FIND the in-content link, which actually did matter, and which Capterra have since removed from their content
I am sure that in the link data you will find other such examples of lost quality links. Some will be duds and false-positives (like the Mashable ones) but some will be legit removals
By the way, although the Mashable links to you are still live, Mashable have 302 redirected the old URLs for the blog posts instead of 301 redirecting them. This means those posts, if they were valued and accrued a lot of backlinks - have been cut off from their own backlinks (as 302s pass no SEO juice). As such links contained inside of them are largely nullified (d'oh! Thanks Mashable)
What this illustrates is that your site changed too much, the way links are formed changed, the design went through a really bad patch and you've also lost some high quality backlinks. An SEO legacy doesn't last forever; links get removed over time
In the end, this convergence of issues is almost assuredly leading your site through a tough spot. That's what I'd imagine, from a very, very top-line look into this issue
-
RE: 301 Redirect in breadcrumb. How bad is it?
Past performance is seldom a good indicator of future success. The web is so competitive now that 'good unique content' isn't really good enough any more (anyone can make it)
This video from Rand is a good illustration: https://moz.com/blog/why-good-unique-content-needs-to-die-whiteboard-friday - where you say "content is original and not bad" - maybe that's not enough any more
One solution is the 10x content initiative: https://moz.com/blog/how-to-create-10x-content-whiteboard-friday
And your site should have a unique value-proposition for end users: https://www.youtube.com/watch?v=6AmRg3p79pM (just wait for Miley to stop outlining issue #1 then stop watching)
It's possible your tech issue is a contributing factor but I'd say search engine advancements and changing standards are likely to be affecting you more
Even if you do have a strong legacy, that's not a 'meal ticket' to rank well forever. SEO is a competitive environment
Sometimes tech issues (like people accidentally no-indexing their whole site or blocking GoogleBot) can be responsible for massive drops. But these days it's usually more a comment on what Google thinks is good / bad
-
RE: Ecommerce store on subdomain - danger of keyword cannibalization?
I posted a bit of a Reddit rant here under my personal SEO alias of "studiumcirclus":
(click "View Entire Discussion")
Mainly these things vex me about the platform:
"In basic terms, Shopify is limited by its vision. They want to make sites easy to design for the average-joe, which means they have to spend most of their platform dev time on the back-end of the system and not the front-end of the sites which it produces
If they're always bogged down making extra tick-boxes to change things in the back-end, how can they be keeping up with cutting edge SEO? With WordPress you have a much larger dev community making add-ons, many of them completely free and still very effective. Because everyone is on WP, when new Google features, directives or initiatives come out they are quickly embraced (putting all sites on WP one step ahead)
With smaller dev communities, platforms like Shopify or Magento lag behind. Why do people always expect that 'average' will rank 'well'? Ahead of the curve ranks well, average ranks averagely
Also Shopify has some nasty Page-Speed issues which they won't acknowledge and they just argue about instead of fixing things. It's just not good for SEO"
Other "Shopify is bad" evidence:
https://moz.com/community/q/main-menu-duplication#reply_391855 - just contains some of my thoughts on why Shopify isn't that good
https://moz.com/community/q/site-crawl-status-code-430 - a relatively recent problem someone had with their Shopify site, scroll down to see my reply
https://moz.com/community/q/duplicate-content-in-shopify-subsequent-pages-in-collections - someone else having tech issues with their Shopify site. While my answer was probably right, they probably couldn't implement the fixes
-
RE: Are provincial third-level domains bad for SEO?
"most provincial level domains are reserved for government institutions" - I didn't know this, very interesting bit of info there!
It would be very hard to say if they had been definitively hindered but IMO it's seeming more and more likely
-
RE: How to rank a transactional query
A value proposition isn't a tag or a technical thing, it's part of the idea of the website. Miley outlines it pretty well in this video: https://www.youtube.com/watch?v=6AmRg3p79pM - though you only need to watch up to the point where she fully covers "Issue #1"
Your value proposition is for your business and should be reflected on your website. It can take the form of a website that is better and easier to use than all the competitors sites due to unique feature/UX integration (e.g: a calculator that assesses the breakdown of terrain on a tour, 10% mountainous 50% urban 40% forest - or something like that, and then maybe does something cool like recommends an actual bike, or supplies for the run, or both)
A value proposition can also take the form of simple USPs (e.g: better service than the competitors, better reviews from Trust Pilot, faster delivery, the lowest prices, covering a wider area, better staff expertise and local knowledge). If you go with USP-based value propositions, qualify them (saying "I'm the best!" isn't usually enough)
If the query space sucks, think what you can do to leave everyone else in the dirt and make it better. Think of some 10x content pieces
Remember: "good unique content" is rarely ever good enough to rank competitively. You have to go next level
-
RE: Why My Domain Authority Dropped
See my relevant answer to an older, similar question here:
https://moz.com/community/q/why-did-my-site-s-da-just-drop-by-50
"Keep in mind that PageRank (which is used by Google to weight the popularity and authority of web pages, yes it's still true even after the little toolbars got deleted) does not read, utilise or rely upon Moz's DA metric in any way shape or form. DA is a 'shadow metric'. Since Google took away our view of the simplified PageRank algorithm (TBPR - ToolBar PageRank, which is dead) people working in SEO needed a new metric to evaluate the quality and ranking potential of web pages
Moz stepped in to supply this (in the form of PA / DA) but Google still use PageRank (they just don't show it to us any more). Whilst Moz's PA/DA metrics are a good 'indicator' of success, Google isn't using them (at all) and so they do not directly affect your rankings"
Only someone from Moz can confirm why your DA dropped, but it may have dropped for a reason that wouldn't affect Google rankings in the slightest (so don't panic yet!)
-
RE: Does anyone experienced to rank a KOREAN Keyword here?
I haven't had experience with this personally, but I think it could be worth it to try
Firstly you have to entirely separate your view of North and South Korea's 'internet situation' as they are both radically different
North Korea
https://en.wikipedia.org/wiki/Internet_in_North_Korea
"Connection to the internet in North Korea is done via Naenara, a modified version of Firefox that can access approximately 1,000 to 5,500 websites in the internet. It runs on Red Star OS, a North Korean Linux distribution."
"Nearly all of North Korea's Internet traffic is routed through China."
... so you might think, hmm maybe it will be impossible. As you may know, the most popular search engine in China is Baidu (not Google). Now I can personally tell you that signing up for Baidu's webmaster tools (the equivalent of Google Search Console) is virtually impossible if you're not native to China. If you can get through the (non-translated) login / setup steps AND provide a Chinese landline number for verification purposes (and actually take the verification call on that number), you stand a sliver of a chance
I actually once set up one of my own sites on there, it took hours / days to do it. Then - in a few months, I was randomly un-added and removed. The truth is, they don't really want too many Western influences appearing in Chinese search results. Even being seen to promote certain Western ideologies (even if you're not slating or attacking the Chinese state, which I would never do) - you can still be removed 'just like that' with no explanation, no recourse and no means to address the situation (period)
So now you might be thinking, well since almost all of North Korea's internet activity is routed through China, and since Baidu are so impossible to work with - is it really worth even bothering?
There is a small ray of hope: https://dailycaller.com/2018/04/03/most-popular-internet-search-engine-in-north-korea/
Apparently, even though most of North Korea's internet traffic is routed through China, unlike China - North Korean web-users more frequently use Google. So actually the issues surrounding Baidu, aren't so relevant to North Korea
That being said, you might wonder - is there much point in trying to appear on Google's results there, since North Korean users can only access "approximately 1,000 to 5,500 websites in the internet" (a tiny section of the web's true content)
If users in North Korea can type things into Google and your site comes up, but then when users click on your site they are blocked from accessing it (as your site isn't one of the state-approved ~5.5k which are accessible) - then really what's the point?
IMO there's not much you can do here, not many ways to reach people and there are probably better ways to spend your marketing / web / SEO budget
South Korea
https://en.wikipedia.org/wiki/Internet_in_South_Korea
"About 45 million people in South Korea (or 92.4% of the population) use the Internet.The country has the world's fastest average internet connection speed. South Korea has consistently ranked first in the UNICT Development Index since the index's launch. The government established policies and programs that facilitated the rapid expansion and use of broadband."
A completely different situation!
http://gs.statcounter.com/search-engine-market-share/all/south-korea
Many people in South Korea have very fast broadband, access to much of the web's contents and they prefer Google over something like Baidu. The key thing is that when someone sees your Google result listed in South Korea, if they click it they are unlikely to be blocked from getting through to you
I'd say that the South Korean market and culture, which has such an involved and socially integrated web-sub-culture, could assuredly be worth targeting. You'd almost assuredly have to translate your keywords into the right language, and even the right alphabet - according to Google Trends which does seem to suggest that people in South Korea primarily search in their own language (not in English):
https://trends.google.com/trends/trendingsearches/daily?geo=KR
For North Korea you can't even get to or see similar data: https://trends.google.com/trends/trendingsearches/daily?geo=KP - "Daily search trends are not available for this region. Try a different region"
Keep in mind that in South Korea, many searches happen when people are out and about, in cyber / web cafes etc
Final Thoughts
North Korea is too difficult to penetrate from a search POV, even though Google is their primary search engine. For South Korea it's a different story, their population is very web-accessible and celebrates online / cyber culture. You would have to translate all your keywords into Korean (the Hangul alphabet), and use Google's Keyword Planner as a guide in terms of average monthly search volumes
I personally wouldn't invest time in trying to reach people in North Korea. I don't have a problem with the North, it's just simply that it is very, very hard to reach them and the cost of doing so would probably outweigh the potential financial gains
For South Korea, it's different - I'd certainly have a go
-
RE: Is having my homepage on a subfolder harmful?
I tend to use the base domain as the site's 'home' language (e.g: if the blog was first conceived in the UK and the authors live in the UK, then I'd use "/" as en-GB). I only create sub-folders for the 'additional' languages (e.g: "/nl", "/de", "/fr" etc)
Even 301 redirects can dilute SEO authority a little (or a lot, under the wrong circumstances). Since lots of webmasters and editors will just 'lazy-link' to the base domain (or because it makes the link look cleaner / more legit within their content) I'd have your primary language deployment at the base domain, then all the rest in sub-folders
This is my go-to approach and to be honest it's never failed me yet
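For illustration, a minimal sketch of what the hreflang set might look like under that structure (example.com and the language choices are placeholders - swap in your real domain and locales):
<link rel="alternate" hreflang="en-gb" href="https://www.example.com/" />
<link rel="alternate" hreflang="nl" href="https://www.example.com/nl/" />
<link rel="alternate" hreflang="de" href="https://www.example.com/de/" />
<link rel="alternate" hreflang="fr" href="https://www.example.com/fr/" />
<link rel="alternate" hreflang="x-default" href="https://www.example.com/" />
... with the full set repeated on each language version, so the tags cross-reference each other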
-
RE: 404 vs 410 Across Search Engines
Yeah the 410 seems like a good option. It could also be supplemented with X-robots no-index in the HTTP header of each URL (Meta no-index can be deployed via HTTP header if HTML is generated / inaccessible)
Info here: https://developers.google.com/search/reference/robots_meta_tag - you can scroll down to the sub-heading "Using the X-Robots-Tag HTTP header"
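As a minimal sketch of that combination (assuming nginx, and assuming the retired URLs share a path like /discontinued/ - adjust the location match to whatever your URLs actually look like):
location /discontinued/ {
    add_header X-Robots-Tag "noindex" always;
    return 410;
}
The 'always' parameter matters because nginx otherwise drops added headers on responses like a 410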
-
RE: Link to webdesign bureau in footer on follow or nofollow
Test it. No-follow a portion of the links (if you have designed hundreds of sites, maybe try 10%). See if your results go up, stay the same or go down. If your results go down, remove the no-follow tags again. Even if the results don't instantly come back (and Google keeps the no-follow reference, even when the coding is removed) you won't have lost much as you kept your sample small
SEMRush and Moz Toxicity ratings rely a little too much on linguistic, 'semantic' relevance (e.g: "a link from a car manufacturer to a car insurance site is relevant as they're both about cars"). Deeper relevance (that which Google is actually looking for) is more to do with "why is it relevant for the user to click on this link?"
Toxicity scores may simply be a reflection that you have loads of links from sites which are 'thematically' irrelevant. But that doesn't necessarily make the links themselves irrelevant! It may well be useful for people to Google the sites you made, think they are cool and wonder who designed them
The truth is neither SEMRush nor Moz knows exactly what Google thinks a good / bad link is. They basically look for patterns in backlink profiles and linking sites, which have been involved in penalties which did occur on Google (which they know from their keyword / ranking indexation data). If suddenly 2-3 sites drop out of the rankings and they all shared similar backlinks, SEMRush and Moz can estimate that those linking sites (under certain circumstances) may be bad
But it's not a 100% guarantee, indeed if you disavow or no-follow loads of links based on these ratings alone - you often do see little performance dips (without doing more forensic, more holistic research)
If your main concern is that 'site-wide linking' may be negatively affecting you, there could be a simple cure for that. Your idea of producing case studies on your own site is great, but it stops you getting free traffic and leads from sites that you designed - if those sites stop helping you rank well, or stop linking to you
Instead you could create new pages on the clients sites. Yeah seems crazy but hear me out as I have some logic behind this which might create a good compromise, which would be very interesting to test
In the footer on your clients' sites, leave a link saying "Webdesign by Conversal". When users click that link, instead of taking them directly to your own site, you could point that link to a page on the client's site with some design sketches and a bit of blurb about how you approached the project. THAT page (on your client's site) could then link to you directly. In this way, you'd only get ONE link from each site, but the footer link would remain (though it would become an internal link) and continue to serve you. Maybe this could be a decent solution, but I've never tested it (sorry)
The links from these pages on your client's sites (accessible only from the footer links) could connect with the case study URLs on your own site, creating a unified experience which leads people down a funnel - to buying a site design from you
I might try that on a few sample clients, monitor the results. If the results didn't drop I'd at least feel better insulated against Penguin, and would probably then roll out another batch
-
RE: SEO - New URL structure
Google's tolerance for 301 redirects is pretty high as long as you use speedy ones (implement via NginX - 'engine X' - rather than via long lists of .htaccess lines). If the redirects are logical and they don't chain or get mixed with incorrect redirect types (Meta refreshes, 302s etc) then usually you're ok. Still, it will take Google time to digest all the changes and you could see a small interim performance dip
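For reference, a minimal nginx sketch of what those 301s can look like (the URLs here are placeholders, not your real structure):
# one-to-one redirect for a single moved URL
location = /old-category/old-product/ {
    return 301 /new-category/old-product/;
}
# pattern-based redirect for a whole renamed section
rewrite ^/old-category/(.*)$ /new-category/$1 permanent;
Whichever form you use, point each redirect straight at the final destination URL so you never create chains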
Flat URL structure tends to build the 'authority' of URLs better, making them more powerful. Deeper and more nested URL structures serve 'relevance' better as they give much more context. If your domain's overall SEO authority is low to begin with, then a flatter structure may be better for now. If you have lots of SEO authority then you may be able to 'irrigate' more deeply nested URLs more effectively, thus reaping long-tail gains (so each structure has strengths and weaknesses, depending upon your current standing on the web)
Flatter structures rank better for larger terms, but only if you have the SEO authority to power them. Deeper structures rank better for longer-tail terms (but thousands of them) - again though without the right SEO authority metrics, there will be very few droplets of 'SEO juice' which end up reaching the lower-level pages
In the end most sites evolve to a point where they adopt the more deeply nested structure, but they usually suffer growing pains as they transition. In the long run it can be superior, but only for sites which can make good use of it (e.g: eCommerce web stores with categories, products, collections, product variants etc). If a site is services based it often doesn't have so much SEO authority and also - the deeper structure isn't really so relevant! A services based site will usually offer far fewer services than an eCommerce store offers products (tens vs hundreds of thousands)
A strong publisher with lots of ranking power (online magazines, newspaper digital editions) will often switch to the deeper structure for listing their content and (in the long run) see a lot of benefit from that. For smaller publications (blogs, blog or news pages on business / non-publisher sites) - it's often not worth the move
-
RE: Domain SEO
2, because it's easiest to remember. In 2019 exact-match domains have less impact on SEO; it's more about 10x content and demonstrating a solid value proposition (see Miley's Google video linked in my other answers - watch up to the point where issue #1 is fully outlined). SEO is a pretty vast field in modern times. Coding tweaks and URL slugs are still somewhat important, but they provide slight, slight bonuses to your core value proposition (the value-add of your site, to the internet). I don't think engagementrings.com is too bad, but without a solid 'idea' and value-prop behind it, the URL won't magically make it rank alone
-
RE: Looking for a Tool to Find Referring Pages of Specific URLs
Not a problem. Sometimes you just need a pro
-
RE: Site move?
Oh lord I really don't like working with Shopify haha! Basically I avoid it like the plague, although I will admit it is slowly, slowly improving
We have one or two clients who run Shopify and we absolutely do the best we can for them, but the truth is usually their results improve 'more' after they leave the platform. I can say that Shopify is a good product for one-man-bands and people who 'just need a site' that looks professional (for their business cards etc) but no, it's not SEO-competitive (at all)
I don't make a huge effort to keep up with all of Shopify's changes and stuff, because what I have realised is that the fundamental philosophy of the platform (which is unlikely to change) goes against the competitive art and practice of SEO. When the platform stifles and limits 'what you can do' - how can it ever be bleeding edge? You end up confined
Email is on our Moz profile here if you want to have a chat about it. If you're moving 'away' from Shopify, there may be some pains there in terms of redirect limitations - but I'd need to look again and familiarise myself with what they can currently offer in that regard
-
RE: Using Anchor Link in the Main Navigation
Depends how many of these you're planning to have. In any case, the loss (which may occur) should be very fractional and not super damaging. If you lose 0.05% traffic but gain +2% conversion rate, then really you're winning overall. Sometimes good SEO involves making a decision that slightly hurts the 'purist' SEO, in order to focus on revenue and 'real business' KPIs (which matter more, and which bosses care about more)
As long as you can reason the trade-off which you made and it falls in your favour (overall), then you have done a good job
Your bigger problem is that these new links may push other links further out of the user's reach and / or further down the code / page. If this happens, other links may not supply quite the same SEO authority which they did before
I certainly think your proposal is worth testing. Measure both the CRO (Conversion Rate Optimisation) and SEO impacts. If overall you take in more $, then the fact that SEO has a slightly bruised cheek is really just a matter of vanity. Think with your business head
-
RE: Ecommerce store on subdomain - danger of keyword cannibalization?
That sounds like a hell of a mess. Instead of tying your name to one proposed implementation and saying "yes, this IS the way" - I'd get the complexity of the issue across to the client / boss
I'd then present your idea and say "I want to test this, but if results suffer we will need to revert the changes". I think that with such a complex architectural nightmare (on a HORRIBLE platform like Shopify, which is just awful for SEO) - it would be extremely foolish to charge off into the night without making the risks clear
The best practice is really to not have built such a terrible site to begin with. In making things better, there may be growing pains. There may be NO options which would result in 100% growth and 0% losses
My recommendation would be to continue blocking Google's access to the original, default product variations (as those are already happily ranking on the main site. Don't fix what ain't broken). I might allow Google to crawl the sub-variations which are inaccessible from the main site. I might alter the main site's UX to include links to the sub-variants on the 'shop.' subdomain
In the end though, it's a very tangled web they have spun
-
RE: Is It Beneficial to 'Like' My Clients Google Reviews?
As far as I know, although aggregate review ratings themselves can be used by Google in terms of GMB / Google maps rankings, 'likes' don't factor at all and will make zero difference
It looks weird anyway, I would think that it would be very rare for people to take the time to 'like' reviews. Maybe if it was something they were passionate about (a book they read, then they saw someone's review and really vibed with it. Or a big brand with an extremely innovative product...)
- but to see liked reviews for drier SMB stuff, would seem a little contrived (just my 2pence)
-
RE: Using Anchor Link in the Main Navigation
AFAIK this has no negative impact on SEO, but it is symptomatic of another issue which does impact SEO: putting too much stuff, or too much content, on a single URL - thus 'diffusing' Google's ability to nail down 'the primary topic' of the URL (which can make it rank badly)
It's not the anchor links that cause the poor rankings though, it's the ill-thought out information architecture and site design (IMO)
Google won't rank those anchor URLs by the way, just the main page. But it's not like you'll get some kind of crazy anchor-link penalty or something; that doesn't exist
-
RE: Looking for a Tool to Find Referring Pages of Specific URLs
You can use Screaming Frog to do this
Crawl the whole site, then once it's done save the crawl file (File->Save) so you don't lose the project (important, in case you accidentally export the wrong thing - you can load it back up again!)
Then go: Bulk Export->All Outlinks
That will get you the data you need. It will list all the links on the site, including links to resources (as long as your crawl settings are right - it can be somewhat of a learning curve; if, for example, you don't crawl pages blocked by robots.txt, don't crawl links inside CSS, or don't use the JS crawling method correctly, you might miss stuff)
The spreadsheet will contain loads of columns containing data on the links. One will show you the referring page / source URL, the other will show you the link target / destination. On the link destination column, filter by your desired URL (which in this case would contain /image-name.jpg or similar)
Once that's done, on the left you'll see all the referring URLs. Sorted!
If the data which is created by Screaming Frog is too large to be analysed in Excel you can export to CSV instead of XLSX. You can then proceed to:
a) Upload it to a local MySQL database for analysis with MySQL queries
b) Use a big-data CSV manipulation tool, like Delimit instead to narrow the data down. Once it's narrow, re-save and then analyse in Excel
Problem solved!
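If you'd rather script that filtering step, here's a rough Python sketch (it assumes pandas is installed, that you exported to all_outlinks.csv, and that the export has 'Source' and 'Destination' columns - check the header row of your own export, as names can vary between Screaming Frog versions):
import pandas as pd

target = "/image-name.jpg"  # the URL fragment you're hunting for

# read the (potentially huge) export in chunks so Excel-sized limits don't matter
chunks = pd.read_csv("all_outlinks.csv", chunksize=100_000, dtype=str)
matches = pd.concat(
    chunk[chunk["Destination"].str.contains(target, na=False, regex=False)]
    for chunk in chunks
)

# every referring page that links to the target, de-duplicated
print(matches["Source"].drop_duplicates().to_list())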
-
RE: 301 Redirect in breadcrumb. How bad is it?
Highly doubt that would be a reason to 'lose a lot of SEO ground'. If those URLs were 404-ing before, you had breadcrumb links to 404s, and that's worse than breadcrumb links to 301s
The bigger problem was, you lost your category pages which got set to not visible. And by the way, even when you change them back to 'visible', if the 301 is still in effect - users and search engines still won't be able to access your category URLs (as they will be redirected instead!)
If the category pages have been restored and you're still redirecting them, yes that is a big problem. But it's not because you used a 301 in a link, it's because you took away your category URLs. That very well could impact performance (IMO)
-
RE: Site move?
If everything is done properly, the old site loses rankings for the content which is extracted and placed somewhere else. Hopefully the new site manages to achieve similar rankings for the migrated content, but there are a number of factors which could inhibit that
One thing would be the 'sandbox' that Google often puts around fresh domains which can see rankings inhibited for 3-6 months if the domain is very new (if it's never been used before, if it's only been a domain-sales based holding page before, or if the domain was misused in the past)
Other than that, a proper 301 redirect migration project would be needed to see as small a dip in performance as possible. Your hope is that the old site loses all associated rankings and the new site picks them up again, but things seldom go this smoothly with such large and complex projects. What usually happens is that you lose the associated rankings from the old site and on the new site you often see a 10-15% dip in traffic, yes - even if you think all your redirects are perfect!
Remember that page-loading speeds affect rankings. If the new site is seen as a 'small project' and an 'offshoot' it is unlikely to gain the same strong hosting environment as the main site has - even if both sites are built using the same technology. As such you can sometimes see ranking reductions due to that
There's other stuff. Sometimes people only move the content across but don't move any of the Meta / technical SEO which existed on the old pages to the new pages. Failed tech / Meta migration can also impact rankings quite prominently (things like H1s changing, custom URL slugs changing, Meta data changing, Schema getting 'lost' etc)
If the UX of the new site is poor or very different to that of the old site, that could potentially affect results
With 301 redirects there are numerous factors that can dampen their effectiveness:
- Creating too many links to redirects, internally or externally
- Creating redirect chains
- Misuse of redirect types (e.g: using a 302 or Meta refresh instead of a 301)
- Content dissimilarity (e.g: if Google harvests the content on the old page and new page and compares them using something like string similarity - if the content has a low 'percentage' of similarity, you're not likely to see the SEO authority go across. This is a massively important one)
- If Google believes that moving the content is an attempt to manipulate organic-search (SEO) rankings (e.g: if the content is moving purely so it can exist on an exact-match domain, and therefore get higher SEO rankings - but actually no value for the user is added by moving the content out)
- If Google believes the content has been moved from one organisation, brand or entity to another (aka the content has been sold for SEO purposes) that could influence how they look at the movement
... there are so many possible factors. Tread carefully and make sure all the 301 redirects are completely on-point with no errors, and the dip you may observe will likely be short and small. But if you go against Google, you could just lose everything and never see it again!
-
RE: Are provincial third-level domains bad for SEO?
You are right, but my POV is that although it's a different situation, the same limiting factors might come into play. Even with just one site on one region-based TLD, these points from Google are still valid (mostly - some can be ignored):
Country-specific domain
Cons:
- Expensive (can have limited availability) - still relevant but also since this cost is already paid, of little concern to OP
- Requires more infrastructure - this is irrelevant as it's just one site so loads of infrastructure won't be needed
- Strict ccTLD requirements (sometimes) - still relevant but also since this cost is already paid, of little concern to OP
-
Pros:
- Clear geotargeting - this is highly relevant and, IMO, could also have been listed in the cons pile if Google had written the documentation more even-handedly
- Server location irrelevant - n/a
- Easy separation of site - n/a
So the main thing to focus on here is this statement from Google on country-specific domains:
"Clear geotargeting"
... now "clear geotargeting" can be highly beneficial, it can give your site and pages more 'relevance' for a specific area. But it's a double-edged sword! If you have international ambition, it can be a limiting factor (that's really what I was getting at) and it could make ranking internationally, very difficult indeed. It would mean that when OP does decide to go international (if that time ever comes) OP will either require a network of domains which could be costly in terms of setting up all the required infrastructure
So although OP's setup might be ok 'for now', later it could become an unwieldy leviathan which proves to be... not very scaleable. Or at the least, not so easily scaleable
So OPs decision is, does OP want to have some local gains now at the cost of having a more difficult time later when OP scales the site, or is OP unwilling to make that trade?
And think of this: Google have pretty much stated numerous times that 'locked' geo-targeting (to one specific area, either through TLD choices or Google Search Console) can make it much more difficult to rank outside of the specified area. One could make the assumption that for provincial TLDs, if Google starts interpreting them in a similar way, it could be hard to rank even outside of the local province. That could be a real thorn in OP's side later, though right now it might not matter much
The truth is no approach is intrinsically 'good' or 'bad' for SEO. It entirely depends on OPs goals, KPIs and ambitions (to which we are not currently party)
-
RE: Are provincial third-level domains bad for SEO?
Google haven't extended this documentation to cover provincial third-level domains, but if you look here:
https://support.google.com/webmasters/answer/182192?hl=en
... there's a table on the page (scroll down) which you might find quite useful. It outlines the various pitfalls of different types of local-specific URLs. I'd expect this kind of stuff to hold true for the newer provincial TLDs
-
RE: How do I prevent duplicate page title errors from being generated by my multiple shop pages?
Two main options:
-
Edit your template so that for additional pages it just adds something like "P2" or "Page 2" to the page title. This is the preferred option
-
Block Rogerbot from crawling paginated content (https://moz.com/community/q/prevent-rodger-bot-for-crwaling-pagination) - this however, would block Rogerbot (Moz's crawler) from identifying other issues you might have with your paginated content / URLs
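As a rough sketch of that second option, assuming your pagination uses a ?page= query parameter (adjust the pattern to your actual paginated URLs, and note this only helps if rogerbot honours wildcard patterns - worth double-checking against Moz's rogerbot documentation):
User-agent: rogerbot
Disallow: /*?page=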
-
-
RE: Our site dropped by April 2018 Google update about content relevance: How to recover?
Difficult to say without seeing the site, the content and the keywords. Because different query-spaces and search entities are thematically different, the ways to 'become relevant' to each of them can be highly variable in nature. If I could just see an example, it would be much easier to assess why Google has changed its mind in terms of your site's perceived relevance
What you should know about Google is that they truly believe, all of their updates make Google's search results generally more accurate (and better for users) on average, so a roll-back is extremely unlikely. If you have been pinned by a certain algorithm change, it's likely to keep hurting you until you adhere to Google's 'new standards' (which you might argue are lower in your particular niche, but regardless they're not listening)
Sometimes fairy-tales come true and 'Google glitches' get 'undone', resulting in some sites regaining their lost rankings. That happens in maybe 0.001% of situations. Usually what happens is, people get red in the face and angry with Google, argue the toss and see their sites disintegrate as a result. Mathematical algorithms don't care if you're mad or not, they don't care what you expect
With an example, I could give an un-biased 3rd party opinion on why Google is 'doing this' to your site, but it won't result in a quick fix. It will likely result in some weeks of hard graft and further investment
All of the 'standard' ways to measure content relevancy are things like seeing how many times your keyword(s) are mentioned in your content. But the highest relevancy you can demonstrate is nothing to do with keyword deployment; it's matching your site's unique 'value proposition' with Google's perception of the values which the searchers (within your query-space) hold
Maybe there's been a shift and they suddenly value price over service, thus Google shakes up their results to suit. I'm not saying keyword deployment isn't part of the issue; what I'm saying is that the most 'relevant' site is the one which the largest proportion of connected searchers wish to find. It's more than just linguistic semantics and keyword-play (hope that makes sense)
-
RE: Importance (or lack of) Meta keywords tags and Tags in Drupal
Meta keywords are not used by Google any more - and haven't been for about a decade, by the way!
If you fill out your Meta keywords tag for all your pages, anyone with an SEO crawler (like Screaming Frog) can effectively 'steal' all your keyword research. I'd say just don't fill out Meta keywords to be honest. At best it's a waste of time, at worst it's free research for all your competitors
-
RE: How to not appear in incorrect country
You raise valid concerns here and the truth is, it may not be hreflang related - but before we look at anything else, you do technically have a lang / hreflang conflict
Look at this example:
view-source:https://mediabrosonline.com/en/ (view source links only open in Chrome)
Here's your self-referencing hreflang:
rel="alternate" hreflang="en" href="https://mediabrosonline.com/en/" />
Here's your lang tag:
lang="en-US"
Your hreflang says the page is EN international (for all EN users) but your language tag says the page is only for EN speaking users geographically located within the US. So which is it? Confusing for Google
Let's look at an example where the site 'does it right':
view-source:https://mediabrosonline.com/mx/ (view source links only open in Chrome)
Here's your self-referencing hreflang:
rel="alternate" hreflang="es-mx" href="https://mediabrosonline.com/mx/" />
Here's your lang tag:
lang="es-MX"
See! They correctly match. So this shows that on the EN page, implementation is technically wrong. I know, I know - I am really 'splitting hairs' here. But before we look at other factors, let's make your original statement:
"I have the correct hreflang tags"
... actually true! That way we can rule it out
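To align them, a quick sketch of the two options (I don't know which targeting you actually intend, so treat these as assumptions rather than a prescription):
Option A - /en/ really is aimed at US English speakers: keep lang="en-US" on the <html> element and tighten the hreflang to match:
<link rel="alternate" hreflang="en-us" href="https://mediabrosonline.com/en/" />
Option B - /en/ is meant for all English speakers everywhere: keep hreflang="en" as it is and loosen the html tag to lang="en"
Either way, the two declarations then tell Google the same story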