Posts made by effectdigital
-
RE: Getting accurate Geo Location traffic stats in Google Analytics - HELP
I agree with Robin, I don't think there's a solid way to do what you want - or at least, not one that is infinitely superior to GA (which also doesn't damage conversion rates). IP addresses, by and large, are sometimes inaccurate at the city / town level, but they are usually accurate at the national level. The only other option you have in GA is looking at browser language settings, and that's incredibly misleading because most PCs ship with US English as the default
-
RE: I want to shift my website to a new domain name, with my brand name. Would Lose rankings
Most site migrations, whether they involve a redesign or not (or whether they move domain or simply alter existing architecture, e.g: HTTP to HTTPS) will incur a small dip in performance, yes
Usually when you perform a site migration, it's for strategic and not tactical reasons. You're usually thinking of the long term. Your belief is that in the long term, the new domain and / or design / architecture will perform better than the old one(s)
If redirects are not properly handled, you could lose all of your traffic quite easily. If redirects are handled correctly, you're in a much better position but still likely to suffer some small dent in terms of performance (usually not lasting longer than 1-2 months - if you keep producing good content and earning great links)
What you have to remember is, if you always play it safe and never 'evolve', you might incur fewer cuts and bruises now and then - but you will die faster. As others overtake you through their efforts, you sink and fall behind. It's worth striding out there, taking a few nicks and cuts - to preserve your overall life-span for longer (think of it like regular rigorous exercise: it's painful when you do it, but later you see the benefit)
301 redirects can transfer up to 100% of your SEO authority from one place to another, but they won't always. If there are too many links pointing at redirects, that can make them slightly less effective. If redirects begin to chain (redirects to redirects) or if the wrong type of redirect is used, that can drastically affect the transfer and you could see as little as 0% of the prior SEO equity on your new domain. Another thing: if content is relatively different (in machine terms, think Boolean string similarity comparison - NOT "oh yeah, as a person it looks similar to me") on the old and new pages, that can directly obstruct 301 redirect SEO authority transfer. Google has chosen to rank X page; if you replace it with Y content then it becomes a risk to Google. If content is mostly new, it mostly has to prove itself again (and redirects become largely nullified)
To some extent you can get around this by performing backlink amendments. Getting webmasters to change their links to your site, so that they hit the new domain / architecture and not the old one. This means that the backlinks are not flowing through redirects, and thus Google can have more confidence that the new content is just as good (for similar search terms) as the old content was. If many webmasters decline to update links for you, that could be a sign that your old content was more useful than your new content (so roll back!)
Your new domain, if it hasn't been used before (ever) may be sand-boxed by Google for a few weeks. That can be a normal thing, until Google digests all the redirects, re-linking and your usage of Search Console's change of address tool (which you absolutely should use, but don't mess it up by even one character or you'll cause yourself months-long headaches)
Sometimes if everything goes swimmingly, you can get very lucky and not even see a dip at all. That's not the norm, so don't set all your expectations around that
-
RE: Does anyone rate CORA SEO Software?
Never heard of it. The main site neglects to mention what any of the features are and looks a bit 'thrown up' there. Personally without further info I wouldn't shell out for anything like that
-
RE: Best Practice Approaches to Canonicals vs. Indexing in Google Sitemap vs. No Follow Tags
This all sounds good, just make sure before you proceed, you use GA to check what % of your SEO (segment: "Organic") traffic comes from these URLs. Don't act on a hunch, act on data!
-
RE: Categories for Google My Business pages - do they need to match terms on website?
I'm not hugely knowledgeable on GMB, but no, I don't think that GMB checks an associated / verified website and then uses data from within it to sort its own internal GMB listings. I could be wrong, so I'd wait to hear from a couple of others. As far as I know though, GMB rankings are affected by GMB data, and wider-web rankings (on Google's main search engine) are affected by a rich mix of data, including open-web data. But GMB rankings are not (I don't think) affected by open-web data
-
RE: Best Practice Approaches to Canonicals vs. Indexing in Google Sitemap vs. No Follow Tags
First of all keep in mind that Google has chosen the pages it is deciding to rank for one reason or another, and that canonical tags do not consolidate link equity (SEO authority) in the same way that 301 redirects do
As such, it's possible that you could implement a very 'logical' canonical tag structure, but for whatever reason Google may not give your new 'canonical' URLs the same rankings which it ascribed to the old URLs. So there is a possibility here that you could lose some rankings! Google's acceptance of both the canonical tag and the 301 redirect depends upon the (machine-like) similarity of the content on both URLs
Think of Boolean string similarity. You get two strings of text, whack them into a tool like this one - and it tells you the 'percentage' of similarity between the two text strings. Google operate something similar yet infinitely more sophisticated. No one has told me that they do this, I have observed it over hundreds of site migration projects where, sometimes Google gives the new site loads of SEO authority through the 301s and sometimes not much at all. For me, the two main causes of Google refusing to accept new canonical URLs are redirect chains (which could include soft redirect chains) but also content 'dissimilarity'. Basically, content has won links and interactions on one URL which prove it is popular and authoritative. If you move that content somewhere else, or tell Google to go somewhere else instead - they have to be pretty certain that the new content is pretty much the same, otherwise it's a risk to them and an 'unknown quantity' in the SERPs (in terms of CTR and stuff)
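As a toy illustration of the kind of percentage score such tools output (Google's real comparison is far more sophisticated; this is just character-bigram / Sørensen-Dice similarity, and all names here are mine):

```typescript
// Rough illustration of "machine" string similarity: Sørensen-Dice over
// character bigrams. Returns 0..1, where ~0.75 would read as "75% similar".
function bigrams(text: string): Map<string, number> {
  const counts = new Map<string, number>();
  const s = text.toLowerCase().replace(/\s+/g, " ");
  for (let i = 0; i < s.length - 1; i++) {
    const pair = s.slice(i, i + 2);
    counts.set(pair, (counts.get(pair) ?? 0) + 1);
  }
  return counts;
}

function diceSimilarity(a: string, b: string): number {
  const ca = bigrams(a);
  const cb = bigrams(b);
  let overlap = 0;
  for (const [pair, n] of ca) overlap += Math.min(n, cb.get(pair) ?? 0);
  let total = 0;
  for (const n of ca.values()) total += n;
  for (const n of cb.values()) total += n;
  return total === 0 ? 0 : (2 * overlap) / total;
}

// e.g. diceSimilarity(oldPageText, newPageText) -> 0.75 means ~75% similar
```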
If you're pretty damn sure that you have loads of URLs which are essentially the same, read the same, reference the same prices for things (one isn't cheaper than the other), that Google has really chosen the wrong page to rank in terms of Google-user click-through UX, then go ahead and lay out your canonical tag strategy
Personally I'd pick sections of the site and do it one part at a time in isolation, so you can minimise losses from disturbing Google and also measure your efforts more effectively / efficiently
If you no-index and robots-block URLs, it KILLS their SEO authority (dead) instead of moving it elsewhere (so steer clear of those except in extreme situations; they're really a last resort if you have the worst sprawling architecture imaginable). Canonical tags can shift ranking URLs and relevance, but don't pipe much authority. 301 redirects (if handled correctly) do all three things
What you have to ask yourself is, if you flat out deleted the pages you don't want to rank (obviously you wouldn't do this, as it would cause internal UX issues on your site) - if you did that, would Google:
A) Rank the other pages in their place from your site, which you want Google to rank
B) Give up on you and just rank similar pages (to the ones you don't want to rank) from other, competing sites instead
If you think (A) - take a measured, sectioned, small approach to canonical tag deployment and really test it before full roll-out. If you think (B), then you are admitting that there's something more Google-friendly on the pages you don't want to be ranking, and you just have to accept that your Google->conversion funnel will never be completely perfect like you want it to be. You have to satisfy Google, not the other way around
Hope that helps!
-
RE: Google didn't show my correct language-version homepage.
It's hard to know everything which could be affecting this without a link to both pages.
1) It's possible that the Chinese version is just so much more popular (links) that it still gets returned instead of the EN page despite Hreflangs (which Google can overrule in extreme circumstances).
2) It's possible that Google still thinks users will get the 'best deal' on the Chinese version of your site, even when ordering products or services from abroad (look at your currency-normalised price points).
3) It could be that the English version of the page has other technical issues which prevent it from being indexed, which forces Google to list one version only (a common one is if you have regional redirects implemented, but you forgot to 'exempt' Google from those redirects - thus it crawls from a particular data centre and can only see one version of your site, which it keeps being redirected back to).
4) It could be that the brand term originated in China and thus Google considers the brand term to be part of the Chinese language, not part of the English language (and thus you get keyword targeting problems all over the place). If the brand term looks and sounds Chinese and was originally created in (or for) the Chinese market, if most of the links around the web which mention the brand term are Chinese - you can see how Google could get confused.
5) It could just be a Google glitch which you could post about here (but there's no guarantee of a reply).
-
RE: Where can I find Moz Rank for my websites?
This is true, but interestingly you can still pull MozTrust and MozRank for URLs using the Mozscape API. To do that you need an active Moz subscription and a tool like URL Profiler. I'm not saying it's a good idea to utilise deprecated metrics, in fact the truth is quite the opposite and (as per Eli's response) I'd steer clear
However, the question asked how to fetch these metrics and there are still ways to do so. It's just inadvisable
-
RE: My site on desktop browser: page 2 /mobile browser: page 0
Ah I get you. It could be down to your specific mobile deployment being somehow less-indexable than your desktop deployment (more common than you might think). If you can share a few URLs that rank on desktop but not on mobile at all, I (or someone else) will soon take a look at them for you!
-
RE: Tracking PDF downloads from SERP clicks
To address the main question (sorry we got a bit off track) - you can set up virtual page-views which fire when links to these PDF URLs are clicked. In some browsers this will trigger a download, in other browsers (like Chrome, which contains a built-in PDF viewer) - unless the site has been coded a certain way, a download may not actually even occur. The PDF may simply open in a new tab, and render as a web page with a full URL
As such I prefer to use virtual page-views piped to Google Analytics when the links to these documents are clicked, to track their views / downloads (under normal circumstances you can't distinguish between those two view types). Even when a PDF is being viewed 'as' a page on your site in a new tab, remember that PDF documents don't support the GA tracking script (so views of those PDF URLs get 'lost' from GA). You need to use virtual page-views to remedy that
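As a rough sketch of that set-up (assuming the classic analytics.js snippet is already loaded on the page; the '/virtual/pdf/...' naming is just an example):

```typescript
// Send a virtual page-view to Google Analytics whenever a link to a PDF is
// clicked. Assumes the standard analytics.js global `ga` function exists.
declare const ga: (command: "send", hitType: "pageview", page?: string) => void;

document.addEventListener("click", (event) => {
  const target = event.target as Element | null;
  const link = target?.closest("a");
  if (!link) return;
  const href = link.getAttribute("href") ?? "";
  if (href.toLowerCase().split("?")[0].endsWith(".pdf")) {
    // PDF documents can't run the GA script themselves, so log the view here
    ga("send", "pageview", "/virtual/pdf" + new URL(href, location.href).pathname);
  }
});
```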
-
RE: My site on desktop browser: page 2 /mobile browser: page 0
All of the rest of SEO.
Inside of SEO - you have:
- Content optimisation and deployment
- Technical SEO
- Off-site activity
- Social amplification
- CRO / UX best practices
... and many other pillars that come and go. What you have done is some page-speed optimisation. Page-speed optimisation (due to its legacy and history, of how it came about) sort of fits inside of mobile optimisation. Mobile optimisation is itself part of technical SEO
You have partially done (I say partially because your score isn't 90+) one small part, of another small part, of technical SEO which is merely one among many pillars of the practice
What you are missing, is (probably) everything else
-
RE: 404's being re-indexed
First it would be helpful to know how you are detecting that it isn't working. What indexation tool are you using to see whether the blocks are being detected? I personally really like this one: https://chrome.google.com/webstore/detail/seo-indexability-check/olojclckfadnlhnlmlekdihebmjpjnoa?hl=en-GB
Or obviously at scale - Screaming Frog
-
RE: 404's being re-indexed
Well if a page has been removed and has not been moved to a new destination - you shouldn't redirect a user anyway (which kind of 'tricks' users into thinking the content was found). That's actually bad UX
If the content has been properly removed or was never supposed to be there, just leave it at a 410 (but maybe create a nice custom 410 page, in the same vein as a decent UX custom 404 page). Use the page to admit that the content is gone (without shady redirects) but to point to related posts or products. Let the user decide, but still be useful
If the content is actually still there and hence you are doing a redirect - then you shouldn't be serving 404s or 410s in the first place. You should be serving 301s, and just doing HTTP redirects to the content's new (or revised) destination URL
Yes, the HTTP header method is the correct replacement when the HTML implementation gets stripped out. HTTP Header X-Robots is the way for you!
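For illustration, a minimal Express-style sketch combining the two signals (410 status plus the X-Robots-Tag noindex header) - the list of removed paths is purely hypothetical:

```typescript
// Serve removed URLs with a 410 (gone) plus an X-Robots-Tag noindex header,
// so the signals agree: "don't index this, and it isn't coming back".
import express from "express";

const app = express();
const removedPaths = new Set(["/old-offer", "/retired-product"]); // hypothetical

app.use((req, res, next) => {
  if (removedPaths.has(req.path)) {
    res.set("X-Robots-Tag", "noindex");
    return res.status(410).send("This content has been permanently removed.");
  }
  next();
});

app.listen(3000);
```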
-
RE: Changed url and now not listing
Another thing that affects the effectiveness of a 301 redirect, is the 'similarity' of the content (in machine terms, I'm talking about Boolean string similarity). If the content on both pages is highly similar (say, 75% similar) then most of the SEO authority will transfer across. If the content is not very similar at all, what you are doing is replacing 'proven' content with a new, unknown quantity which is a risk to Google. As such, the new content will have to prove itself
-
RE: When serving a 410 for page gone, should I serve an error page?
Completely agree with this.
Also deploy Meta no-index (through the HTTP header / X-Robots, rather than through the HTML if that ends up being a problem... Info on both HTTP and HTML deployments here: https://developers.google.com/search/reference/robots_meta_tag)
Then you're firing both barrels of the shotgun. Telling Google not to index the pages and telling Google that the pages won't be coming back
-
RE: 404's being re-indexed
You know that a 404 just means "not found" - with no promise that the page is gone for good - right? By leaving it ambiguous whether the page might come back, you actively encourage Google to return later
If you want to say that the page is permanently gone use status code 410 (gone)
Leave the Meta no-index stuff in the HTTP header via X-Robots, that was a good call. But it was a bad call to combine Meta no-index and 404, as they contradict each other ("don't index me now but then do come back and index me later as I'll probably be back at some point")
Use Meta no-index and 410, which agree with each other ("don't index me now and don't bother coming back")
-
RE: Find archived sitemap of a website that no longer exists
You can use this site to see legacy site-maps for some websites (though they may be partial or incomplete):
For example, check these sitemap results:
For smaller sites, the results are much easier to look at.
-
RE: Tracking PDF downloads from SERP clicks
This has actually significantly changed my views on PDF optimisation. I didn't know that they held so much optimisation potential. I have always agreed with allowing them to index, but pushed to have them replaced with pages (which contain optional links / buttons to download the original PDF, for users who prefer that)
The sticking point is usually budget. Many clients can't afford the required redesign efforts, so it's good to know that PDFs actually hold (within their native format) some optimisation potential. Thank you EGOL
-
RE: SEO Implications of firewalls that block "foreign connections"
I guess that if Google decides to crawl your site using one of their data-centres in one of the blocked regions, Google will suddenly believe that your whole site has gone down and become inaccessible (Google rarely launches crawls from multiple regional data-centres for one website simultaneously)
Exempting GoogleBot via user-agent would be the only possible work-around (that I know of). If those trying to access your site (whom you are trying to block out) became aware of this modification, they could alter their scripts, browsers and tools to send you the GoogleBot user-agent (thus penetrating your firewall pretty easily)
In the end, you just have to decide what's more important to you
It might be possible to identify Google's data-centre IP addresses from server logs and exempt those instead of exempting their user-agent, but that would probably need a full time employee just to keep up with all the changes. You can be sure that Google won't make it easy to identify their data centers via IP data
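If you did want to go down that road, the approach Google documents for verifying real GoogleBot visits is a reverse-DNS check rather than maintaining static IP lists. A rough Node sketch (not production-hardened; the function name is mine):

```typescript
// Verify a "GoogleBot" visitor by reverse DNS: the IP should resolve to a
// *.googlebot.com / *.google.com hostname, and that hostname should resolve
// back to the same IP. Spoofed user-agents fail this check.
import { promises as dns } from "dns";

async function isRealGooglebot(ip: string): Promise<boolean> {
  try {
    const hostnames = await dns.reverse(ip);
    for (const host of hostnames) {
      if (!/\.(googlebot|google)\.com$/.test(host)) continue;
      const { address } = await dns.lookup(host);
      if (address === ip) return true; // forward-confirmed reverse DNS
    }
  } catch {
    return false;
  }
  return false;
}
```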
-
RE: Subdomain or subfolder?
The only real info I've seen direct from Google on a similar subject, is on this page:
If you scroll down, there's a table on the page. It's a table of Google's supposed views on the pros and cons of different configurations (e.g: sub-folders vs sub domains) when considering an international roll-out. Obviously your situation is slightly different
All I'd say is, your home-page is supposed to be the top of the tree. The main page from which all other sub-things (including sub pages and sub domains) stem from
That being the case - why the heck would you have the homepage on a sub-page of a sub-domain? It's kind of like building an automobile with its wheels on the roof
I can't find any specific guidance on why you shouldn't do this. But my suspicion is, no one has felt the need to write much on it because it seems like sheer lunacy
If you have a developer / designer who can't work with a normal structure, I'd probably replace them with someone more competent. That to me sounds like very worrying whinging (and I'm usually someone who backs devs to the hilt!)
-
RE: Is there a benefit to changing .com domain to .edu?
No, none whatsoever. The old TLD-bonus debates spotted an accurate correlation but inferred completely inaccurate causation
People thought:
1) I see lots of EDU sites
2) They rank really well
3) If I make an EDU it will rank well
... WRONG! Google aren't that stupid. Otherwise all webmasters would now be using EDU domains and all other domains would be pointless (which would be a weird internet to live on)
The truth was actually this:
1) EDU TLDs (Top-Level Domains) tend to be chosen by educational bodies or organisations
2) Such organisations are usually run by educated people and academics
3) One thing those people are good at, is creating really strong (in-depth) accurate content
4) As such many EDU sites naturally became prominent, because of Google's normal ranking rules (not some weird EDU TLD bonus scheme)
If you're looking for quick and easy answers in SEO, you're gonna have a bad time
-
RE: Duplicate content
This is good advice. Canonical tags would be a weak fallback compared with picking where you want to spend your efforts and choosing one 'main' site. As such, just like Alex has suggested - redirects and site consolidation are probably the best bet (this would also bind the backlinks for both domains to one domain)
This could end up being pretty technical depending upon current site(s) performance. If it's done incorrectly (e.g: using 302s instead of 301s) then it could be a disaster
Alex's advice is right but just make sure you're careful how you approach this
-
RE: Keyword position
It's great that you fixed your crawl issues, but SEO is about much more than crawl issues and content
I highly advise that you watch this Moz whiteboard Friday video:
It was posted in 2014, but it's such a good video that I send it to potential clients all the time (even to this day). Because SEO performance is constructed of so many elements (some of which are NOT part of your own coding and website) - you can't rely on just one thing to move keyword positions
You have to look at SEO 'in the round' if you want to be successful
-
RE: My website is struggling to receive traffic I think I have a serious error
As it turns out, Robin's insight here (that HTTP redirects to HTTPS via 302 redirects, instead of 301s) pretty much hits the nail on the head
Here's the data for anyone who is interested (and can help OP more):
- https://d.pr/f/TskHsn.xlsx (spreadsheet download)
I spent a lot of time on this data. I compiled all of OP's backlinks from many sources (Ahrefs, Majestic etc) and then re-crawled using Screaming Frog. This shows all of OPs backlinks, their current status and critically how they 'land' on OP's pages / URLs (at the destination end)
Surprise surprise, almost all links point to HTTP (not to HTTPS) and are then 302 redirected instead of 301'd, thus cutting off almost all link equity post HTTPS migration
Whoever fked up here did an epic job of messing up OP's internal SEO authority flow. This is probably now the leading on-site factor in terms of OP's site struggles on Google (so thanks for that, Robin Lord!)
I still think there's an off-site element, but this needs fixing ASAP. All HTTP->HTTPS oriented 302 redirects must be converted to 301s with immediate effect
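For anyone wondering what the fix looks like in practice, here's a minimal Express-style sketch (assuming the site sits behind a proxy that sets x-forwarded-proto; the real fix belongs in whatever server or CDN is currently issuing the 302s):

```typescript
// Force HTTP -> HTTPS with a 301 (permanent) rather than a 302, so link
// equity passes through the redirect instead of being cut off.
import express from "express";

const app = express();
app.set("trust proxy", true); // so req.secure respects x-forwarded-proto

app.use((req, res, next) => {
  if (!req.secure) {
    return res.redirect(301, `https://${req.headers.host}${req.originalUrl}`);
  }
  next();
});
```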
-
RE: Rel="prev" / "next"
I had never actually considered that. My thought is: no. I'd literally just leave canonicals entirely off ambiguous URLs like that. I've seen a lot of instances lately where over-zealous sculpting has led to loss of traffic. For this exact comment / reply it's just my hunch, but I'd remove the tag entirely. There's always risk in adding layers of unrequired complexity, even if it's not immediately obvious
-
RE: My website is struggling to receive traffic I think I have a serious error
What Robin says about 301s and 302s is pure truth and that could also be a significant contributing factor, especially if it's quite widespread. Chantelle remind me at some point via email to look into this and nail down the 'exact' URLs that are 302-ing. If there is a problem there, we can find it and address it
-
RE: Few pages without SSL
It may potentially affect the rankings on:
- pages without SSL
- pages linking to pages without SSL
At first, not drastically - but you'll find that you get more and more behind, until you wish you had just embraced HTTPS.
The exception to this of course, is if no one who is competing over the same keywords, is fully embracing SSL. If the majority of the query-space's ranking sites are insecure, even though Google frowns upon that - there's not much they can do (they can't just rank no one!)
So you need to do some legwork. See if your competitors suffer from the same issue. If they all do, maybe don't be so concerned at this point. If they're all showing signs of fully moving over to HTTPS, be more worried
-
RE: Rel="prev" / "next"
Both are directives to Google. All of the "rel=" links are directives, including hreflang, alternate/mobile, AMP, prev/next
It's not really necessary to use a canonical tag in addition to any of the other "rel=" family links
A canonical tag says to Google: "I am not the real version of this page, I am non-canonical. For the canonical version of the page, please follow this canonical tag. Don't index me at all, index the canonical destination URL"
The pagination based prev/next links say to Google: "I am the main version of this page, or one of the other paginated URLs. Did you know, if you follow this link - you can find and index more pages of content if you want to"
So the problem you create by using both, is creating the following dialogue to Google:
1.) "Hey Google. Follow this link to index paginated URLs if they happen to have useful content on"
*Google goes to paginated URL
2.) "WHAT ARE YOU DOING HERE Google!? I am not canonical, go back where you came from #buildawall"
*Google goes backwards to non-paginated URL
3.) "Hey Google. Follow this link to index paginated URLs if they happen to have useful content on"
*Google goes to paginated URL
4.) "WHAT ARE YOU DOING HERE Google!? I am not canonical, go back where you came from"
*Google goes backwards to non-paginated URL
... etc.
As you can see, it's confusing to tell Google to crawl and index URLs with one tag, then tell them not to with another. All your indexation factors (canonical tags, other rel links, robots tags, HTTP header X-Robots, sitemap, robots.txt files) should tell the SAME, logical story (not different stories, which contradict each other directly)
If you point to a web page via any indexation method (rel links, sitemap links) then don't turn around and say, actually no I've changed my mind I don't want this page indexed (by 'canonicalling' that URL elsewhere). If you didn't want a page to be indexed, then don't even point to it via other indexation methods
A) If you do want those URLs to be indexed by Google:
1) Keep in mind that by using rel prev/next, Google will know they are pagination URLs and won't weight them very strongly. If however, Google decides that some paginated content is very useful - it may decide to rank such URLs
2) If you want this, remove the canonical tags and leave rel=prev/next deployment as-is
B) If you don't want those URLs to be indexed by Google:
1) This is only a directive, Google can disregard it but it will be much more effective as you won't be contradicting yourself
2) Remove the rel= prev / next stuff completely from paginated URLs. Leave the canonical tag in place and also add a Meta no-index tag to paginated URLs
Keep in mind that, just because you block Google from indexing the paginated URLs, it doesn't necessarily mean that the non-paginated URLs will rank in the same place (with the same power) as the paginated URLs (which will be mostly lost from the rankings). You may get lucky in that area, you may not (depending upon the content similarity of both URLs, and upon whether Google's perceived reason to rank that URL hinged strongly on a piece of content that exists only in the paginated variant)
My advice? Don't be a control freak by going with option (B); use option (A) instead. Free traffic is free traffic, don't turn your nose up at it
-
RE: Does redirected traffic still contribute to SEO?
Yes, there is a difference - but it is variable.
If you have links pointing to a redirect which:
1) chains (redirects stringing together)
2) is not a 301
3) lands on a page with highly dissimilar content (in machine terms, think Boolean string similarity) to Google's last active cache of the redirecting URL
... then your links are likely to be nullified, or they won't help you very much
Use redirects that point to (mathematically / %-wise) similar content. Use 301s, don't chain your redirects
If you meet all of these conditions, then some links can continue to supply a decent amount of ranking authority, even through redirects
-
RE: My website is struggling to receive traffic I think I have a serious error
So far we have identified some potential issues:
1.) Backlinks don't seem great. I took backlink data from a load of tools (including Ahrefs, Majestic, SEOSpyGlass etc) and funneled them all into SEMRush for it to evaluate those (in addition to the ones it found by itself) and give a toxicity rating. This is what we're looking at - screenshot
2.) Because links are a state, a forensic - intelligent disavow (which doesn't disavow the decent links) is sorely needed as at this point algorithmic devaluations are in play and a penalty may be looming (not too far off)
3.) Once that's complete - the disavow will likely result in a very minor dip (as no one's view of what Google thinks are good / bad links, is perfect). Due to this some really good link building (Digital PR level link building) will be needed afterwards, to clog the wound (only a small wound, but will still need clogging)
4) Someone has been over-zealous with the indexation sculpting. Canonical tags (which also act like no-index tags, because they tell Google that the 'active' URL is non-canonical, and point it elsewhere) could be removed from the AMP pages on this site and also from a string of parameter URLs. When you use hreflangs, you don't canonical the foreign URLs to the original language. You just use the hreflangs, on their own! Same should be true for AMP links (they're both part of the rel=/link family). Yes, it's sometimes common on a site with sprawling architecture to rein in parameter URL indexation. Our pal here (OP) isn't in that predicament - so it's been misapplied
5.) The site wasn't registering as mobile friendly earlier. Now that seems to have been fixed but implementation may need examining in more detail (e.g: check a page of every template type in Google's mobile friendly tool, not just the homepage. Check implementation didn't hurt page-loading speeds too much)
6.) Mobile-oriented page-loading speeds, last I checked, didn't even achieve a rating of 20 on Google PSI (it was in the teens). That's real bad news and probably still needs looking into
^ This is all the stuff I've found so far. Any further help, from anyone else would be amaze-balls
-
RE: Few pages without SSL
Yes that can hurt Google rankings. Insecure pages tend to rank less well and over time, that trend is only set to increase (with Google becoming less and less accepting of insecure pages, eventually they will probably be labelled a 'bad neighborhood' like gambling and porn sites). Additionally, URLs which link out to insecure pages (which are not on HTTPS) can also see adverse ranking effects (as Google knows that those pages are likely to direct users to insecure areas of the web)
At the moment, you can probably get by with some concessions. Those concessions would be, accepting that the insecure URLs probably won't rank very well compared with pages offering the same entertainment / functionality, which have fully embraced secure browsing (which are on HTTPS, which are still responsive, which don't link to insecure addresses)
If you're confident that the functionality you are offering, fundamentally can't be offered through HTTPS - then that may be only a minor concern (as all your competitors are bound by the same restrictions). If you're wrong, though - you're gonna have a bad time. Being 'wrong' now, may be more appealing than being 'dead wrong' later
Google will not remove the warnings your pages have, unless you play ball. If you think that won't bother your users, or that your competition is fundamentally incapable of a better, more secure integration - fair enough. Google is set to take more and more action on this over time
P.S: if your main, ranking pages are secure and if they don't directly link to this small subset of insecure pages, then you'll probably be ok (at least in the short term)
-
RE: My website is struggling to receive traffic I think I have a serious error
Thanks for all the info. I got your email and replied in full
-
RE: Redirecting an Entire Site to a Page on Another Site?
If you shed most of your content and / or pages, you WILL lose most of the SEO authority.
301 redirects can transfer 'up to' 100% of SEO authority, from one place to another
They fall down in two situations:
- Chaining redirects
- Dissimilar content
Even if 301s go A-to-B, if the content is highly dissimilar, the SEO authority gets vented instead of transferring over. This is to put a stop to black-hat techniques, like buying up old authoritative domains and then 301-ing them all to one page for a giant SEO boost (it used to work, it kind of works now... but not well at all - and not for long)
If a machine can read all the content from Google's last 'active' cache of the old URLs, and then read the content and technical / brand facets of the new URL - and feel they are highly dissimilar (in machine terms, think Boolean string similarity) then... the 301 redirect loses much of its power
Google don't see you as the owner of your own page's SEO authority. You didn't earn it - the pages did, with their great content. As such, if you're removing content from the web which Google likes and putting an 'unknown quantity' in its place, that's a risk to Google's rankings. Be very careful how you move...
-
RE: My website is struggling to receive traffic I think I have a serious error
So in the SEO industry, we have tools which measure a site's estimated worth and traffic intake (just from search, it doesn't tend to reflect anything else)
One is SEMRush:
- https://d.pr/i/zvr8cY.png (screenshot)
The other main one, is Ahrefs:
- https://d.pr/i/0EbHaI.png (screenshot)
Neither of these tools picked up extremely significant movements within the past year. Neither of them seem to think that (in terms of SEO) the site was ever doing that well to begin with.
Basically these tools contain colossal indexes of Google keywords. They monitor these high to mid-value terms frequently, and see who is ranking. They leverage CTR (click-through-rate) data against ranking positions to estimate 'search visibility' (which is like an ultra-rough traffic estimate - never to be taken as an absolute)
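As a toy illustration of how that kind of estimate works (the CTR curve and keyword data below are completely made up):

```typescript
// Toy version of a "search visibility" estimate: monthly search volume
// multiplied by an assumed CTR for the ranking position. All values are
// invented purely for illustration.
const ctrByPosition: Record<number, number> = {
  1: 0.30, 2: 0.15, 3: 0.10, 4: 0.07, 5: 0.05,
};

const rankings = [
  { keyword: "example query a", volume: 5000, position: 2 }, // hypothetical
  { keyword: "example query b", volume: 1200, position: 5 },
];

const estimatedVisits = rankings.reduce(
  (sum, r) => sum + r.volume * (ctrByPosition[r.position] ?? 0.01),
  0
);
// -> 5000*0.15 + 1200*0.05 = 810 estimated monthly visits (ultra-rough)
```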
If these tools aren't showing that anything bad happened (or if they're not showing that performance was ever very good), then there are some possible reasons:
- The tools happen not to contain most of your main keywords in their keyword indexes
- Your visits were never coming from SEO in the first place, or you had broken tracking which was inflating those numbers
- You had good tracking and your SEO company broke it (thus making it look falsely like there's been some massive drop)
- Your visits were coming from SEO, but mostly not from Google. Other search engines exist
It's hard to know which of these is the main offender, or whether there's another reason (some 'unknown-unknown'). I'll take a brief look into your Analytics profile if you want. It could possibly shed some light in terms of what the heck is going on!
If you want to connect further, my email is on my profile page. I can't promise I'll find a solution, but these kinds of problems intrigue me
-
RE: Impressions skyrocketed 10x in few days on just one keyword and I don't understand why.
Or the keyword was just searched more. Since your ranking position seems relatively stable, it would make sense that for some reason - more people searched the keyword than usual
You can always check using Google Trends
If it seems like there was no national or international spike in search interest, it's probably a GSC data-push update or something similar
-
RE: Increasing in 404 errors that doesnt exist
You can put the domain here - I'm sure lots of people would like to weigh in on this, it's an interesting problem
I have replied to your email
-
RE: Robots.txt & Disallow: /*? Question!
With this kind of thing, it's really better to pick the specific parameters (or parameter combinations) which you'd like to exclude, e.g:
User-agent: *
Disallow: /shop/product/&size=*
Disallow: */shop/product/*?size=*
Disallow: /stockists?product=*
^ I just took the above from a robots.txt file which I have been working on, as these particular pages don't have 'pretty' URLs with unique content on. Very soon now that will change and the blocks will be lifted
If you are really 100% sure that there's only one param which you want to let through, then you'd go with:
User-agent: *
Disallow: /*?
Allow: /*?utm_source=google_shopping*
Allow: /*&utm_source=google_shopping*
(or something pretty similar to that!)
Before you set anything live, get down a list of URLs which represent the blocks (and allows) which you want to achieve. Test it all with the Robots.txt tester (in Search Console) before you set anything live!
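If you want a rough programmatic sanity-check to run alongside Search Console's tester, here's a minimal sketch approximating Google-style wildcard matching (simplified - it doesn't model the longest-rule precedence between Allow and Disallow):

```typescript
// Approximate a robots.txt pattern check with Google-style wildcards:
// '*' matches any characters and a trailing '$' anchors the end of the URL.
function robotsPatternMatches(pattern: string, path: string): boolean {
  const anchored = pattern.endsWith("$");
  const body = anchored ? pattern.slice(0, -1) : pattern;
  const regexBody = body
    .replace(/[.*+?^${}()|[\]\\]/g, "\\$&") // escape regex metacharacters
    .replace(/\\\*/g, ".*");                // then re-enable '*' wildcards
  return new RegExp("^" + regexBody + (anchored ? "$" : "")).test(path);
}

// e.g. robotsPatternMatches("/*?", "/shop/product?size=9") -> true
// e.g. robotsPatternMatches("/*?", "/shop/product")        -> false
```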
-
RE: Can an external firewall affect rankings?
Site speed impact is where I see this becoming a real problem, unless the setup is done correctly
-
RE: Informational query
On Google, query-spaces can become ambiguous. For some keywords, Google know that there is a very strong affinity in terms of the user's search-intent
For example, if the query is: "properties to rent in Camden, London" - then it's almost certain that the searcher is looking for a new place to live and wants to see rental property listings
If on the other hand, the query is something like "science", that's extremely broad. Do the users want science news? Maybe to pick up a sciences degree? Do they want to know the basic principles of science (e.g: the scientific method?)
The answer to your question is variable. It's not that Google 'always' assumes one meaning, or 'always' assumes multiple meanings. It depends upon the specific search-query, and the resources available within the associated query-space
You'll find that some query-spaces are very, very noisy and not really very helpful - because there are just too many search audiences 'competing' (through their clicks and queries) for 'control' of the query-space. Some query-spaces are like a battleground; others are much more straightforward and easy to interpret
As a general rule of thumb, if a search query returns results predominantly from one type of site - all about the exact same thing, that query-space is 'clean'. If you search for something and the results are messy and all over the place, then the query-space is 'noisy'
It's easier to optimise for clean query-spaces, but because they are clean your competition will be harder to overcome. In a noisy query-space, it's harder to write that one piece of content that addresses everyone perfectly - but competition is usually not as stiff (because most people can't be bothered optimising for noisy query-spaces, you can't do it with crappy textbroker articles - it takes real thought!)
So there you go. You should now have a lens to analyse Google's results with, and decide upon your SEO / content implementation
-
RE: For a parent blog on our website, what should we go for - Subdomain or Subdirectory?
I agree with this response. If the content is mostly under your own control (the main content is yours, maybe with some scattering of UGC 'comments' from parents) then go subdirectory
If in the future you create an area where you have way less control over what is posted (like a forum, which is 99% UGC) then go subdomain
-
RE: Related Topics what is this ?
It would really help to get some context. Are you just talking about the general phraseology, of what people might mean when talking about a 'related topic'? Are you talking about related search queries, appearing on Google's front end? Maybe you are talking about a place in Moz's back-end where you have seen 'related topic' appearing as a field or column in the data output
If you can give a bit more detail, I am sure that someone here could help
-
RE: Temporarily redirecting a small website to a specific url of another website
This all comes down to the fact that, technically, 302 has always meant 'found', but there was no status code for a temporary redirect, so Google advised people to use 302 (as no one really ever used it for its intended purpose)
Now you have 307. To this day, you can still use 302 or 307 (we're still in the transition period, where both still function identically)
A 301 will gradually transfer SEO authority from one page to another, over a few weeks / months - so that the old URL stops ranking and the new URL 'has a chance' of ranking in its place. If the new URL has highly dissimilar content (in machine-terms) then the 301 fails to transfer a portion of the authority and some is 'deleted' (vented into cyberspace)
A 302 retains the ranking benefit on the old page and nothing is transferred to the new page (period). Over time (a month or six) the 302 will decay. Slowly the authority (which has been kept on the old URL) will begin to 'die off' and you end up (in an extreme situation) with no authority left anywhere from that particular URL (it's just gone). 307s function the same way
As such, using a 302 or 307 is the correct measure, but remember - Google will be watching to check that the redirect really is temporary. If your whole company forgets about restoring the content to the original URL (for a significant period of time) then don't expect that there will be anything left when you come back
In an ideal world, you'd turn it all around inside of one month if you wanted some good juice left when you lifted the 302 / 307
-
RE: Without slash URLs not redirected with slash URLs; but canonicalised: Any potential harm at Google?
The potential harm is that, even though canonical tags stop duplicate content from being a problem - they don't do much to consolidate backlink authority hitting each web-page. If you have two pages which both have links pointing to them (with "/" and without "/") then only 301s will properly 'merge' those URLs (assuming that their content is near identical) in terms of backlink authority.
For this reason, a real architectural solution is always better than using canonical tags. Canonical tags are really a fall-back measure, if you have poor on-site architecture which you fundamentally cannot change. The end goal, though, is not to need them.
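To illustrate, a minimal Express sketch of that architectural fix (assuming the trailing-slash URLs are the ones you want to keep - flip the logic if not):

```typescript
// Consolidate "/page" and "/page/" at server level with a 301, so backlink
// authority merges onto one URL instead of leaning on canonical tags.
import express from "express";

const app = express();

app.use((req, res, next) => {
  const [path, query = ""] = req.originalUrl.split("?");
  if (!path.endsWith("/") && !path.includes(".")) { // skip file-like URLs
    return res.redirect(301, path + "/" + (query ? "?" + query : ""));
  }
  next();
});
```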
-
RE: Increasing in 404 errors that doesnt exist
It's so annoying when things like that happen! When Google refuses to give the 'linked from' data, it's a real head-test working out where the links are coming from. Did you know that the links could even be coming from other websites, not just your own? When a user follows a link to your site (regardless of where that link is from), Google consider it your error if a valid page isn't returned
Since this error is only occurring in the old area of WMT, it probably doesn't matter much. That being said, one simple fix would be to 301 redirect all the broken links, to the functional article pages. After that you can just bulk mark them all as fixed
Usually I tell people to fix the actual link, but if it's an external link which you have no control over (or if Google can't even be bothered to tell you what the linking page is) then 301 and mark as fixed is probably your best bet. Especially since, these are only individual article pages (it's not like a malformed version of your homepage or something)
If you email me the domain (check my profile page) then I might be able to crawl your site for you to determine whether there are any obviously broken internal links. Regardless, you'd want the 301s as a back-stop anyway
Hope that helps
-
RE: Very wierd pages. 2900 403 errors in page crawl for a site that only has 140 pages.
Almost right, but 'just about' wrong; the 403 error is only served once a URL 'is' accessed. The content may not be accessible (as it's forbidden) but the URL itself still is. Whilst it's unlikely that these URLs would ever be indexed, there's still an infinite loop in the link architecture which could impact upon crawl allowance and site health metrics
I'd get it sorted out!
-
RE: Correct use of schema for online store and physical stores
Google state here:
https://developers.google.com/search/docs/data-types/local-business
That "Local Business" is what they use. "Organization" does not appear in that list
Think about what you want to achieve. Utilising schema helps contact details (and many other, granular pieces of information) to jump out for brand, or entity-based queries
If you have a head office which you're working on, aren't most of the queries to HQ internal? Do you really want people calling up HQ instead of going to one of the purpose-built consumer outlets? Obviously if you're looking to attract a mixture of B2B and B2C leads, what I'm saying might not quite be accurate
In most circumstances, I wouldn't want work-offices (HQ) to be more visible in Google's search results, so I would eradicate all schema. Then I'd just go with LocalBusiness schema for all the outlets
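To illustrate, a minimal sketch of LocalBusiness markup for a single outlet, built and injected as JSON-LD - every business detail below is a placeholder, not real data:

```typescript
// Build LocalBusiness structured data for one outlet and inject it as a
// JSON-LD script tag. All values are placeholders.
const outletSchema = {
  "@context": "https://schema.org",
  "@type": "LocalBusiness",
  name: "Example Outlet - High Street", // placeholder
  telephone: "+44 20 0000 0000",        // placeholder
  address: {
    "@type": "PostalAddress",
    streetAddress: "1 High Street",     // placeholder
    addressLocality: "London",
    postalCode: "AA1 1AA",
    addressCountry: "GB",
  },
  openingHours: "Mo-Sa 09:00-17:30",
};

const script = document.createElement("script");
script.type = "application/ld+json";
script.textContent = JSON.stringify(outletSchema);
document.head.appendChild(script);
```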
-
RE: Canonical and Alternate Advice
This is the correct solution!
-
RE: Quality Links From Web Directory Usefull ??
Equally a pleasure, there's a few of us on here today