No, you don't need to change anything. In fact, you actively DON'T want the HTTP sitemap feeding Google a list of HTTP URLs, which I'm sure you are trying to steer Google away from. Only feed Google the HTTPS URLs, and delete the HTTP sitemap from Search Console if you can, so that it doesn't keep flagging false positives and feeding Google bad (insecure) URLs
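To illustrate, a minimal sketch of the kind of sitemap you do want Google chewing on - HTTPS URLs only (example.com is obviously a placeholder):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Only the canonical HTTPS URLs belong in here; no HTTP entries at all -->
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://www.example.com/</loc></url>
  <url><loc>https://www.example.com/about/</loc></url>
</urlset>
```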
effectdigital
@effectdigital
Job Title: SEO
Company: Effect Digital Ltd.
Effect Digital is a productive partner to ambitious brands. Connecting organisations and audiences in a digital-first world
Favorite Thing about SEO
The blend of art and science
Latest posts made by effectdigital
-
RE: No index for http version of website
-
RE: Does &pws=0 still work?
This post from Jan 2020 seems to assume that you can still use &pws=0
... but I don't know how reliable it is!
-
RE: Keyword rich domain names -> Point to sales funnel sites or to landing pages on primary domain?
To me this depends upon the traffic build of your old domains. If they mostly receive direct and referral traffic, then the redirect idea could work very well. If they gain most of their traffic from Google, redirecting them will eventually make them stop ranking, as Google don't like to rank redirecting URLs in the long term
Once that occurs, your main site may gain ranking features from your old sites, but even with perfect redirects (using the mighty 301) you would still stand to lose rankings. Google will basically check how similar the last active cache of the redirecting URL is to your new page (the redirect destination). Even with a 301, if the content (in machine / Boolean terms) is highly 'dissimilar', then your new page will only receive a fraction of the SEO / ranking authority of the old (redirecting) URL. This is to stop webmasters buying up authoritative expired domains, redirecting them to themselves and gaining free ranking power
From Google's POV, a lot of ranking 'power' (authority) still comes from links. Which sites have the best links? Do other sites in the same area of the web (same theme) have more quality links? How fresh are those links? Are there any positive / negative trust signals to refract along that axis?
When a page ranks well on Google, it is because it has recently (or historically) 'impressed the web' (thus gaining backlinks and un-linked citations). If you replace a page which has 'earned' links with another page (like a sales funnel), or redirect it to a completely different page, why should that new page benefit from the same links? The webmasters who linked to the old URL may not have chosen to link to the new page (be it a replacement or redirect destination), so it shouldn't see loads of SEO authority coming from a past legacy
Obviously if you just change domain and the pages are essentially the same, then it's fair that those pages retain their former Google rankings. This is why Google has to validate the 'similarity' of old vs new pages (whether they replace the current content, or exist at the end of a redirect)
Be careful with your path forwards. You could have a 'great idea' only to lose most of the traffic which those domains were supplying
Obviously if the old domains which you are sweeping up don't see much traffic from Ads or Google (SEO / organic), then you can do pretty much whatever you want with them. But if the traffic came mostly from Google (organic) then it may be tricky. It may also be tricky to redirect the domains if paid ads are served to them, as ads will often be 'disapproved' if they point to a redirecting URL (true of Facebook and Google ads). So at the very least, a major overhaul of your ad campaign(s) would be required
-
RE: Why not just use an alias if the only change is a different domain Name?
It depends what you mean by 'alias'. If you mean configuring the old domain to properly 301 redirect all URLs from the old site to the new site (so the old site becomes inaccessible, because it now serves purely as a redirect platform), then yes. If you mean doing something else - like pointing the old domain at your new site by any means other than 301 redirects - it's probably not a good idea for SEO!
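For what it's worth, a minimal sketch of that kind of 'proper' 301 setup, assuming Apache / .htaccess on the old domain (old-domain.com and new-domain.com are placeholders):

```apache
# Every URL on the old domain 301-redirects, path-for-path, to the new domain
# (requires mod_rewrite)
RewriteEngine On
RewriteCond %{HTTP_HOST} ^(www\.)?old-domain\.com$ [NC]
RewriteRule ^(.*)$ https://www.new-domain.com/$1 [R=301,L]
```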
-
RE: What are best SEO plugins for wordpress?
I'm still watching RankMath like a hawk. It really does look very good
-
RE: Does google penalize you if you post content in french and english on a website
No - translations don't count as duplicate content, but you should ensure that your site has a proper multiregional build-out (e.g: site.com/press-releases/article (EN) vs site.com/fr/press-releases/article (FR))
You should properly 'build out' the site in an international way; don't use low-quality auto-translate plugins or live-translation features. You will need all your hreflang tags set up properly, so Google knows the pages are alternate language variants (see: https://yoast.com/hreflang-ultimate-guide/)
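A minimal sketch of what the hreflang tags look like in the <head> of both variants (site.com and the paths are just placeholders):

```html
<!-- On BOTH the EN and FR pages, so the annotations are reciprocal -->
<link rel="alternate" hreflang="en" href="https://site.com/press-releases/article/" />
<link rel="alternate" hreflang="fr" href="https://site.com/fr/press-releases/article/" />
<link rel="alternate" hreflang="x-default" href="https://site.com/press-releases/article/" />
```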
See this from 2011: https://www.youtube.com/watch?time_continue=2&v=UDg2AGRGjLQ&feature=emb_logo where Matt talks about whether translations are duplicate content. AFAIK Google's stance hasn't changed loads. The translation must add value and you must use human-translated content (written by someone competent enough that it doesn't read as if it were written by a machine)
More recently, John Mu (from Google) has said that auto-translated content won't attract penalties, but the rankings will suck - so basically, still get humans to write stuff: https://www.seroundtable.com/google-auto-translating-content-penalty-28413.html
Interestingly Google recently said that they think there may come a time in the future where auto / machine-translated content is acceptable: https://www.seroundtable.com/machine-written-content-google-guidelines-28338.html
... but as of now, it's still considered poor and against guidelines!
-
RE: Should our rebranded company update our existing Instagram profile or delete it and start from scratch?
I would not advise starting from scratch, unless you want to lose all your followers and all the progress which has been made so far! If you can possibly redesign and alter the name / URL of the profile, doing that would be a much better idea. The only exception to this is if your profile has never previously performed well for you and has seen a lot of bot traffic (which may be perceived as negative by IG). If your profile has anything remotely positive going for it, keep it.
-
RE: Removing a site from Google index with no index met tags
This is the best response. Others have cited using robots.txt; that's a bad idea IMO. Robots.txt will stop Google from crawling pages, whereas Meta no-index directs Google not to index a page. If Google can't crawl a page (due to robots.txt) then they won't be able to 'find' the no-index directive. As Jordan says, no-index should come first. When all pages are de-indexed, then OP can begin to think about robots.txt as suggested by Rajesh. OP could also combine Meta no-index with status code 410 (gone) to make it a stronger signal - though this is inadvisable in OP's situation (where the site will remain live for users, but be gone from Google). In the end, Jordan's reply is the best one which has been left here
A final note might be that, instead of editing the HTML of all of OP's pages, OP could fire no-index through x-robots via the HTTP header (which is often more scalable)
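A rough sketch of the x-robots approach, assuming Apache with mod_headers (adjust the file match to suit your URL structure):

```apache
# Sends "X-Robots-Tag: noindex" on every matching response,
# so no per-page <meta> edits are needed
<FilesMatch "\.(html|php)$">
  Header set X-Robots-Tag "noindex"
</FilesMatch>
```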
-
RE: Strange landing page in Google Analytics
The default URL is what you should change. It is the setting which controls how the landing pages are written in the GA interface. It is necessary because Google Analytics (STILL) does not track protocol (only hostname and page paths, which is really annoying!)
If GA would just track protocol, all of this could be handled for you automatically. Since that's not the case, you have to amend this property setting
-
RE: SEO + Structured Data for Metered Paywall
You could just exempt Googlebot's user-agent from your paywalling mechanism. Theoretically users could use Chrome extensions to alter their own user-agent to 'Googlebot' and thereby evade your paywall, but Joe average user isn't going to do this (Ad-blocker usage is far more common than user-agent evasion)
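A minimal sketch of that exemption, assuming a Node / Express front-end purely for illustration (renderFullArticle and renderPaywall are hypothetical stand-ins for your own handlers):

```js
const express = require('express');
const app = express();

// renderFullArticle / renderPaywall are hypothetical placeholders
app.get('/articles/:slug', (req, res) => {
  const ua = req.get('User-Agent') || '';
  if (/Googlebot/i.test(ua)) {
    // Googlebot gets the full content, bypassing the metering logic
    return res.send(renderFullArticle(req.params.slug));
  }
  // Everyone else goes through the normal metered paywall
  return res.send(renderPaywall(req.params.slug));
});

app.listen(3000);
```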
Best posts made by effectdigital
-
RE: How Much Time It Will Take To Lower the Spam Score?
If you're talking about the Moz spam score of the domain, it's higher than many would like but it's not extremely high:
https://d.pr/i/4WwfDq.png (screenshot)
A score of 80 or higher is indicative of very, very spammy sites. Although lots of strong language is used within the tool, 50% isn't super awful really. By the way, Moz is not connected to your disavow file and cannot see it (something many have requested many times, which I continue to request at any given opportunity). As such, disavow work will not decrease your Moz spam score
Moz's spam score is not something which Google use within their ranking algorithm(s). Google have private spam metrics which they do not share with webmasters. Moz's spam score is simply an attempt by our industry to make our 'best guess' at how spammy Google 'might' think a website is. Ultimately though, it's just an indicator and a 'shadow metric', it's meant to mimic the decisions that Google might make but Google (again) does not actually use any Moz metrics (at all) within their ranking algorithm(s)
Your disavow file goes straight to Google, so even if Moz doesn't see it and their best 'guess' is that your spam score is still high, you know that 'actual Google' have seen your disavow work and thus Moz's spam metric is not likely to be accurate for your domain (which is why it's only an indicator, even when looking at other domains, as you don't know what link removal and disavow work they have carried out)
If you want your actual Moz spam score to go down (though there is no reason for such vanity, as Google doesn't use Moz metrics) then you have to actually remove the links and that's that (sorry)
Remember that the spam score is derived based on factors which Moz perceives as being common to penalised websites:
Spam Score: "Represents the percentage of sites with similar features we've found to be penalized or banned by Google." ~ Moz
This is not necessarily linked to backlink features in isolation, I would expect that some on-page features may be counted. The site just doesn't look and feel very legit:
1.) A review site, with only seven reviews, one of which appears to be for a gun or fire-arm (paintball or not, it's a gun)
2.) No seeming ability for any users to add their own reviews, so this is just one person's biased voice. Why does the internet need this website?
3.) Logo is blurry and low-res and doesn't look 'proper'
4.) Only three pages seem to exist. One of these pages is a 'disclaimer'. Webmasters put up disclaimers when they should be taking more responsibility for the content of their own website but refuse to do so. Disclaimers are a low-quality signal, and unless there are thousands of contradictory positive signals (which there are not, for this domain) then this is how this will be viewed
5.) The site has no value-add for end-users, or unique value-proposition. People can find more in-depth reviews from product critics they trust, or shallow yet more numerous reviews from review aggregators like Trust Pilot. Either way, they would be on a better site with a better value-proposition for the end user. Why would Google rank this site?
6.) Site claims to be a review site, yet does not make good usage of review schema and star-ratings for prettier SERPs. Seems more like a blog with aspirations to be a review site, which didn't quite make it. The site marks up the supposed 'reviews' with BlogPosting schema, not with review schema
7.) Content is dry with poor layout and feels boring. In most reviews no numerical evaluation is made, no star ratings are given. There's no point at which the author accepts: "I am a reviewer now, I must give an opinion, I must give something useful to the user which they could use at a glance". The unwillingness to take responsibility for giving an opinion, combined with the disclaimer which reinforces the author's 'shunning' of their own content (are they worried their own content is bad? Why are they so careful not to give or take responsibility for opinions? The images all look like stock images, are these fake reviews? Right now it feels like yes they are)
8.) It feels as if this site has been made 'for the sake of' SEO. That's not the kind of site Google wants to rank
... so as you can see, even if you tackle your poor backlinks, this site doesn't really have much hope of ranking well on Google. Google is ultimately looking for trust and a value proposition. Over the materials which Google already has indexed on their first page of results for the reviewed products, the pages on this site don't really add anything. In addition, the domain is giving off multiple off-page AND on-page mistrust signals, which will really count against it in the rankings
In this case I think you'd better head all the way back to the drawing board
Look at the video in which Maile from Google (think she's an ex-Googler now) outlines the #1 common SEO mistake as 'working without a value proposition'.
You only need to watch her outline issue #1, the rest of the video isn't that relevant to you
Also watch Moz's video on how unique content isn't good enough to rank any more:
https://moz.com/blog/why-good-unique-content-needs-to-die-whiteboard-friday
... and how you should craft 10x content to replace the prior 'plague' of 'good unique content':
https://moz.com/blog/how-to-create-10x-content-whiteboard-friday
After watching these videos, you should begin to understand why what you're doing isn't working and why it won't work
-
RE: Does Google Understand H2 As Subtitle?
Yeah, Google is perfectly able to interpret an H2 as a sub-heading. It's more of a directive than an absolute rule; for example, if you crammed loads of H2s into your footer and made them really small, Google would be able to tell that the H2 was being deployed illegitimately
In your case you seem to be using the H2 correctly. I think it gives you some space to add a little extra context to your pages, and I think that's a really good idea! I might use the space a little differently though
This is what you have:
H1: Flavour & Chidinma – 40 Yrs
H2: 40 Yrs by Flavour & Chidinma - Mp3 Download
They essentially say exactly the same thing just with the difference of "MP3 Download"
I might use the H1 more as the news heading and the H2 for the additional context of what exactly the reader will be getting
H1: Flavour and Chidinma Release 40yrs Everlasting EP
H2: 40 Yrs by Flavour & Chidinma - Mp3 Download & Video
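In markup terms that's simply (a minimal sketch):

```html
<h1>Flavour and Chidinma Release 40yrs Everlasting EP</h1>
<h2>40 Yrs by Flavour &amp; Chidinma - Mp3 Download &amp; Video</h2>
```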
I gave the page a schema scan:
Nice usage of Article schema. You could also think about using AudioObject schema for the MP3 download. Google have recently come out and said that whilst some schemas don't result in visual changes in the SERPs, they're still a good structural framework for Google to work with (in terms of contextualising information), so I always push for the maximum Schema.org implementation possible
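A minimal sketch of what an AudioObject block could look like - all the values below are placeholders, swap in the real file URL and details:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "AudioObject",
  "name": "40 Yrs - Flavour & Chidinma",
  "contentUrl": "https://example.com/downloads/40-yrs.mp3",
  "encodingFormat": "audio/mpeg",
  "duration": "PT4M10S"
}
</script>
```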
Did you know that MP3 files also contain their own Meta data, inside of the file? You can inspect and modify the Meta data with industry-standard audio-editing software, or simple applications such as MP3Tag
This is what your MP3 file looks like in terms of the internal MP3-tagging Meta:
Screenshot: https://d.pr/i/iUxtv0.png
I have boxed in red the field "Album Artist", which has not been filled out. Most media players and media apps actually categorise music into artists by the "Album Artist" field and not by the "Artist" field (makes no sense, I know!)
You might consider copying the Artist text into the Album Artist field and re-saving the file, then re-uploading it. There are a lot of sites that illegally rip music and upload it in hopes of search rankings and ad-revenue. Much of the time, those sites fail to correctly fill out their MP3 file Meta (sometimes everything is 100% blank) and that's often a piracy signal
I don't think that's what you're doing, but it might pay to verify you have correctly amended the MP3 Meta before uploading the files to your site (especially as a UX thing - if people download the MP3 and then can't find it on their media player, it won't get many listens)
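If you'd rather script it than click through Mp3Tag, something along these lines should work, assuming ffmpeg is installed (filenames and the artist value are placeholders):

```bash
# Copies the audio untouched and writes the missing "Album Artist" tag
ffmpeg -i 40-yrs.mp3 -map_metadata 0 -metadata album_artist="Flavour & Chidinma" -codec copy 40-yrs-tagged.mp3
```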
Fun track by the way, thanks for the listen
-
RE: My product category pages are not being indexed on google can someone help?
This is probably more of a ranking authority problem, rather than an indexation problem. If you can force Google to render one of your category URLs within its search results, then it's highly likely the page is indeed indexed (it's just not ranking very well for associated keywords)
Follow this link:
https://www.google.co.uk/search?q=site%3Askirtinguk.com%2Fproduct-category%2Fmdf-skirting-board%2F
As you can see, the category URL which you referenced is indexed. Google can render it within their search results!
Although Google know the page exists and it is in their index, they don't bother to keep a cache of the URL: http://webcache.googleusercontent.com/search?q=cache:https%3A%2F%2Fwww.skirtinguk.com%2Fproduct-category%2Fmdf-skirting-board%2F
This probably means that they don't think many people use the page or that it is of low value.
What you have to keep in mind is that lower-value long-tail terms (like product keywords or part number keywords) are much easier to rank for. Category terms are worth more in terms of search volume, so competition for them is higher. If your site ranks for product terms but not for category terms, it probably means your authority and / or trust metrics (as well as UX metrics) may be lower. Remember: Google don't consider their ranking results to be a space to advertise lots of companies. They want to render the best results possible for the end-user (that way people keep 'Googling' and Google continue to leverage revenue from Google AdWords etc)
Let's look at your site's domain-level metrics and see if they paint a picture of an 'authoritative' site which should be ranking for such terms...
Domain Level Metrics from Moz
Domain Authority: 24 (low)
Total Inbound Links: 1,200+
Total Referring Domains (much more important than total link count!): 123 - This is too many links from too few domains IMO
Ranking keywords: 38
Domain Level Metrics from Ahrefs
Homepage URL Rating: 11 (very low)
Domain Rating: 11 (very low)
Total Inbound Links: 2,110+
Referring Domains: 149 - Again, the disparity here could be causing problems! Not a diverse backlink profile
Ranking Keywords: 374 (Ahrefs usually finds more, go with this figure)
SEO Traffic Insights: Between 250 and 380 visits (from SEO) a day on average, not much traffic at all from SEO before November 2016 when things improved significantly
SEMRush Traffic Insights (to compare against Ahrefs): Estimates between 100 and 150 visits from SEO per day. This is narrowed to UK only though. Seems to tally with what Ahrefs is saying, the Ahrefs data is probably more accurate
Domain Level Metrics from Majestic SEO
Trust Flow: 5 - This is extremely low and really bad! Basically Majestic track the number of clicks from a seed set of trusted sites, to your site. A low number (it's on a scale of 0 to 100 I think) indicates that trustworthy seed sites aren't linking to you, or that where you are linked - people avoid clicking a link to your site (or visiting it)
Citation Flow: 24 - low but not awful
What do I get from all of this info?
I don't think your site is doing enough digital PR, or making 'enough of a difference to the web' to rank highly for category related terms. Certainly the site looks very drab and 'cookie-cutter' in terms of the template. It doesn't instil a sense of pride in the business behind the website. That can put people off linking to you, which can cause your SEO authority to fall flat on its face leaving you with no ranking power.
A lot of the product images look as if they are fake, which probably isn't helping. They actually look a lot like ads, which often look a bit cartoony or CGI-generated, with a balance between blue and white (colour deployment). Maybe they're being misinterpreted as spam due to Google PLA (Page Layout Algorithm). Design is not helping you out at all, I am afraid!
So who is ranking for MDF skirting board? The top non-PPC (ad-based) result on Google.co.uk is this one:
https://skirtingboardsdirect.com/products/category/mdf-skirting-boards/
Ok so their content is better and deeper than yours (bullet-pointed specs or stats often imply 'granular' content to Google, which Google really likes - your content is just one solid paragraph). Overall though, I'd actually say their design is awful! It's worse than the design of your site (so maybe design isn't such a big factor here after all).
Let's compare some top-line SEO authority metrics on your site against those earned by this competitor
- Domain Authority from Moz: 24
- Referring Domains from Moz: 123
- Ahrefs Homepage URL Rating: 11
- Ahrefs Domain Rating: 11
- Ahrefs Referring Domains: 149
- Majestic SEO Trust Flow: 5
- Majestic SEO Citation Flow: 24
Now the other site...
- Domain Authority from Moz: 33 (+9)
- Referring Domains from Moz: 464 (+341)
- Ahrefs Homepage URL Rating: 31 (+20)
- Ahrefs Domain Rating: 65 (+54)
- Ahrefs Referring Domains: 265 (+116)
- Majestic SEO Trust Flow: 29 (+24)
- Majestic SEO Citation Flow: 30 (+6)
They beat you in all the important areas! That's not good.
Your category-level URLs aren't Meta no-indexed, or blocked in the robots.txt file. Since we have found evidence that Google are in fact indexing your category-level URLs, it's actually a ranking / authority problem, cleverly disguised as an indexation issue (I can see why you assumed that). These pages aren't good enough to rank for keywords which Google know hold lucrative financial value. Only the better sites (or the more authoritative ones) will rank there
A main competitor has similar design standards but has slightly deeper content and much more SEO authority than you do. The same is probably true for other competing sites. In SEO, you have to fight to maintain your positions. Sitting back is equivalent to begging your competitors to steal all of your traffic...
Hope this analysis helps!
-
RE: If website users don't accept GDPR cookie consent, does that prevent GA-GTM from tracking pageviews and any traffic from that user that would cause significant traffic decreases?
This is a common and over-zealous implementation of GDPR tracking compliance. Lots of people have lost lots of data, by going slightly overboard in a similar way. Basically you have taken GDPR compliance too far!
GDPR is supposed to protect the user's data, but in terms of - is there a 1 or a 0 in a box in an SQL database for whether an anonymous user visited your site or not (traffic data, not belonging to the user) - it's actually fine to track that (in most instances) without consent. Why? Because the data cannot be used to identify the user, ergo it's your website data and not the user's user data
There used to be a GA hack which Google patched, which forced GA to render IP addresses - but even before it was patched, they banned people (who were using the exploit) from GA for breaking ToS. That kind of data (PII / PID), unless you have specifically set something up through event tracking that records sensitive stuff - just shouldn't even be in Google Analytics at all (and if you do have data like that in your GA, you may be breaking Google's ToS depending upon deployment)
If the data which you will be storing (data controller rules apply) or sending to a 3rd party to store (in which case you are only the data processor and they are the data controller) does not contain PID (personally identifiable data - e.g: email addresses, physical addresses, first and last names, phone numbers etc) - then it's not really covered by GDPR. If you can say that these users have an interest in your business and show that a portion of them transact regularly, you're even less at risk of breaking GDPR compliance
If you're worried about cookie stuff:
"Note: gtag.js and analytics.js do not require setting cookies to transmit data to Google Analytics."
It's possible that, with some advanced features switched on (like remarketing-related stuff), this might change. But by default at least, it seems as if Google themselves are saying that the transmission of data and the deploying of any cookies are not related to each other, and that the scripts can send data to GA just fine without cookies
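As a minimal sketch (assuming the standard analytics.js loader is already on the page; UA-XXXXXXX-Y is a placeholder property ID, and without cookies you'd need to handle the clientId yourself for sessions to stitch together):

```html
<script>
  // analytics.js created with storage disabled: no cookies are set,
  // but hits are still transmitted to Google Analytics
  ga('create', 'UA-XXXXXXX-Y', 'auto', { 'storage': 'none' });
  ga('send', 'pageview');
</script>
```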
If you are not even tracking basic, page-view-level data - which is not the user's data (not PII / PID) - then you are over-applying GDPR. The reason there aren't loads of people moaning about this problem is that it's only a problem for the minority of people who have accidentally over-applied GDPR compliance. As such it's not a problem for others, so there's no outcry
There's lots more info here: https://www.blastam.com/blog/gdpr-need-consent-for-google-analytics-tracking
"This direction is quite clear. If you have enabled Advertising features in Google Analytics, then you need consent from the EU citizen first. Google defines ‘Advertising features’ as:
- Remarketing with Google Analytics.
- Google Display Network Impression Reporting.
- Google Analytics Demographics and Interest Reporting.
- Integrated services that require Google Analytics to collect data for advertising purposes, including the collection of data via advertising cookies and identifiers."
If you aren't using most, many or any of the advanced advertising features, your implementation is likely to be way too aggressive. Even if you are using those advanced features, you only need consent for those elements and specifically where they apply and transmit data. A broad-brush ban on transmitting all GA data is thoroughly overkill
Think about proposing a more granular, more detailed approach. Yes it will likely need some custom dev time to get it right and it could be costly, but the benefit is not throwing away all your future data for absolutely no reason at all
Don't forget that, as the data 'storer' (controller), a lot of the burden is actually on Google's side
Read more here: https://privacy.google.com/businesses/compliance/#!?modal_active=none
Hope this helps
-
RE: Why My Domain Authority Dropped
See my relevant answer to an older, similar question here:
https://moz.com/community/q/why-did-my-site-s-da-just-drop-by-50
"Keep in mind that PageRank (which is used by Google to weight the popularity and authority of web pages, yes it's still true even after the little toolbars got deleted) does not read, utilise or rely upon Moz's DA metric in any way shape or form. DA is a 'shadow metric'. Since Google took away our view of the simplified PageRank algorithm (TBPR - ToolBar PageRank, which is dead) people working in SEO needed a new metric to evaluate the quality and ranking potential of web pages
Moz stepped in to supply this (in the form of PA / DA) but Google still use PageRank (they just don't show it to us any more). Whilst Moz's PA/DA metrics are a good 'indicator' of success, Google isn't using them (at all) and so they do not directly affect your rankings"
Only someone from Moz can confirm why your DA dropped, but it may have dropped for a reason that wouldn't impact or affect Google rankings in the slightest (so don't panic yet!)
-
RE: What would be causing our linking domains and inbound links to decline?
This is a really good answer.
OP also needs to check the data they are looking at. Is it link growth data, or actual static link data? Some charts make it look as if your links are disappearing, when what they are really saying is that fewer domains are 'creating' links to you over time (aka your link growth is slowing)
If OP is sure that their actual links are shrinking over time, Steve gave great answers
Here are some others:
- People re-designing their websites and streamlining their content, some links get removed as some old content (which may contain links) doesn't make it onto the new site
- People killing their own content even if it's not part of a re-design, removing old blog posts etc (which may contain links)
- People un-linking their outbound links to insulate their own PageRank better, which leaves you with un-linked citations
- People adding no-follows to their links. These links should still be detected, but they won't count to your SEO any more
- People blocking the indexation of content that contains links (e.g: putting Meta no-index and / or robots.txt blocks on blog posts which contain links) as a risk nullification measure
- People moving their site from one domain to another. The new links from the new site should be found eventually, but often there's a trough where a backlink tool will see the old site is gone but it won't have found the new site yet!
- More people opting out of having their site crawled by backlink data suppliers (e.g: blocking rogerbot, Moz's crawler in robots.txt)
-
RE: Google Image Search - Is there a way to influence the related icons at the top of the image search results?
Yes there is, in fact there's a way to influence ALL of the images which are displayed, but it's usually costly and time-intensive
For example, look at this Google search query:
https://www.google.com/search?q=frozen&tbm=isch
... this used to contain loads of pictures of frozen foods and frozen landscapes. Now it's all about a Disney movie! Another good query is "Matrix" which (for image results) used to be very technical, but for over a decade it's been dominated by the Matrix movie franchise
If you create such an online storm, that you 'become' the trend, you can 'take over' Google's image results. Sometimes this only lasts a short while, sometimes it lasts well over 10 years
The 'related' images that run along the top (which can sometimes be derivatives, e.g: 'related movies', or instead search-narrowing facilities, e.g: 'frozen foods' as opposed to the generic 'frozen' results) can be influenced. Usually the related images are 'runner up' trends that didn't quite manage to dominate Google's results, yet which still count as distinct and highly popular search entities
This one is quite a good example: https://www.google.com/search?tbm=isch&q=automobile - there are related images for specific vehicles, titans in the automobile industry (Henry Ford) / historic, even stuff like 'vector' which covers digital automobile art
Your best bet at influencing which things appear along the top, is to influence which commonly-related pictures people ALSO search for when they use Google. Unfortunately, that's not easy at all and often involves colossal production and / or marketing budgets which extend offline in a big way
-
RE: Canonical: Same content but different countries
Basically the canonical tags should self-reference, so long as they are also supported by hreflangs.
So for example, if you had these two URLs - site.com/en/category/product and site.com/fr/category/product - then each page needs its own self-referencing canonical plus a reciprocal pair of hreflang tags
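Along these lines (placeholder URLs, adapt to the real pages):

```html
<!-- On site.com/en/category/product -->
<link rel="canonical" href="https://site.com/en/category/product" />
<link rel="alternate" hreflang="en" href="https://site.com/en/category/product" />
<link rel="alternate" hreflang="fr" href="https://site.com/fr/category/product" />

<!-- ... and on site.com/fr/category/product -->
<link rel="canonical" href="https://site.com/fr/category/product" />
<link rel="alternate" hreflang="fr" href="https://site.com/fr/category/product" />
<link rel="alternate" hreflang="en" href="https://site.com/en/category/product" />
```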
It's pretty simple really! Remember, only canonical URLs (usually not parameter-based child URLs) should self-reference with a canonical tag. Remember that Hreflangs need to be mutually agreed between pages for them to work (so if the FR page links to the EN page with a hreflang, but there's no hreflang coming back - it fails!) - Keep hreflangs simple and exactly symmetrical
-
RE: Someone redirected his website to ours
They can sometimes be harmful, yeah. Disavow the domain in Google's disavow tool. Remember to download the existing disavow file and add your new entries on, otherwise you might undo some previous work. The file you upload doesn't get 'added' to what you have submitted previously; whatever you upload IS the complete file (be wary)
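For reference, a minimal sketch of what the disavow file entries look like (example-redirecting-domain.com is a placeholder):

```text
# Existing entries from the previously submitted file stay here
# New entry: the domain that is redirecting itself at us
domain:example-redirecting-domain.com
```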
Other than that, I'd just listen for any traffic from the domain and 301 redirect it somewhere else. The problem you'll get is that if you 301 it back to them, their redirect will pass back to you and you'll get an inter-domain redirect loop; I don't know what the consequences of that are. You could just 301 redirect traffic and negative equity from that site to someone you don't like, no? Ok, maybe a bit too volatile and thermonuclear
In all seriousness the best thing to do is disavow and code your server to refuse to serve anything to any requests processed from that domain (be they crawlers or users). Just shut it down and disavow it, that's what I'd do. Redirect-wars are seldom beneficial
-
RE: Is it ok to repeat a (focus) keyword used on a previous page, on a new page?
The pages will compete against each other under normal circumstances, but that's not necessarily an awful thing. For example, maybe your older page only achieved positions 16-30 for the keyword, but the new page might achieve a higher ranking. Unless you pit them against each other, how will you know what's best?
Stopping newer pages competing for old rankings doesn't give a magical bonus to the old page and make it rank higher. Unless you're absolutely certain that the old page should be the 'definitive' landing page for the keyword, a bit of friendly competition doesn't usually hurt much
The pages which really contend for your rankings are those from other websites. Good luck emailing all those webmasters and complaining at them that they are using your keywords
Sometimes, under very specific circumstances, keyword cannibalisation can come into play and cause problems. But 90% of the time it's just not really that big of a deal
The big deal is that if you write loads of pages with the same focus keyword, you're NOT writing about new keywords. And if you're not doing that, how will you increase your footprint? Often it's more lucrative to cover other, newer material rather than re-hashing old stuff
The worst you tend to get are rankings that stay largely in the same place, but their ranking URL jumps around as Google tries to decide which page to rank (and then eventually settles on one)
IMO, the worst part about keyword cannibalisation is not the fall-out from it (which is usually minimal) - it's the WASTED time, in terms of getting onto new topics to attract new visitors. Always be expanding