You can call me James, nice to meet you Thomas
Posts made by effectdigital
-
RE: Quality Links From Web Directory Usefull ??
-
RE: Tens of duplicate homepages indexed and blocked later: How to remove from Google cache?
It is assuredly true that, just as in any number of other fields (medicine, for one), in SEO prevention is better than cleanup-based methodology. If your website doesn't take its medicine, you get problems like this one
I think your advice here was really good
-
RE: Move to new domain using Canonical Tag
This is exactly right and is a great answer. Canonical tags stop content duplication from being a problem and can alleviate content duplication related devaluations (or in extreme cases, penalties)
What canonical tags don't do anywhere near so well (if at all) is transfer SEO authority from one page to another. If OP did what they were suggesting, the risks would be: (1) Google misinterprets the canonical tags; (2) Google starts ranking pages on the new site instead of the old pages, but (critically) without any of the backlink equity carried across; (3) all rankings are then lost on both sites
I'd be extremely, extremely hesitant to deploy in the OP's specified manner and I think that Nigel is 110% correct here
-
RE: Canonical - unexpected page ranking
Full support for this answer
-
RE: Very wierd pages. 2900 403 errors in page crawl for a site that only has 140 pages.
This is almost assuredly a link-based architectural error. It will be something similar to this:
- You load a page on EN
- You click the EN flag or language icon
- Instead of just reloading the page you are already on (since you're already on EN) the link is coded wrong and adds another /EN/ layer to the URL
- Once the new URL loads, the problem can be repeated
- This creates an infinite number of URLs on your site
- Bad for Google, and Moz's crawler
Bet you it's something like that. If you give me the exact URL I might even be able to find the flaw and detail it for you via email or something
-
RE: Tens of duplicate homepages indexed and blocked later: How to remove from Google cache?
It's likely that you don't have access to edit the coding on these weird plugin URLs. As such, normal techniques like using a Meta no-index tag in the HTML may be non-viable.
You could use the HTTP header (server level stuff) to help you out. I'd advise adding two strong directives to the afflicted URLs through the HTTP header so that Google gets the message:
-
Use the X-Robots deployment of the no-index directive on the affected URLs, at the HTTP header (not the HTML) level. That linked page tells you about the normal HTML implementation, but also about the X-Robots implementation, which is the one you need (scroll down a bit)
-
Serve status code 410 (gone) on the affected URLs
That should prompt Google to de-index those pages. Once they are de-indexed, you can use robots.txt to block Google from crawling such URLs in the future (which will stop the problem happening again!)
It's important to de-index the URLs before you do any robots.txt stuff. If Google can't crawl the affected URLs, it can't find the info (in the HTTP header) to know that it should de-index those pages
Once Google is blocked from both indexing and crawling these pages, they should begin to stop caching them too
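If it helps, here's a rough sketch of what I mean in .htaccess terms (assuming an Apache server with mod_headers, mod_setenvif and mod_rewrite available - the /plugin-junk/ path is just a made-up placeholder for whatever pattern your weird plugin URLs actually follow):

```apache
# Rough sketch only - swap /plugin-junk/ for your real URL pattern

# 1) Fire the no-index directive from the HTTP header (X-Robots-Tag)
<IfModule mod_headers.c>
  SetEnvIf Request_URI "^/plugin-junk/" NOINDEX_URL
  Header always set X-Robots-Tag "noindex" env=NOINDEX_URL
</IfModule>

# 2) Serve status code 410 (Gone) on the same URLs
<IfModule mod_rewrite.c>
  RewriteEngine On
  RewriteRule ^plugin-junk/ - [G]
</IfModule>

# 3) Only AFTER Google has dropped the URLs from its index, block crawling
#    in robots.txt:
#      User-agent: *
#      Disallow: /plugin-junk/
```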
Hope that helps
-
-
RE: Canonical and Alternate Advice
The self referencing canonical advice was solid and I 100% agree with it. The rel=alternate advice, I felt, would cause problems. But as we all know, fiddly issues like this are highly subjective
-
RE: Tens of duplicate homepages indexed and blocked later: How to remove from Google cache?
+1 for "Make sure that they are not created in the first place" haha
-
RE: Canonical and Alternate Advice
Your problem is that you have two different sites loading on the same URL. If you are returning both the mobile and the desktop / laptop site on the same URL, you would normally be expected to be using responsive design. In fact, you may have re-invented a different way to implement responsive design, which is probably slightly less fluid yet slightly more efficient :')
Since your mobile and desktop pages both reside on exactly the same URL, I'd test the page(s) with this tool (the mobile friendly tool) and this tool (the page-speed insights tool). If Google correctly views your site as mobile friendly, and if within PageSpeed Insights Google is correctly differentiating between the mobile and desktop site versions (check the mobile and desktop tabs), then each URL should canonical to itself (self-referencing canonical) and no alternate tag should be used or deployed. Google will misread an alternate tag which points to the page itself as an error. That tag is to be used when your separate mobile site (page) exists on a separate URL, like an 'm.' subdomain or something like that
Imagine you are Googlebot. You are crawling in desktop mode, load the desktop URL version and find that the page says, it (itself) is also the mobile page. You'd get really confused
Check to see whether your implementation is even supported by Google using the tools I linked you to. If it is, then just use self referencing canonical tags and do not deploy alternate tags (which would make no sense, since both versions of the site are on the same URL). When people build responsive sites (same source code on the same URL, but it's adaptive CSS which re-organises the contents of the page based upon viewport widths) - they don't use alternate tags, only canonicals
Since your situation is more similar to responsive design (from a crawling perspective) than it is to separate mobile site design, drop the alt
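To make that concrete, here's a quick sketch (the URL below is just a placeholder) of what the head of each page would carry in your setup:

```html
<!-- Sketch only - https://www.example.com/page/ is a placeholder URL. Both the
     mobile and desktop renderings live on this same URL, so the page simply
     points its canonical at itself: -->
<link rel="canonical" href="https://www.example.com/page/" />

<!-- You would NOT add a rel="alternate" media tag like the one below - that is
     only for when a separate mobile page lives on its own URL (e.g. an m. subdomain):
<link rel="alternate" media="only screen and (max-width: 640px)"
      href="https://m.example.com/page/" />
-->
```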
-
RE: Canonical and Alternate Advice
The problem with this is, where you say "corresponding mobile URL", there isn't one: OP has stated that two different source codes (pages) can be rendered on the same URL depending upon the user's screen size / user-agent (however they are detecting mobile and serving different pages)
-
RE: Multilingual Sitewide Links
Without any indicators that Google 'do' think the links are spammy, I wouldn't worry about this too much. If you start to notice performance issues which you can isolate to these footer links, then I'd no-follow them right away
Usually site-wide links are only an issue between different domains, and even then - only if it's not a multi-domain site. A multi-domain site is usually where you have exactly the same site with linguistic differences, spread across multiple domains (so instead of having site.com/fr/ and site.com/en/, you have site.fr and site.co.uk). As long as the templates are highly, highly similar and Google begins linking the 'brand-entity' across those sites, there shouldn't be a problem
Lots of sitewide links placed in footers across the web (cross-domain) are paid-for links to manipulate SEO rankings. Those are bad. If your links are 'editorial' in nature (e.g: the site owner or editor decided they were required for user benefit) then I wouldn't be so concerned. There's always the chance Google's algorithm could get it wrong, and you could eventually have a problem
What you need to decide is, would you rather have some small performance issues now (by removing the links or no-following them) and prevent any further 'possible' action in the future? Or would you rather take a small risk and keep your results solid? No one 100% knows how Google's algorithm(s) work (not even Googlers). As such, there are elements of chance at play here and only you can decide what you are happy with:
A) Undo or no-follow the links now for a high chance of mild devaluation now and some affected results, but it will almost 100% stop any site-wide linking penalty (which could wipe out all results) from occurring. The damage of that would be devastating, but the chance of it occurring in the first place is low
B) Leave the links as they are. Experience no mild devaluations or performance issues at all, for now. But possibly in the future, you get struck with a penalty and lose everything. The chances of that seem very low, but if it does happen... ouch
Sometimes both your choices are less than ideal. But you still have to choose! If it were me, I think (with the information which you have supplied thus far) I'd leave it alone for now (but watch performance like a hawk)
-
RE: Multilingual Sitewide Links
If your own links are being interpreted as link-spam and causing problems, then yes I am certain. If however your suspicions in that area are incorrect, then no it would be a bad idea. It depends upon your confidence in your evaluation of the situation at hand
Without evidence (performance impacts) that these links are harming you, I'd hold back. In which case, you can just leave them as they are and there's no need for 'any' action (this question becomes moot)
I assumed that your reason for wanting to 'eliminate these links' was that you feared the SEO repercussions of leaving them (link spam). If you do feel that they are harming your site from an SEO POV, then yes - no-follow them across the board. However if your assumption in that area is wrong, you could see problems (so think hard on it!)
-
RE: Domain Masking SEO Impact
This is a good idea, but Robots.txt stops pages being crawled - it doesn't stop pages being indexed. For that you need to fire the Meta No-Index directive on the affected URLs. If you can't edit their code you can fire the same directive through the HTTP header via X-Robots. On that linked post, you'll need to scroll down a little. If possible you could also alter those URLs to serve status code 410 (gone) so that Google knows, those URLs aren't really on your site
Note that you'll need to make the changes on the 'affected' site, not the site which is the 'source' of the masked pages / data. If you make the changes there, that site will have all the Google traffic killed as well (and they'll probably want to punch you!)
I recommend that you lead with hard signals and directives which stop Google indexing the pages on the 'affected' site (which is receiving the masked URLs / content and doesn't want them to rank). Once the pages fall out of Google's index, then you swoop in behind and put the robots.txt stuff in to stop them ever coming back
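For that final step (only once the pages have actually dropped out of the index), the robots.txt addition on the 'affected' site would look something like this - the /masked-content/ path is made up, so use whatever pattern the masked URLs actually follow:

```
# robots.txt on the 'affected' site - only add this AFTER the URLs are
# de-indexed, otherwise Google can never re-crawl them to see the noindex / 410
User-agent: *
Disallow: /masked-content/
```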
-
RE: Is there a Risk Around Creating a Website for Each Country in The World?
Unfortunately yes. We have a number of clients who went 'geo-mad' and in almost all situations, it has caused problems for them. Sometimes it has created colossal site footprints which Google doesn't care to index (unless you're a household name, don't expect Google to care about your hundreds of thousands of URLs). Sometimes that has also caused server-load issues for them too, irrespective of Google
Other issues include Google ignoring their canonical tags and setting one language URL as the 'canonical' result (and thus de-indexing the other language URLs). This can happen due to link signals and similar content, stuff like that
Many clients in such a position have **seen their pages devalued as a result of going against Google's content guidelines** (and simplicity guidelines). If you're not super important, Google don't want to waste 4x, 6x or 20x crawl budget on your site just because you decide to serve in more combinations of language and geo-location. Even with perfect Hreflang deployments, a lot can go wrong if you go nutty, so cherry-pick your language/geo combinations and don't be greedy with it
If your brand is powerful online and you have loads of SEO authority / ranking power, then you can deploy hreflangs extensively and usually you can make real gains. Not everyone is in that position, most aren't
Having unique content (not powered by some crappy auto-translate plugin) per deployment is strongly, strongly recommended. By the way, if you have less ranking power than most sites which have 'successful' broad-reaching hreflang deployments, you need to adhere to Google's guidelines more strictly than those sites do. You need to make up for your lack of trust and authority by doing things by the book
Too many people look at big international sites and say: "well Google lets them use relatively thin content so I should be alright too". Nope, you are likely standing upon a platform of radically different stature to those guys, so don't over-reach too quickly or you'll stumble and fall
Also if you are planning to use canonical tags to 'canonical' from one language to another, don't do that. If a page points to another, separate URL with its canonical tag, it tells Google that it (the active page) is the non-canonical version, and it will usually be de-indexed
Be very careful how you proceed. If you increase your footprint too far, all the great authority you have built up may bleed out over a sprawling site and you could end up with nothing
-
RE: Local Search filter or penalty
Unfortunately Google are far less clear and forthcoming about their GMB stuff. If it were a general website penalty, I could be of more help. Sadly with GMB, it's been through so many re-builds and revisions (Google Local, Google Places, Google My Business) that I reckon even someone from Google would have a hard time telling you what you should do
I guess the way to fix any devaluation is to adhere to Google's guidelines. Have an address. Don't put your address in your Business name. That being said... I am fairly certain that the algos monitoring GMB are way less sophisticated than those monitoring mainline search results. You could do everything right and still never see those results return to how they were before
It could be to do with having an address that isn't really an address, or the keyword stuffing. There are some generic steps here which Google lists to improve your 'local rankings'. Maybe if you do stuff that you hadn't done to begin with (verify your location, respond to more reviews than you did before with better - fuller answers, add a load more photos of higher quality) then you might be able to tip the see-saw back
I managed to find a Google My Business help / contact form you could fill out and send to Google here. You could also post on the official Google My Business community here. Yeah I know, it's not a Google.com URL but I found the link when browsing something related over here, so I think it's legit.
-
RE: Local Search filter or penalty
Google's official guidelines can be found here.
Looking around the web, we can find a plethora of resources which show that Google (and leading marketers) explicitly state that keyword stuffing your business name (with a location) is bad
This is Google's official word on the matter:
"Business information
Name
Your name should reflect your business’ real-world name, as used consistently on your storefront, website, stationery, and as known to customers.
Any additional information, when relevant, can be included in other sections of your business information (e.g., "Address", "Categories"). Adding unnecessary information to your name (e.g., "Google Inc. - Mountain View Corporate Headquarters" instead of "Google") by including marketing taglines, store codes, special characters, hours or closed/open status, phone numbers, website URLs, service/product information, location/address or directions, or containment information (e.g. "Chase ATM in Duane Reade") is not permitted."
(sources here and here, where it is 'quoted' - assumedly 'from' Google - and obviously here where Google state this explicitly [ctrl+F for "not permitted"])
Read 'marketing taglines' as 'keywords' I guess!
If you actually had a manual penalty and suspension, you'd likely see something like this:
- https://d.pr/i/ZFPJgz.png (screenshot)
What is probably happening is some kind of listing 'devaluation', it could potentially be linked to keyword stuffing but I can't confirm that. It could just be that (unfortunately for you) Google have decided to take a harsher stance against businesses that 'can't' manage to have a defined address
-
RE: Data that shows people who click on paid vs organic listings
I found this which someone at 'Smart Insights' compiled in early 2018:
If you Ctrl+F for "Click through rates (CTR) for PPC per Industries" and read from there, they seem to have taken some data from Wordstream which IMO is a trusted source
That **led me to find this, which looks pretty interesting:** https://www.wordstream.com/average-ctr
Sorry if this is no help. Just trying to find some information :')
-
RE: Multilingual Sitewide Links
There actually is! If you're worried that Google might see the links as 'manipulative' but you still need them for UX, then all you have to do is inject the individual links (in your footer / template) with rel="nofollow". Google will then discount the links from their algorithm
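As a quick sketch (the URLs and anchor text below are placeholders), the footer links would end up looking something like this:

```html
<!-- Each footer link carries rel="nofollow", so it stays fully usable for
     visitors but is discounted from Google's link-based calculations -->
<footer>
  <a href="https://example.fr/" rel="nofollow">Site français</a>
  <a href="https://example.de/" rel="nofollow">Deutsche Seite</a>
</footer>
```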
Note that if you are wrong and Google sees the links as valid and they are helping all your sites interconnect better (in terms of SEO authority) - then you could see some tail-off. Hope this helps
-
RE: Site migration/ CMS/domain site structure change-no access to search console
If your architecture is changing (e.g: from non-www to www, then from HTTP to HTTPS), just be careful that your developer's logic doesn't start 'stacking' redirect rules
You want to avoid this:
A) user requests http://oldsite.com/category/information
B) 301 Redirect to - http://newsite.com/category/information
C) 301 Redirect to - https://newsite.com/category/information
D) 301 Redirect to - https://www.newsite.com/category/information
Keep your redirects **strictly origin to final destination**, and you'll probably be OK! In the case of my example, the redirect should go straight from A to D, not from A to B to C to D (hope that makes sense)
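If it helps, here's a rough .htaccess-style sketch (Apache mod_rewrite; the domain is a placeholder) of a rule that sends everything straight to the final destination in one hop, instead of stacking non-www / HTTP / HTTPS rules on top of each other:

```apache
RewriteEngine On
# Anything not already on https://www.newsite.com gets ONE 301 straight there,
# whether the request came in over HTTP, on non-www, or on the old hostname
RewriteCond %{HTTPS} off [OR]
RewriteCond %{HTTP_HOST} !^www\.newsite\.com$ [NC]
RewriteRule ^(.*)$ https://www.newsite.com/$1 [R=301,L]
```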
Install this Chrome extension so that you can see redirect paths in your Chrome extension buttons menu. It's very, very handy for testing redirects
-
RE: How Have You Managed GDPR?
We actually found that, whilst it required strict management in terms of file transfers, GDPR wasn't as scary as everyone said it would be
One thing we did was to sign up for Wizuda, a GDPR compliant file-transfer system (previously we just sent stuff to clients through Dropbox links, Sync.com links or WeTransfer links). It's important to note that a compliant file transfer system doesn't 'make' all your file transfers GDPR compliant. It provides a platform which records certain info and erases files past a certain date, thus 'enabling' you to be GDPR compliant (but not guaranteeing that your actions will make it so)
We also asked clients who wanted to transfer data to us to sign up to it and to send a covering note (through Wizuda mail) on every single file which they fired through to us. If they don't include the note, we delete the file and reject the transfer
The note they must send to us goes something like this:
- https://d.pr/i/tIhQBK.png (screenshot from Wizuda Mail - redacted)
We also initially got a lot of pressure whereby our Account Managers were going directly to analysts (who were, at the time, managing GDPR transfers) and trying to 'push through stuff that the client just wanted' without the client having properly proven that they owned the data and had the 'right' to transfer it to us for marketing activities. Needless to say we immediately clamped down on that with full force, by creating an interactive (digital or printable) 'fillable' PDF form which AMs 'have' to get filled in (by the client) before we accept ANY inbound data which contains any PID
- https://d.pr/i/1nkG5F.png (PDF screenshot - redacted)
Since only Account Managers have a relationship with a client and can tell them 'no you do not have permission to legally do this, and we will not support you with illegal data transfers' - it made sense to unburden those 'physically' transferring the data and leave it up to higher level AMs / ADs and clients to sort out between themselves
We have now adopted more advanced approaches but all this stuff was an integral stop-gap
This all prevented two things:
1) Us transferring data which was not GDPR compliant to clients
2) Clients being able to get us to 'work on' illegally transferred data, which would make us an accessory to their malpractice
Some think we went crazy and went way too far, but I'm pleased that we're taking more steps every day to ensure full GDPR compliance. That being said even our initial steps were really strong
The truth is, no one knows whose practices are / are not safe. Most of this GDPR stuff hasn't worked its way through the courts yet - and until that happens, who's to say which approach is most compliant? I think we're doing well, though
At the beginning we were quite scared that our email marketing would die off. But actually that's not the case! It just has much less churn than before. To be honest, the people who were targeted before GDPR came into play, who may not have given explicit permission for our client(s) to share their data, were the group who never really converted anyway. The people who signed up to be contacted, who demonstrated their interest, supplied far more of our clients' conversions. So in a way it was kind of irrelevant, it just meant we spent less on firing out emails in the first place. Most ethical, strong-performing email marketing is re-targeting, and usually users have to interact and give consent for that to happen anyway (subscribe to our newsletter, etc.)
-
RE: How to overcome Connection Timeout Status Error?
This means that Screaming Frog is not 'waiting long enough' before returning the time-out error.
Just do this:
- https://d.pr/i/D7Cj5M.png (screenshot)
- https://d.pr/i/N6xdLA.png (screenshot)
Raise that number up, until you don't get 0 / Time Out any more. Note that if it does fail a lot on moderate crawl settings, there are likely to be underlying page-speed issues (either that or the machine you are crawling from has bad bandwidth)
It could also be that your crawl 'frequency' is too high; go to Configuration->Speed and lower the thread count (to 2-3) and the URI/s (to one or two)
Finally it might be that the SF user-agent is blocked, so go to Configuration->User Agent and switch it to Chrome
**To help you** - I did the crawl for you, here is your crawl data: https://d.pr/f/a1ux4b.zip (archive of crawl file and some exports)
You can actually use my crawl file as a starting point (by double clicking it) when you want to re-crawl in future. Should be useful to you
-
RE: Site migration/ CMS/domain site structure change-no access to search console
You wanna be really careful here. From the sounds of it, you had a collection of 'web pages' under an old umbrella site (which contains loads of other stuff too) and you are 'extracting' those web pages and turning them into a new website. For most intents and purposes, a domain 'is' a website
If the old site is staying live with other stuff still on it, and only part of it is migrating - obviously you DON'T want to tell Google that the whole umbrella site is 'becoming' a much narrower site on a new domain. That's inaccurate information, and will kill off the main site's performance
Another issue. Currently your 'site section' which will become its own site is receiving SEO authority through the main domain's backlinks, transferred through the internal link structure. If the old site is staying live, most of it won't be redirected to the new 'extract' site. The internal linking from the main site will also be gone, which means a performance reset for that section of URLs is quite darn likely
There is some potential, that I got this exactly the wrong way around. Maybe you are saying that a previously external site is coming 'under' the big umbrella. That would be much easier to deal with!
In this second scenario, yes I'd recommend telling Google that one whole domain is becoming part of another domain using the domain migration tool within search console. I have seen migration projects succeed without this, but I've also seen Google's algos throw wobblies so... Yeah, I'd say do it to be safe
The old domain needs to still exist, with a hosting package - in order to perform your redirects. Redirects are handled by the .htaccess or web.config file(s) and they need hosting to live on. Without it, all your redirects will die. If you don't keep the redirects live for 6-12 months, prepare to lose some SEO authority as it won't have all translated across by then
Your new pages, regardless of whether they are on an external or internal domain, should be listed in an XML sitemap. Wherever they are moving to, that domain's XML sitemap needs to have the newly spawned URLs in
Hope that helps
-
RE: 301 Question - issue
It's probably just taking Google a while to process all the changes. Really your 301s should point to the same content, not just all go to the homepage. If you had pages showing on two sites, the pages do 'really' exist on one site but weren't supposed to exist on the other. Correct the 301s so that they point from the URLs on the affected site, to the exact same pieces of content on the site where they were originally located (where they were supposed to be located)
If that fails, use the HTTP header and X-robots (not no-index tags; fire the no-index directive from the HTTP header instead of the HTML) to tell Google not to index those URLs on the 'affected' website. In conjunction with that, alter the status code of all bogus URLs on the 'affected' site to 410, which is stronger than 404 (it means: GONE - not coming back, whereas 404 just means 'not found' and could be temporary)
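A rough sketch of what I mean, in .htaccess terms (the domains and paths below are made up placeholders):

```apache
# Point each bogus URL on the 'affected' site at the SAME piece of content on
# the correct site - not at the homepage
Redirect 301 /blog/some-article/ https://www.correct-site.com/blog/some-article/

# And for URLs that shouldn't exist anywhere, serve 410 (Gone)
<IfModule mod_rewrite.c>
  RewriteEngine On
  RewriteRule ^bogus-section/ - [G]
</IfModule>
```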
-
RE: Results After a Disavow File Submission
I gave a solid response here which is hugely likely to shed some light on your disavow predicament.
Disavow work is a preventative measure, it is not work which you should 'expect' to raise rankings.
If you didn't replace the disavowed (discredited) backlinks with decent ones, obviously you'll just go down and down
-
RE: Quality Links From Web Directory Usefull ??
This post is pretty much on the money. Directory links don't really have much SEO 'authority' value anyway, they only really help with geo-relevance. The fact of the matter is, barely anyone uses web-directories any more. Since the rise of search engines, they have become outmoded
As such, 9 times out of 10, no traffic flows through these links. Due to that, Google doesn't weight them very strongly. They can help to confirm the geographic position of your business and stuff like that, but only a select number of directories and directory-aggregators (like Central Index) are effective for that kind of thing
Like Thomas says, don't buy spammy directory links; it's not safe. Well... I guess I'd say, you can buy paid links under certain circumstances - but if they're created for advertising purposes, they must be no-followed (so that they don't manipulate SEO rankings). Due to this, you would never create paid links 'just for SEO'. You'd be creating the paid links as part of a 'referral' traffic campaign, when you were certain that the link was priced right and you'd see decent traffic through it. But you'd nullify the SEO impact, to prevent any potential issues with Google
Google only want 'editorial' (not paid for, without bias) hyperlinks, created by editors or webmasters, to alter their ranking positions. If you pay an editor to review content which you might place on their site (but don't pay for the link itself), that's a gray area where you might be alright (as long as your content wasn't spammy). That being said, if you're paying for an editorial review process, you could pay and have your content declined (so there's more risk there also)
I second Tom's opinion that 'guest posting' in its TRUE form is not dead. The problem with Google's language there is that they confused 'sponsored' (paid) posting with guest authorship. Too many people did sponsored posts and claimed (falsely, against advertising laws) that they were editorial pieces. Advertorials and sponsored posts are dead. Guest posting, in its real (editorial) format, is still very much alive and can work wonders
-
RE: Should we move old videos from 1 YouTube channel to new channel , so that old videos ranking does not get affected?
Huh, that's a very clever idea. I don't think it will work though! What you are saying is that, only a few videos on a channel sit on that channel's main feed, which is where the channel most prominently links to them. Once videos get older, they kind of 'fall off' the channel's front-page feed, and thus get less link-juice. Is that right?
The thing is, your main (first) channel has inbound links pointing to it -right? And some of those videos also have backlinks too, if they are very popular. That's what makes them rank well (that and YouTube internal metrics like comments and shares and stuff).
If you move popular videos to a new channel, their URLs will change. That means that any backlinks pointed at your video (or the channel which links to the video) will be 'cut off' from the video's new placement. As such I'd expect this move to see even worse rankings (not better ones)
It's a clever idea, I can see what you are saying - but no I wouldn't recommend it and I don't think it would work out as you had intended
Keep thinking in that way, it's smart. But on this particular occasion, no don't do it
-
RE: Is it more beneficial to use Yext rather than doing the citations manually?
This is a really good question and something with which I am at least personally experienced, in terms of having done recent research on the subject
Recently we (at Effect Digital, UK) moved office. I say recently, it was probably nearly a year ago now. Previously we had been using Moz Local, but were shocked to find that simple features like 'changing' or 'updating' our business address were not supported. The problem comes down to directory aggregators (the main offender being Central Index) and how they interact with various listing suppliers
When Moz Local failed to update our business address, we tried to contact a lot of directory-owners (and directory aggregators like Central Index) manually. Trust me when I say, the largest ones (like Central Index, which handle the majority of your directory listings across local newspaper sites - at least here in the UK) do not want to hear from you at all
We submitted contact forms, spent hours browsing to hidden contact URLs, even found a phone number for them - which rang for a minute, then forwarded to a separate disconnected number. These guys don't care about receiving updates from Moz and they don't care about your update requests. They don't even want to listen to you, for 2 minutes
The problem with Moz local, as a Moz rep told us - is that when they push data (from Moz's Local database, to the databases of aggregate listing handlers like Central Index), there is no legal agreement requiring the aggregator(s) to accept Moz's update(s). They will accept Moz Local's first set of data, but if you ever want to change anything substantial - that rapidly becomes a massive problem. Moz Local is a great way to take your first steps into local listings handling, but the solution isn't end-to-end, or rigorous enough for on-going usage
Yext is much better, from the sounds of it. Myself and another experienced colleague spent some hours on the phone to their support people who seemed qualified and able to handle our complex queries. They stated to us that, they have a more 'direct' agreement with listing aggregators which directly pushes their data over the top of everyone else's. That tells me that, either their solution works well or - if it didn't work, you'd have recourse to push them to get things sorted out. They take more responsibility and really try to make sure that their updates 'actually' end up live in aggregator databases (which is great)
We didn't end up using Yext because, as an SMB, the pricing was pretty severe. We were told that Yell basically re-sell Yext's technology to smaller businesses and that we'd have better luck with Yell. Whilst Yell may be based on the same technology, the support is woefully inferior and I'd never recommend working with Yell. We had one rep visit us to answer technical questions. He knew none of the answers to our questions, knew nothing about SEO and actually had the audacity to try and 'steer' us toward purchasing some crappy directory listing on Yell which wasn't at all what we wanted (or were interested in)
Moz is great when you're starting out, but because the 'deals' they have with data aggregators aren't 'forceful' enough, it's not a good on-going solution (Moz know this and said as much to us over the phone). It's still the best place for you to 'start' as the prices are great and it does function well, until more crucial data changes are required (at which point it falls flat on its face)
At that point you really should be going with Yext if you can afford Yext. The product is superior, the support is very strong (Moz's support is actually great too, but hindered by the product which is defective under some specific circumstances). In-between the cheap and cheerful Moz and the very expensive Yext, there's nothing good enough to fill the void (which sucks!)
You may have some luck manually adjusting some listings. But for the ones controlled by central databases which fire their data out to many directories, your only real option to have a reasonable shot at getting them changed - is assuredly Yext
-
RE: Stuctured data for different sized packages
Not a problem, hopefully it will prove useful...
-
RE: Truncated product names
If you had two different source codes served via user-agent (web-user vs googlebot) then you'd be more at risk of this. I can't categorically state that there is no risk in what you are doing, as Google operates multiple mathematical algorithms to determine when 'cloaked' content is being used - and guess what? Sometimes they go wrong
That being said, I don't believe your risk of garnering a penalty is particularly high with this type of thing
These are the guidelines:
You're in a really gray area because you aren't serving different URLs - but you _could_ be serving different content (albeit only slightly). I say 'could' rather than 'are' as it entirely depends upon whether Google (on any particular crawl) decides to enable rendered crawling or not
If Google uses rendered crawling, and they take the content from their headless-browser page-render (which they can do, but don't always choose to as it's a more intensive crawling technique) then your content is actually the same for users and search engines. If however they just do a base-source scrape (which they also do frequently) and they take the content from the source code (which doesn't contain the visual cut-off) then you are serving different content to users and search engines
Because you've got right down into a granular area where the rules may or may not apply conditionally, I wouldn't think the risk was very high. If you ever get any problems, your main roadblock will be explaining the detail of the problem on Google's Webmaster Forums here. Support can be very hit and miss
-
RE: Stuctured data for different sized packages
Wow that's quite a query. If I am understanding you right, you have this problem:
- You sell bags of stones and stuff
- They come in multiple sizes
- The user goes to the product page, selects the size - and is then presented with a price
- But because the price depends upon the user's interaction, because there are multiple product variants, Google doesn't understand your product pages very well - or the prices of your products
- This is particularly true for Google shopping
I can't say I have experienced this exact issue, as Google Shopping is one thing that I haven't had much to do with since the good old days (when it was free, and all you needed was an XML feed!)
But your basic problem is how do you mark up product 'variants' with Schema, right?
I have tried to find some resources for you on the subject:
- https://www.schemaapp.com/tips/schema-org-variable-products-productmodels-offers/ - this seems really in-depth and helpful. Suggest giving it a read
- https://schema.org/ProductModel - Product models seem like a concept you'd need to know about
- https://schema.org/isVariantOf - this seems to be a symmetrical schema, going from variant to master (also something you'd need to know about)
From a top-line check, it seems that you need to establish product models and variants. The model seems to be the master 'thing' that has children, whilst the variant seems to be one of the children (makes sense I guess)
I'd try to get as close to those materials as possible, then debug with Google's official structured data testing tool (until everything is perfectly digested...)
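As a very rough, hypothetical JSON-LD sketch (the product names, prices and currency below are made up - this is only to show the ProductModel / isVariantOf shape, so do run your real markup through the testing tool):

```json
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Decorative Stones",
  "model": [
    {
      "@type": "ProductModel",
      "name": "Decorative Stones - 25kg bag",
      "isVariantOf": { "@type": "ProductModel", "name": "Decorative Stones" },
      "offers": {
        "@type": "Offer",
        "price": "9.99",
        "priceCurrency": "GBP",
        "availability": "https://schema.org/InStock"
      }
    },
    {
      "@type": "ProductModel",
      "name": "Decorative Stones - Bulk bag",
      "isVariantOf": { "@type": "ProductModel", "name": "Decorative Stones" },
      "offers": {
        "@type": "Offer",
        "price": "89.99",
        "priceCurrency": "GBP",
        "availability": "https://schema.org/InStock"
      }
    }
  ]
}
```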
-
RE: Apart from spying on competitors back link what else can be done in MOZ?
Another usage is URL benchmarking but it requires additional tools like Netpeak Checker or URL Profiler (my preferred one). Say you were doing a migration from one domain to another and also changing your site's design (and maybe shedding some content too). In such migration projects, it's almost a requirement to benchmark the 'SEO value' of each individual URL
Why? Well you may have tens-of-thousands of URLs to migrate, and usually you won't have enough time to do those all manually in 1-1 (A-to-B) terms. As such, if you can demonstrate that large volumes of URLs hold little or no SEO value, those can just be stuck under blanket redirects (leaving you with abundant time to concentrate on high-value pages)
Because different backlink index suppliers (like Moz, Ahrefs and Majestic SEO) crawl the web in different ways, some ascribe value to pages which the others haven't even found. As such I'd use Moz's Links Explorer data as one input, but I wouldn't ignore the others (using just one tool can be really misleading)
I take all the metrics from all the tools, boil them down in a formula and work out what 'really needs doing'. By the way, you can use a similar technique to evaluate all your backlinks if you get a Penguin (link) penalty notification (ready for your disavow)
This is how I approach the disavow stuff:
- https://d.pr/i/o4GM8p.png (screenshot)
If you're willing to pull Moz's data off of Moz, onto a central platform (Excel, Google Sheets, Google Data Studio) where you can manipulate the data and 'normalise' or balance it to your liking, it can be absolutely invaluable. So there's a lot you can do with Moz's data, you just need to get stuck in with it
- Don't use just one data-source
- Have an expert compile the data and draw out insights
- Learn more than you did before
-
RE: Meta robots
I am pretty sure that's not how Meta robots tags work. If you fail to specify a directive, search engines assume they are allowed to index by default. By the way, search engines do not index pages which they don't think users will like or be interested in. Just because a search engine 'can' index a URL, that doesn't mean it will!
Follow directives and index directives actually operate on two entirely different sub-sets of data. Follow / nofollow directives are link-level (meaning they apply only to the hyperlinks on a page, not to the page itself). Index / no-index directives are page-level, and apply to the entire page upon which they are situated
Due to this, I don't believe they could or would interfere with each other in the way you described
Interesting experiment though. To test, I'd recommend adding index instead of removing follow. If that doesn't make any kind of difference, it's not the issue
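For reference, a minimal illustration of how the directives and defaults interact (these snippets are just examples):

```html
<!-- These two are treated the same, because "index" and "follow" are what
     search engines assume when nothing is specified: -->
<meta name="robots" content="index, follow">
<!-- (equivalent to having no robots meta tag at all) -->

<!-- "noindex" applies to the page itself, while "follow" applies only to the
     links on the page - they govern different things and can be mixed freely: -->
<meta name="robots" content="noindex, follow">
```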
-
RE: Rebranded Website Uses a Forward Slash /at End of URS-Is This Considered a Redirect?
301 redirects will almost assuredly be utilised to keep this maneuver SEO-friendly. But wait! 301 redirects can fail to translate 'most' of the SEO authority from one page to another in two key situations. If the content is too dissimilar on the destination URL, 301s can fail to port authority across
For you, this won't be a big issue as (from the sounds of it) the pages will be almost identical, byte for byte. The new pages may be very, very slightly larger due to having source code that contains more instances of the character "/" but that's not something which would faze Google at all
Another situation where 301s can fail to move all the SEO authority across is when redirect chains occur. But you're just 301-ing "non trailing /" URLs to "trailing /" URLs, so it shouldn't be a problem right? Hmmm there are ways you could come unstuck here
Let's imagine we have a hypothetical retail site called "buymyproducts.com"
Let's imagine that a few years ago, the site used to be on HTTP (insecure) and has moved over to HTTPS (encrypted)
All pages were influenced by a HTTPS-injecting redirect; let's create an example:
http://buymyproducts.com/product-category/product
was 301 redirected to
https://buymyproducts.com/product-category/product (with HTTPS)
That redirect rule now sits within the web.config or .htaccess file and waits for insecure requests, redirecting as appropriate
Now we want a new redirect rule, and it will affect the page like this:
https://buymyproducts.com/product-category/product
will be 301 redirected to
https://buymyproducts.com/product-category/product/ (with a trailing slash)
That seems fine, but when the oldest architecture is queried, you'll end up with redirect chaining like this:
A) http://buymyproducts.com/product-category/product
will be redirected to
B) https://buymyproducts.com/product-category/product (with HTTPS)
which will then be redirected to
C) https://buymyproducts.com/product-category/product/ (with HTTPS and a trailing slash)
... so as you can see, your redirects will begin to chain unless you foresee that problem up-front and write 'more complex' redirect rules that just connect A to C whilst entirely skipping B.
If the site existed on the oldest architecture (no trailing slash, insecure / HTTP) for the longest time (say 7 out of 10 years) then it's likely that many of the best links will still be hitting the very oldest architecture in terms of link destinations. Those backlinks won't translate into SEO authority for your site (very well) if your redirects begin to chain-up
To stop yourself from losing large chunks of legacy-authority, you'd have to do the redirects really well and ensure that your developer's rules do not ever begin to chain. If they are confident that they can avoid this chaining by writing much more complex redirect rules then go for it. If not, hold off
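Purely as an illustration, here's a hypothetical .htaccess sketch (Apache mod_rewrite, using the made-up buymyproducts.com example) of a 'more complex' rule of that kind - the aim being that however old the requested architecture is, the visitor gets ONE 301 straight to the final form (A to C), never a chain:

```apache
RewriteEngine On

# Fires when the request is still on HTTP and/or is missing the trailing slash.
# Real files (images, CSS) and the bare homepage would need their own rules -
# this sketch only covers 'page' style URLs such as /product-category/product
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{HTTPS} off [OR]
RewriteCond %{REQUEST_URI} !/$
RewriteRule ^(.+?)/?$ https://buymyproducts.com/$1/ [R=301,L]
```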
-
RE: Domain Authority hasn't recovered since August
Thanks for your responses Maureen
From what I know, when you alter your site to be 'faster', you sometimes have to wait a few days for that to start reflecting in the page-loading speeds. I am pretty sure that, if you have server-side caching enabled, and resources have been cached (previously) non-compressed, then sometimes the old resources will continue being served to people for days (or even weeks) after alterations are made
This is certainly true of image compression (where the old JPG / PNG files continue to be served after being replaced with more highly compressed versions, since the cache has not refreshed yet) - I am unsure of whether that applies to GZip compressed files or not (sorry!)
From what I understand, page-speed optimisation is not a straightforward, linear process. For example many changes you could make benefit 'returning' visitors whilst making the site slower for first-time visitors (and the reverse is also true; there are changes which take you in both directions). Due to these competing concerns, it's often tricky to get the best of both. For example, one common recommendation is to take all your in-line (or in-source) CSS and JS and place it in '.css' or '.js' files which are linked to by your web pages
Because most pages will call in the 'separated out' CSS or JS files as a kind of external common module (library), this means that once a user has cached the CSS or JS, it doesn't have to be loaded again. This benefits returning site-users. On the flip-side, because external files have to be pulled in and referenced on the first load (and because they often contain more CSS / JS than is needed) - first time users take a hit. As you can see, these are tricky waters to navigate and Google still doesn't make it clear whether they prefer faster speeds for returning or first-time users. In my experience, their bias floats more towards satisfying first-time users
Some changes that you make like compressing image files (and making them smaller) benefit both groups, just be wary of recommendations which push one user-group's experience at the expense of another
For image compression, I'd recommend running all your images (download them all via FTP to preserve the folder structure) through something like Kraken. I tend to use the 'lossy' compression algorithm, which is still relatively lossless in terms of quality (I can't tell the difference, anyway). Quite often developers will tell me that a 'great' WordPress plugin has been installed to compress images effectively. In almost all cases, Kraken does a 25%-50% better job. This is because WP plugins are designed to be run on a server which is also hosting the main site and serving web-traffic; as such, these plugins are coded not to use too much processing power (and they fail to achieve a good level of compression). I'm afraid there's still no substitute for a purpose-built tool and some FTP file-swapping :') remember though, even when the images are replaced, the cache will have to cycle before you'll see gains...
Hope that helps
-
RE: Domain Authority hasn't recovered since August
It may help, but a lot of plugins burden the server more on the back-end than they do on the front end. Still, it all helps! In this case though, I wouldn't expect anything noticeable in that area (at all)
Right... I wrote WAY more below than I had anticipated writing. But I think I have demonstrated pretty conclusively that yes, site performance is your main hindrance. I may use some dramatic language below, it's just because I'm passionate about search and SEO :') so please don't be offended. That being said, yeah the situation is pretty bad
It (the site) actually is very slow and laggy, that could have a lot to do with it if site performance has decreased. Using my favoured page-speed tool, GTMetrix - you can easily see that the scores are pretty bad all around. Here's a screenshot
If you look at the waterfall chart it generates (needs a free account only, no payment details required) then you can see that the request "GET ryemeadgroup.co.uk" occurs three times and seems to take ages and ages to respond. Looking at the data as a once-over, I can't tell if that's just the request to get the whole page (so obviously it would be longest) or something else. If that is what it is, I don't get why it recurs thrice
You could optimise all your images. You could set GZip compression and do lots of things, but the fact is - the server environment is just horribly, horribly weak. I did a very small, few minutes long stress test which I had to cancel almost immediately. Even crawling the site at a few URLs per second stops it rendering and causes it to time-out! If I had left the test going it could have hurt the site or taken it offline, so before that happened I stopped the crawler in its tracks
By the way, this is a crawler designed for low-intensity SEO crawling, not actual stress-testing or DDoS simulation. From what I can see here, any user with a reasonable internet connection (maybe BT Infinity or one of the Virgin cable deals) could, if they wanted to, just take your site offline, just like that. Someone trying to do it maliciously would use much more aggressive tools and crawling techniques
The crawl delay has probably been set in your robots.txt to compensate for this. But what that means is, with the crawl delay on, Google can't index your site and content fast enough. With the delay removed, they still won't be able to, because even their basic, non-intensive crawling will take your site offline in seconds / minutes
Obviously when a new site goes live, even with the same files on the same server-environment, it slows down for a bit (while the cache builds back up again). Whilst that could be part of the problem, the main problem is that no matter what you do, that server environment is only fit for a hobbyist, not a fully-fledged business. Worse still, even if you did some amazing Digital PR and got loads of traffic, you wouldn't benefit from it because it would knock the site offline. So even when you win, you'll lose anyway
Check out these terrible (worst I have ever seen) Google Page-Speed loading scores. I fired Google off to check your site, just after I had finished and killed my extremely moderate stress-test. Look at this screenshot. I know Google PSI asks for too much, but this is just dire
Let's check through Google's Mobile Website Speed Testing tool (which fires requests through 3g, just to be safe). This time I left a margin of time after the mild stress-test to see if scores got significantly better. Nope, the results are still really poor
Let's try Pingdom Tools. Here are the results. Again, really poor grade. Wouldn't be happy if my kid got a D on a school assignment, not happy with the grade here either. Beginning to see a pattern with all this?
I guess you might be in one of those situations where decision-makers are saying, before we put more money into the site (for a better server) we want to see more success. Well guess what? That's technically impossible. If you get more traffic, your site will go down, taking the Google Analytics tracking script with it. So all that traffic will be invisible to them, and they'll never have the data to decide that they need better. As such, it's a vicious circle here unless they'll just budge
What you're in danger of here is taking an old mule on its last legs and 'optimising it'. Give it reinforced leg-braces, stuff it full of steroids. It all helps a bit, but... you know, never be surprised when it entirely fails to beat an actual race-horse. It's not winning the Grand National; the old girl (the hosting environment) simply doesn't have it in her
So what's the problem? Not enough processing power (brain-power) for the server? Not enough RAM (memory)? Not enough bandwidth?
It could be any or all of the above. When someone requests data (web-pages) from your server, three basic things have to happen. The server has to 'think of' what the user wants; if the processor can't keep up, then no matter how good the bandwidth, it's like putting "2+2=4" on a huge blackboard and expecting it to look good, look sophisticated (it won't). Next you have local memory. Once the server thinks of what it has to 'assemble' for the user, those lego pieces have to be put down in (very) temporary 'storage' before they can be shipped to the user. If you have great processing power and bandwidth on your server, great - but if it's funneled through narrow local memory... It's like trying to fit the entire theory of general relativity on one corner of a post-it note. It's not happening
Finally you have your bandwidth. If everything else is great and your bandwidth sucks, then locally you generate complex pages really fast - but can't get them 'shipped' to the user in a timely fashion
I don't know what the exact problem(s) with your server are, but it sucks. You have to investigate and secure better spend, or it will never ever improve!
Quite often, page-speed changes only influence moderate gains in Google's SERPs. That's because once you reach a certain standard, most users will be satisfied and so will Google. But in your extreme situation, I'd wager that you are under nasty algorithmic SERP devaluations.
Dev changes and coding will only get you so far, at the end of the day your site needs a good home to live in. Currently, it doesn't have one. It doesn't live on a server that Google would take seriously for an online business (IMO)
Important P.S: There is one other alternative issue, other than what I have summarised. It may not be that the server is weak, it may be that the server is programmed to 'fake' weakness and 'play dead' when one source (a crawler, or a user) gets too aggressive. If so, that same fail-safe has been over applied and is affecting Google's results. Play dead to Google? Get dead results. To establish whether my initial thoughts are right - or whether this final 'PS' is correct, we'd need to talk in real-time (over chat) and establish a two-way stress test. I'd need to stress the site again a little, make it time out for me - then see if it's also timing out for you. If it's affecting just me, your problem is an over-applied defense mechanism. If it's affecting both of us... the server is garbage
-
RE: Domain Authority hasn't recovered since August
There's a lot of conflicting information circulating about, what constitutes 'proper' redirects. If your content is slightly different on the new domain, then 301 redirects won't translate the 'full' amount of SEO authority across. You did say that, the content is exactly the same on the new domain - so I guess that wouldn't be it!
That makes me think that something could be technically wrong with the redirects, or that something is different for the new domain. Is it still on the same hosting environment, or did you move that when the new domain was applied? I am wondering if page-loading speeds have changed negatively
Another thing I see is that, in your robots.txt file you have set a crawl delay. If that wasn't on the site before it moved, it could potentially be hampering Google in terms of ... keeping their view of the site up to date (which in turn could hit rankings)
-
RE: Domain Authority hasn't recovered since August
There are a few things to consider here. The first and foremost, is that PA and DA (Moz's metrics) are 'shadow metrics' which are meant to mimic Google's true PageRank algorithm. Since Google has never made PageRank public knowledge (except for a very watered down, over-simplified version which used to be accessible through some browser extensions, which Google have now decommissioned) - obviously SEOs needed to build a metric all of their own. Moz accomplished this
Due to this, many backlink-index providing platforms (like Moz, Ahrefs, Majestic etc) have tried to create alternate metrics (PA, Citation Flow, Ahrefs Rating) based upon similar philosophies, so that web-marketers have 'something' to go on, in terms of evaluating web-page worth from a machine's perspective. But don't be fooled, Google evaluates the strength of web-pages via their own internal PageRank algorithm (in its 'true' form, web-marketers have never seen it!)
Because Moz's page and link index is nowhere close to the same size or scope as Google's, PA and DA are 'shadow' metrics. They are indicators only, and are to be taken with a pinch of salt. Google does not use 'Page Authority' or 'Domain Authority' from Moz in their ranking algorithms, instead they use PageRank
Because PA and DA are shadow metrics, based on a smaller index (sample of web-pages), they don't react as quickly to change as Google's 'real' page-weighting metrics. As such, unless you're seeing a colossal drop-off in terms of traffic, revenue etc... I wouldn't worry much (at all) about your DA score
**If you are also** seeing a performance drop-off, that's bad news and it hints at a botched site migration with improperly configured redirects
-
RE: Site Crawl Status code 430
This is a common issue with Shopify hosted stores, see this post:
It seems to be related to crawling speed. If a bot crawls your site too fast, you'll get 430s.
It may also be related to the proposed, 'additional' status code 430 documented here:
"430 Request Header Fields Too Large
This status code indicates that the server is unwilling to process the request because its header fields are too large. The request MAY be resubmitted after reducing the size of the request header fields."
I'd probably look at that Shopify thread and see if anything sounds familiar
-
RE: Huge drop in rankins, traffic and impressions after changing to CloudFlare
Trying a Reverse-IP lookup via domain may also help the OP to assess any potential threat(s) in that area
-
RE: Can subdomains hurt your primary domain's SEO?
Under certain circumstances it can harm the rankings and SEO performance of your main site. In terms of low-end technical SEO issues like broken links and stuff, that usually wouldn't be a problem (unless it were creating loads of broken links from the sub-site, to the main site). If you had a more substantial issue like an active penalty (manual GSC notification) or a malware threat (or hacked content) on your subdomain, that could assuredly harm the main site's rankings. Except in extreme circumstances such as these - I wouldn't worry about it too much
-
RE: Tracking Chat Conversions on WP?
Thanks for the info! What this means is that, although Analytics (where people typically analyse their funnels, by porting their AdWords data into GA) can track that users are visiting pages, and (if you have set it up right) can tell where those users came from (e.g. AdWords / Ads), it cannot tell that chats (leads) are being opened, and it cannot determine the value of any successful conversion
For that to happen, your chat plugin (and your AdWords account) would actually have to talk to Google Analytics. For example, if your chat plugin were coded to fire a confirmation message from the operator to the chat-user (containing the amount they paid and the fact they converted), that information could be wrapped into an event fired to Google Analytics via JavaScript. From there you could easily filter down to users from paid search (PPC / Ads) only, and then just view the number of conversions and the value ascribed to them
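To make that concrete, here is a rough sketch of the kind of event you could fire. It assumes the standard gtag.js snippet is already on the page, and it assumes (hypothetically) that your chat plugin exposes a callback when the operator confirms a sale - the hook and event names below are illustrative, not Chatra's real API:

```typescript
// Assumes the standard gtag.js tracking snippet is already installed on the
// page (it defines the global `gtag` function). The chat-plugin hook below
// is hypothetical - check your plugin's docs for the real callback name.
declare function gtag(...args: unknown[]): void;

function onChatConversionConfirmed(orderValue: number): void {
  // Send a GA event carrying the conversion value, so it can later be
  // segmented by source / medium (e.g. Paid Search) inside Google Analytics
  gtag('event', 'chat_conversion', {
    event_category: 'chat',
    event_label: 'operator_confirmed_sale',
    value: orderValue,
  });
}

// e.g. your chat plugin's (hypothetical) callback might call:
// onChatConversionConfirmed(249.99);
```

Once events like this are arriving in Google Analytics, you can segment them by source / medium and see chat conversions driven by paid search specifically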
What we have identified is that your weak point is one of these three:
- Your AdWords / Ads account is not talking to GA properly (maybe you don't have Google Analytics? In that case you need to get it - it's the centre-point where all the data needs to go)
- Your chat plugin is not sending data to GA, which could then be married (within Google Analytics) to your AdWords / PPC data
- Both of the above at the same time
So the steps I can see are:
- Step one would be (if you don't already have it) setting up Google Analytics
- Then having it properly integrated with your AdWords so both talk to each other
- Then making your chat plugin also talk to Google Analytics
- Finally - deriving all your wonderful insights, in the Google Analytics back-end
For example, ZenDesk properly integrates with Google Analytics (see this post). Luckily your chat plugin (Chatra) also has this functionality (see here).
One concession with my answer here: I haven't told you how to get AdWords data into Chatra, or Chatra data into AdWords. Sorry about that, but trust me when I say an Analytics integration (GA is free) will be better for you. Sorry I'm not 'exactly' answering the question here, still doing my best :')
You must confirm with Chatra exactly what data their Analytics integration will send to Google Analytics. For example, maybe the Chatra / GA integration only sends the number and length of chats, but not chat-based conversions (as you specified earlier, users convert within the chat, so that conversion data MUST come from Chatra or similar). If that's the case, you'd have a problem and would have to seriously consider other alternatives like ZenDesk or something else
For each plugin you think of trying, you'd have to email them with the same question: what data can flow from their service to your Google Analytics? If you don't like what you hear, it's the WRONG plugin for you
Hope that helps
-
RE: Duplicate titles from hreflang variations
Aha I see! That makes some sense. If the products are 'branded' and therefore the name never changes in any language, you have two options
Let's imagine you are selling a branded air conditioning unit, with the made-up name of GreenAir (maybe it's more economical and uses less electricity, thus the name from the 'green movement')
You could just leave it duplicate:
- EN: GreenAir | GreenWave Solutions
- FR: GreenAir | GreenWave Solutions
Or you could add more contextual info, which would be better:
- EN: GreenAir Environmental Air Conditioning Unit | GreenWave
- FR: GreenAir Unité de Climatisation Environnementale | GreenWave
I know, I know - my French sucks (actually that's from Google Translate). But still, you can see that you could add more in there. The hurdle for you will be the cost of deploying that level of complexity
From a straight-up SEO POV, I stand by my preference. But once you factor in mass translation work and targeted, dev-based implementation... you may feel otherwise!
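If it helps to visualise it, this is roughly what the head of the FR page might contain once the title is localised - the domain and URL structure below are made up purely for illustration:

```html
<!-- Rough sketch of the FR page's head once the title is localised.
     The domain and URLs are invented for this example. -->
<head>
  <title>GreenAir Unité de Climatisation Environnementale | GreenWave</title>
  <link rel="alternate" hreflang="en" href="https://www.greenwave-example.com/en/greenair/" />
  <link rel="alternate" hreflang="fr" href="https://www.greenwave-example.com/fr/greenair/" />
  <link rel="alternate" hreflang="x-default" href="https://www.greenwave-example.com/en/greenair/" />
</head>
```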
-
RE: Search Console Click Through Results
That is actually really, really weird! I am sorry that I don't have an answer for you. What would be really cool would be if you posted the Search Console bug on Google's Webmaster Central help forum - a Google rep or someone from the wider community may reply. If someone from their side does crack it, I'd be interested to hear the update here on Moz's forums
-
RE: Search ranking for a term dropped from 1st/2nd to 106th in 3 months
Thanks for the info! It's good to get a bigger picture of the nefarious 'globe' network which seems to link to every site on the entire internet, with absolutely zero value-add whatsoever for end users. It's interesting to see that you guys got hit by some variants of that pure-spam domain, which didn't seem to hit us. Clearly the problem is far more widespread than we had at first anticipated
We also disavowed a whole load of non-globe-related domains; those weren't in our export
What I'm talking about in terms of the 'targeted' methodology is not the deployment of the disavow, but the decision-making process before the disavow file was compiled. We really made sure we got a very granular view of each and every link before deciding whether to disavow it or not. We had rows of metrics against each link before we decided whether to keep or disavow any particular one
In almost all situations, once we reached deployment we used domain-level disavow directives. There were only one or two exceptions, where the client had good editorial pieces on a site yet also spammy banner / sidebar links from paid advertising. In such situations we used a mixture of directive types, to try (as hard as we could) to let the good links through the net - see the sketch below. That being said, very few people will be in that same situation. In the majority of cases, if you don't want one link from a domain, you don't want any!
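To illustrate, a disavow file mixing the two directive types looks something like this - every domain and URL below is invented:

```
# Spam network: nothing worth keeping, so disavow at domain level
domain:globe-spam-network-1.example
domain:globe-spam-network-2.example

# Mixed-value site: keep the editorial coverage, disavow only the paid placement URL
https://mixed-value-site.example/paid-sidebar-advert-page/
```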
-
RE: Search Console Click Through Results
1.) Select a web property in Google Search Console (remember that 'www' usage, protocol and HTTP vs HTTPS usage all render as different web properties in Search Console)
- https://d.pr/i/cqH7XR.png (screenshot)
2.) Don't go into "Search Analytics", go into Google's new "Performance" report instead
- https://d.pr/i/PGwX48.png (screenshot)
3.) Click to browse pages, and apply a filter here if you want to (it is not required; this can all be done with the mouse only). Once you find the page you want, click on it
- https://d.pr/i/EeKlNh.png (screenshot)
4.) Once the page you want to look at has been 'clicked' and only that page is visible, go back to query data
- https://d.pr/i/xP05QX.png (screenshot)
5.) Check out the query data, which is related only to your chosen page
- https://d.pr/i/hZ1LM3.png (screenshot)
And that's that!
-
RE: Duplicate titles from hreflang variations
I think it is an issue because people browsing your site in other languages will see the wrong-language title in their browser tabs if they are multi-tab browsing! The title tag is still one of the important tags for SEO; nothing has really come along to replace it
A business's ambition with an international roll-out is to break into new (foreign) query-spaces and gain extra traffic (especially from Google, or the leading search engines of other nations like Yandex and Baidu). Google's ambition, when adding your international pages to their index, is to open up areas of the web which were previously closed to their audience due to the language barrier. But they want your content to be tailored to those international audiences - traffic which Google has no obligation to send your way. Google wants good UX for their searchers, so that Google remains top dog in the search world
The less tailored your international roll-out is, and the shallower it is (with more pieces missing), the less confident Google will be that sending their users to you will result in positive search sentiment
Every piece of the jigsaw which you are missing, counts against you. It makes your international roll-out look more like a quick Google-translate powered land-grab, and less like an authentic international roll-out
My question to you is, when you identify a bad signal - why carry on sending it to Google?
Search is a competitive environment. If there are things you won't do, others will