Best posts made by effectdigital
-
RE: Following urls should be add in disavow file or not
I hear what you are saying. Like you say, there are people on both sides of the fence. I get rid of them, and I am pretty sure I've seen some examples where it has actually benefited results
-
RE: Duplicate titles from hreflang variations
I think it is an issue because people browsing your site in other languages will have the wrong language title displayed in their browser tabs if they are multi-tab browsing! The title tag is still one of the important ones for SEO; nothing has really come along to replace it
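To illustrate (a rough sketch only - example.com and the URLs are invented), each language version should carry its own translated title while both versions share the same hreflang pair:

On /en/red-running-shoes/:
<title>Red Running Shoes | Example Shop</title>
On /de/rote-laufschuhe/:
<title>Rote Laufschuhe | Example Shop</title>
On both pages:
<link rel="alternate" hreflang="en-GB" href="https://example.com/en/red-running-shoes/" />
<link rel="alternate" hreflang="de-DE" href="https://example.com/de/rote-laufschuhe/" />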
A business's ambitions, in terms of an international roll-out, are to break into new (foreign) international query-spaces and get extra traffic (especially from Google, or from leading search engines in other nations like Yandex and Baidu). Google's ambitions (when adding your international pages to their index) are that their audience can break into other areas of the web which (due to the language barrier) were previously closed to them. But they want your content to be 'tailored' to their international audiences first - traffic is something Google has no obligation to send your way. Google wants good UX for their searchers, so that Google remains top dog in the search world
The less tailored your international roll-out is, the more shallow it is (with more pieces missing), the less confident Google will be. They will be less confident that sending their users to you will result in positive search-sentiment
Every piece of the jigsaw which you are missing, counts against you. It makes your international roll-out look more like a quick Google-translate powered land-grab, and less like an authentic international roll-out
My question to you is, when you identify a bad signal - why carry on sending it to Google?
Search is a competitive environment. If there are things you won't do, others will
-
RE: How do I determine if some of our paid keywords should be changed to organic?
Usually you just see which search terms (a more specific subset of keyword data) are working really well for you on PPC. You then exclude those specific search terms (NOT the whole keyword, which may be at least partially matched) from your bidding strategy. Note that this is only a good idea if you are the ONLY one showing ads for the term (otherwise you lose your ad, and some users may click on competing ads instead of your organic listing - bad). In modern times most people just go full-balls down both channels, which in the end is probably best - as it's so fiddly to get right that the time spent doing so often exceeds the cost-efficiency yield
-
RE: Is there a benefit to changing .com domain to .edu?
No, none whatsoever. The old TLD bonus debates drew an accurate correlation but completely inaccurate causality
People thought:
1) I see lots of EDU sites
2) They rank really well
3) If I make an EDU it will rank well
... WRONG! Google aren't that stupid. Otherwise all webmasters would now be using EDU domains and all other domains would be pointless (which would be a weird internet to live on)
The truth was actually this:
1) EDU TLDs (Top-Level Domains) tend to be chosen by educational bodies or organisations
2) Such organisations are usually run by educated people and academics
3) One thing those people are good at, is creating really strong (in-depth) accurate content
4) As such many EDU sites naturally became prominent, because of Google's normal ranking rules (not some weird EDU TLD bonus scheme)
If you're looking for quick and easy answers in SEO, you're gonna have a bad time
-
RE: Free tool, and it ranks well for adult sites and checking if they are down, will that hurt us with ranking for normal sites with google?
It could potentially do some harm, yes. If you are even referencing porn sites and people can navigate to that URL internally, then minors could find the name of the porn provider and Google it, or click links from your site to the adult site (if there are any links). As such you automatically make your site an adult site. How severely Google will grade you on that, I don't know. It's a risk I wouldn't take, personally. Same for linking to gambling sites, or new (fake) medical product and treatment sites
-
RE: Truncated product names
If you had two different source codes served via user-agent (web-user vs googlebot) then you'd be more at risk of this. I can't categorically state that there is no risk in what you are doing, as Google operates multiple mathematical algorithms to determine when 'cloaked' content is being used - and guess what? Sometimes they go wrong
That being said, I don't believe your risk of garnering a penalty is particularly high with this type of thing
You're in a really gray area because you aren't serving different URLs - but you could be serving different content (albeit only slightly). I say 'could' rather than 'are' as it entirely depends upon whether Google (on any particular crawl) decides to enable rendered crawling or not
If Google uses rendered crawling, and they take the content from their headless-browser page-render (which they can do, but don't always choose to as it's a more intensive crawling technique) then your content is actually the same for users and search engines. If however they just do a base-source scrape (which they also do frequently) and they take the content from the source code (which doesn't contain the visual cut-off) then you are serving different content to users and search engines
Because you've got right down into a granular area where the rules may or may not apply conditionally, I wouldn't think the risk was very high. If you ever get any problems, your main roadblock will be explaining the detail of the problem on Google's Webmaster Forums. Support can be very hit and miss
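For what it's worth, the 'visual cut-off' scenario described here usually comes from CSS along these lines (a generic sketch, the class name is invented) - the full product name stays in the source code and only the rendered view is truncated:

.product-name {
  white-space: nowrap;        /* keep the name on one line */
  overflow: hidden;           /* hide whatever doesn't fit */
  text-overflow: ellipsis;    /* show "..." at the cut-off point */
  max-width: 200px;
}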
-
RE: Should I noindex user-created fundraising pages?
Be VERY careful
Thing #1) Just because you stop Google indexing and crawling some pages, that doesn't mean they will give that same traffic (keywords linking to those pages) to other URLs on your site. They may decide that your other URLs do not satisfy the specific keywords connecting with the fundraiser URLs
Thing #2) CHECK. Go onto Google Analytics and actually check what percentage of your Google traffic (and overall traffic, I guess) comes specifically through these URLs. If it's like 2-3%, no big deal. If most of your traffic comes to and lands on these pages, no-indexing them all could be the single largest mistake you'll ever make
Blog posts and articles are fun but no substitute for checking your own, real, actual, factual data. Always always do that
-
RE: SEO Implications of firewalls that block "foreign connections"
I guess that if Google decides to crawl your site using one of their data-centers from one of the blocked regions, suddenly Google will believe that your whole site has gone down and become inaccessible (as Google rarely launches crawls for one website from multiple regional data-centers simultaneously)
Exempting GoogleBot via user-agent would be the only possible work-around (that I know of). If those trying to access your site (whom you are trying to block out) became aware of this modification, they could alter their scripts, browsers and tools to send you the GoogleBot user-agent (thus penetrating your firewall pretty easily)
In the end, you just have to decide what's more important to you
It might be possible to identify Google's data-centre IP addresses from server logs and exempt those instead of exempting their user-agent, but that would probably need a full time employee just to keep up with all the changes. You can be sure that Google won't make it easy to identify their data centers via IP data
-
RE: Does google sandbox aged domains too?
The authority has probably decayed, I think it's more a case of starting over and rebuilding the authority - rather than waiting and hoping for the best. I know, it sucks when you have shelled out on a domain. But in my experience domain purchasing is really hit and miss. If you don't see an immediate difference, often you don't see one at all. Maybe others have different POVs though
-
RE: Missing/Duplicate Content but it's definitely all right!
Even if you disable JS the cited elements are there, so it must be a Moz crawler bug!
-
RE: When serving a 410 for page gone, should I serve an error page?
Completely agree with this.
Also deploy Meta no-index (through the HTTP header / X-Robots, rather than through the HTML if that ends up being a problem... Info on both HTTP and HTML deployments here: https://developers.google.com/search/reference/robots_meta_tag)
Then you're firing both barrels of the shotgun. Telling Google not to index the pages and telling Google that the pages won't be coming back
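As a rough sketch of that double-barreled setup (nginx syntax, the URL is invented), the 410 and the header-based no-index can be served together:

location = /retired-page/ {
  add_header X-Robots-Tag "noindex" always;   # no-index via the HTTP header ('always' so it is sent on non-2xx responses too)
  return 410;                                 # page gone, permanently
}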
-
RE: Too many links pointing to our privacy policy page: Hurting our ranking efforts of main pages?
I'd at least give it a try and see what happens; if it turns out that your hypothesis is wrong you can always un-disavow the links. When you upload a txt file to the disavow tool, it counts that as your total disavow efforts (so if you just upload a txt file containing only the links you have spotted, you may lose previous disavow work). Be sure to download your existing disavow file so you can add to it, instead of just uploading the links you have found
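For reference, the disavow file itself is just a plain text list - something like this (the domains and URLs are invented), with your existing entries kept and the new ones appended:

# existing disavow work (keep these lines)
domain:spammy-directory-example.com
# newly spotted sitewide links
https://widget-example.com/some-page.html
domain:another-linking-example.com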
A side-effect of this though, is that later you could upload your disavow again excluding those links and un-disavow them. Before you do anything, you'd want to evaluate the technical features of those links. If they are all no-followed, they won't count towards Google's rankings anyway, so no action would be required (on your part)
Generally speaking, I would suggest making your homepage more link-worthy so that it gains more backlinks over time. Since this case is so extreme, you might actually want to consider disavowing (or getting those links no-followed). Followed site-wide links can negatively affect rankings, under certain circumstances
-
RE: MOZ Crawler
Yeah it will take a while but it's necessarily slow. Moz's crawler has to share bandwidth with your real, human users. If they crawl your site too fast they could lag your site or accidentally DDoS you or something
Since it's a paid-for service, I'm guessing that Moz err on the side of caution with Rogerbot and don't crawl insanely fast. I know that Rogerbot supports adding a crawl delay to make the crawling even slower:
https://moz.com/help/moz-procedures/crawlers/rogerbot
But I don't think there are any 'go faster' commands. Whilst I really like Moz's crawler I tend to find that the SEMRush one is faster for on-site stuff, though it does have URL crawling caps
To be honest, just get Screaming Frog and learn how to use that. Now that it has a database storage mode (instead of maniacally trying to store all the crawl data in the RAM) - it probably could crawl all 50K URLs. It could probably do it faster than Rogerbot, though you wouldn't wanna go nutty as you could easily DDoS yourself :')
-
RE: Best Practice Approaches to Canonicals vs. Indexing in Google Sitemap vs. No Follow Tags
First of all keep in mind that Google has chosen the pages it is deciding to rank for one reason or another, and that canonical tags do not consolidate link equity (SEO authority) in the same way which 301 redirects do
As such, it's possible that you could implement a very 'logical' canonical tag structure, but for whatever reason Google may not give your new 'canonical' URLs the same rankings which it ascribed to the old URLs. So there is a possibility here that you could lose some rankings! Google's acceptance of both the canonical tag and the 301 redirect depends upon the (machine-like) similarity of the content on both URLs
Think of Boolean string similarity. You get two strings of text, whack them into a string-similarity tool - and it tells you the 'percentage' of similarity between the two text strings. Google operate something similar yet infinitely more sophisticated. No one has told me that they do this; I have observed it over hundreds of site migration projects where sometimes Google gives the new site loads of SEO authority through the 301s and sometimes not much at all. For me, the two main causes of Google refusing to accept new canonical URLs are redirect chains (which could include soft redirect chains) but also content 'dissimilarity'. Basically, content has won links and interactions on one URL which prove it is popular and authoritative. If you move that content somewhere else, or tell Google to go somewhere else instead - they have to be pretty certain that the new content is pretty much the same, otherwise it's a risk to them and an 'unknown quantity' in the SERPs (in terms of CTR and stuff)
If you're pretty damn sure that you have loads of URLs which are essentially the same, read the same, reference the same prices for things (one isn't cheaper than the other), that Google has really chosen the wrong page to rank in terms of Google-user click-through UX, then go ahead and lay out your canonical tag strategy
Personally I'd pick sections of the site and do it one part at a time in isolation, so you can minimise losses from disturbing Google and also measure your efforts more effectively / efficiently
If you no-index and robots-block URLs, it KILLS their SEO authority (dead) instead of moving it elsewhere (so steer clear of those except in extreme situations; they're really a last resort if you have the worst sprawling architecture imaginable). Canonical tags can shift ranking URLs and relevance, but don't pipe much authority. 301 redirects (if handled correctly) do all three things
What you have to ask yourself is, if you flat out deleted the pages you don't want to rank (obviously you wouldn't do this, as it would cause internal UX issues on your site) - if you did that, would Google:
A) Rank the other pages in their place from your site, which you want Google to rank
B) Give up on you and just rank similar pages (to the ones you don't want to rank) from other, competing sites instead
If you think (A) - take a measured, sectioned, small approach to canonical tag deployment and really test it before full roll-out. If you think (B), then you are admitting that there's something more Google-friendly on the pages you don't want to be ranking, and you just have to accept that your Google-to-conversion funnel will never be completely perfect like you want it to be. You have to satisfy Google, not the other way around
Hope that helps!
-
RE: Inbound Links - Redirect, Leave Alone, etc
If you want to disavow and redirect at the same time, you probably wouldn't want to use a 301, which passes SEO authority (and negative equity) along to the resultant page. I'd probably use a 302 or a 307, and then disavow the linking domain (or page) in Google's disavow tool. I might also try to no-index the redirecting URL, though with a redirect in place this could not be done within the HTML / source code. You'd have to deploy the no-index directive via the HTTP header instead, using X-Robots
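A minimal sketch of that combination (nginx syntax, URLs invented) - a temporary redirect plus the header-based no-index:

location = /old-linked-page/ {
  add_header X-Robots-Tag "noindex" always;                # header-based no-index, since the page itself never renders
  return 302 https://www.example.com/destination-page/;    # temporary redirect, rather than an equity-passing 301
}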
-
RE: Very wierd pages. 2900 403 errors in page crawl for a site that only has 140 pages.
This is almost assuredly a link-based architectural error. It will be something similar to this:
- You load a page on EN
- You click the EN flag or language icon
- Instead of just reloading the page you are already on (since you're already on EN) the link is coded wrong and adds another /EN/ layer to the URL
- Once the new URL loads, the problem can be repeated
- This creates infinite URLs on your site
- Bad for Google, and Moz's crawler
Bet you it's something like that. If you give me the exact URL I might even be able to find the flaw and detail it for you via email or something
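For reference, the usual culprit is a relative href where a root-relative one was intended (a generic sketch):

<a href="en/category/">English</a>    <- relative: clicked from /en/, this resolves to /en/en/category/
<a href="/en/category/">English</a>   <- root-relative: always resolves to /en/category/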
-
RE: How do I fix 5xx errors on pdf's?
Big question here is, are any other tools giving 5XX errors on those PDF URLs (ending in .pdf) or just Moz on its own? Moz is known to have issues crawling URLs which contain certain characters, which other crawlers may not have a problem with - so you need a second opinion here
Install this on Chrome: https://chrome.google.com/webstore/detail/redirect-path/aomidfkchockcldhbkggjokdkkebmdll?hl=en (mainly for tracking redirects, but also tells you status info in a handy way from a pop-out button)
Visit one of the URLs directly which Moz is saying gives a 5XX. Does it do the same in Chrome? (Note: sometimes a page will look like it has rendered properly, but for some reason the server will still send an invisible 5XX response - this Chrome plugin would pick that up!)
Also check the URL with this second plugin: https://chrome.google.com/webstore/detail/seo-indexability-check/olojclckfadnlhnlmlekdihebmjpjnoa?hl=en-GB
If both those plugins agree the page loads fine on a 200, then Moz has an error. If they both agree that the page is a 5XX even though it looks legit, then you have a server response error
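If you want a tie-breaker without any browser in the way, a raw request from the command line (assuming you have curl available) shows the real status line:

curl -I https://www.example.com/documents/example.pdf

The first line of the output (e.g. "HTTP/1.1 200 OK" or "HTTP/1.1 503 Service Unavailable") is the server's actual response for that URL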
If you get mixed results we have a mystery to look deeper into...
-
RE: Is there a Risk Around Creating a Website for Each Country in The World?
Unfortunately yes. We have a number of clients who went 'geo-mad' and in almost all situations, it has caused problems for them. Sometimes it has created colossal site footprints which Google doesn't care to index (unless you're a household name, don't expect Google to care about your hundreds of thousands of URLs). Sometimes that has also caused server-load issues for them too, irrespective of Google
Other issues include Google ignoring their canonical tags and setting one language URL as the 'canonical' result (and thus de-indexing the other language URLs). This can happen due to link signals and similar content, stuff like that
Many clients in such a position have seen their pages devalued as a result of going against Google's content guidelines (and simplicity guidelines). If you're not super important, Google don't want to waste 4x, 6x or 20x crawl budget on your site just because you decide to serve in more combinations of language and geo-location. Even with perfect Hreflang deployments, a lot can go wrong if you go nutty, so cherry-pick your language/geo combinations and don't be greedy with it
If your brand is powerful online and you have loads of SEO authority / ranking power, then you can deploy hreflangs extensively and usually you can make real gains. Not everyone is in that position, most aren't
Having unique content (not powered by some crappy auto-translate plugin) per deployment is strongly, strongly recommended. By the way, if you have less ranking power than most sites which have 'successful' broad-reaching hreflang deployments, you need to adhere to Google's guidelines more strictly than those sites do. You need to make up for your lack of trust and authority by doing things by the book
Too many people look at big international sites and say: "well Google lets them use relatively thin content so I should be alright too". Nope, you are likely standing upon a platform of radically different stature to those guys, so don't over-reach too quickly or you'll stumble and fall
Also if you are planning to use canonical tags to 'canonical' from one language to another, don't do that. If a page points to another, separate URL with its canonical tag - then it tells Google that it (the active page) is the non-canonical version and usually de-indexes itself
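In other words, each language version should self-canonical and use hreflangs to point at its siblings - roughly like this on the French page (a sketch, URLs invented):

<link rel="canonical" href="https://example.com/fr/page/" />
<link rel="alternate" hreflang="fr" href="https://example.com/fr/page/" />
<link rel="alternate" hreflang="en" href="https://example.com/en/page/" />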
Be very careful how you proceed. If you increase your footprint too far, all the great authority you have built up may bleed out over a sprawling site and you could end up with nothing
-
RE: Site Migration Question - Do I Need to Preserve Links in Main Menu to Preserve Traffic or Can I Simply Link to on Each Page?
Personally I think it would be wise to preserve those links. A secondary top-of-page menu (maybe hover to expand) would preserve the authority in a similar way; footer links don't really do much in modern SEO
It depends what you want. Maybe it's worth sacrificing some small amount of authority from those pages, to have killer new UX that converts 10x better. Do the math
-
RE: Can an external firewall affect rankings?
Site speed impact is where I see this becoming a real problem, unless the setup is done correctly
-
RE: Is BigCommerce a good CMS for Improving Search Visibility for our E-Commerce Business?
The trick is usually not to migrate from one CMS to another, but instead to combine the best elements of each. For example I think Magento is a pretty big steaming pile from an SEO POV, but the way it handles eCommerce data is very efficient and regimented. So you'd want the back-end on Magento and the front-end on WordPress. I imagine that with BigCommerce it would be much the same thing. WP is so established now from an SEO POV that it's hard to beat, so you'd at least want to retain it at your shallow front-end, even if you powered the commerce system(s) a different way
-
RE: How to Configure Robots.txt File
It depends on how your unique website creates URLs and how they are formatted, it also depends upon the current contents of your robots.txt file. If you can share some examples of URLs that are blocked, which you think should not be blocked, and also the contents of your robots.txt file, someone can probably tell you what you did wrong
-
RE: Magento missing SEO fields?
I haven't worked with anyone using Magento in a while, but from memory - it relies upon 3rd party plugins to get the SEO stuff working right. This list was updated near the end of 2018: https://www.cloudways.com/blog/best-magento-seo-extensions/ - maybe it will be helpful!
-
RE: Duplicate content
This is good advice. Canonical tags would be a weak fallback compared with picking where you want to spend your efforts and choosing one 'main' site. As such, just like Alex has suggested - redirects and site consolidation are probably the best bet (this would also bind the backlinks for both domains, to one domain)
This could end up being pretty technical depending upon current site(s) performance. If it's done incorrectly (e.g: using 302s instead of 301s) then it could be a disaster
Alex's advice is right but just make sure you're careful how you approach this
-
RE: Moz-Specific 404 Errors Jumped with URLs that don't exist
404s are usually for pages that 'don't exist' so that's pretty usual. This is either:
- somewhere on your site, links are being malformed, leading to these duff pages (which may be happening invisibly, unless you look deep into the base / modified source code) - Google simply hasn't picked up on the error yet
- something is wrong with Rogerbot and he's compiling hyperlinks incorrectly, thus running off to thousands of URLs that don't exist
At this juncture it could be either one, I am sure someone from Moz will be able to help you further
-
RE: Does not having any hreflang tags for U.S Visitors lead to an increase in International Visitors?
It could easily be possible. Google usually takes the most specific directive when multiple directives contradict each other
If you previously had your site targeting 'international' in GSC and you had US hreflangs, Google would have still targeted the USA, as the US directive is more specific
Since you say you had targeting set to USA (and still do) and you may have had US hreflangs which were removed, this is a bit odd. Even without hreflangs, the GSC 'target US users' directive is still more specific, and thus Google 'should' default back to that
That being said, hreflangs might be a bit of a harder directive as they are actually coded onto your web pages. It may also be Google ignoring GSC directives for some sites to try and 'encourage' more webmasters to embrace hreflangs and proper internationally targeted websites
Google won't look at your bounce rate I don't think. Certainly not from Google Analytics (as that's your own data and it's also really easy to manipulate, e.g: you could use JS to detect what country people are connecting from and when you serve the pages you could strip out the analytics tracking script, thus lowering the bounce rate). Google use their own data, mostly the metrics from Search Console (clicks, impressions etc)
People talk about bounce rate being bad because it's bad UX (usually) and Google wants sites built with proper UX, sites that are useful to people. But the aim is to make your site better so that real people won't bounce as easily, not to get better GA bounce-rate metrics (which aren't used in Google's algorithm). Of course Google must have a way to evaluate something similar, but that number in that database wouldn't be 'the one' that they factor
I would try embedding the relevant hreflangs to support your geo-targeting and additionally checking what GSC property has your US settings in. You can always block traffic from other countries, like how if you try to view a certain series on Netflix from the UK which is only available in the USA - it says "this show isn't available in your region". If you really care that much, just do something similar (this page is not available in your region)
-
RE: Very Old Pages Creeping Up - Advice
Makes a lot of sense. You'd need to crawl all the backlinks using something like ScrapeBox Free Link Checker or Screaming Frog (with certain XPath extraction settings) to see which of these links are still live. For the live ones you'd want to run something like URL Profiler over them (with paid API keys for Moz, Majestic SEO and Ahrefs) over the links to see if any of them are worth anything (by bulk fetching metrics and aggregating them in your own spreadsheet, using your own weighted custom formula). Once that was done you could decide which ones to keep and redirect those. For the others you could handle them separately or just ignore them
-
RE: 404's being re-indexed
Well if a page has been removed and has not been moved to a new destination - you shouldn't redirect a user anyway (which kind of 'tricks' users into thinking the content was found). That's actually bad UX
If the content has been properly removed or was never supposed to be there, just leave it at a 410 (but maybe create a nice custom 410 page, in the same vein as a decent UX custom 404 page). Use the page to admit that the content is gone (without shady redirects) but to point to related posts or products. Let the user decide, but still be useful
If the content is actually still there and hence you are doing a redirect - then you shouldn't be serving 404s or 410s in the first place. You should be serving 301s, and just doing HTTP redirects to the content's new (or revised) destination URL
Yes, the HTTP header method is the correct replacement when the HTML implementation gets stripped out. HTTP Header X-Robots is the way for you!
-
RE: Duplicate canonical tag issue
Oh dear this has gotten into a bit of a mess!
For a start, there's an error in your question which may be causing you to get fewer than average responses. You have embedded two links which look like they point to different pages, but you have encoded both the links in your question to point to exactly the same site (even though the text says they should be going to different URLs). This may be confusing people who are seeking to answer your question
These are the real links for anyone who runs into this:
Old site: https://www.selldealsmango.com/
New site: https://www.dealsmango.com/
Where you say: "I have this site https://www.dealsmango.com/ which i have selected for canonical , but google is still selecting my old website" - no, you have not fully selected the new site as canonical, as the old site does NOT canonical to your new site; instead the old site still canonicals to itself. The new site canonicals to itself too, but the old site does NOT canonical to the new site
https://d.pr/i/P70o8N.png (screenshot)
Where you write: "only one page with new site link, and also put 301 redirect" - you have contradicted yourself. You say there is a 301 redirect, but also say the page is still up with a link on it. If there were a 301 redirect, all users and bots would get redirected before the page loaded and you would never see the link (at all) as pages which redirect cannot support any source code
The fact that the page is still live with a link on it means that there is NO 301 redirect. Maybe you have redirected some sub-pages, I don't know. But your homepage is your most important page, so failing to 301 redirect that (which you have not done) is a big error. Fix that ASAP if you do nothing else
Also make sure you use Google's change of address tool from within the old property's Google Search Console dashboard:
https://support.google.com/webmasters/answer/83106?hl=en
Even if you follow all of these steps, it may take a long time for performance to return now. That's because this migration was handled in a very strange way; what's done is done
Fix what you have implemented with a proper 301 redirect migration project down to the granular (A-to-B) URL level, including historic URLs which you can pull from backlink destination data, as well as from Search Console (or analytics, by blending the hostname and landing page dimensions)
You'll have to just hope for the best!
-
RE: Empty href lang
That is technically an error. If a page or post doesn't exist in a certain language, hreflangs pointing to it (from other pages) shouldn't exist. When the post is published, then the hreflangs should be written. It's probably not going to destroy your site's rankings by itself, but in SEO, the second you start making concessions on one front - people argue that you should make more concessions in other areas. Before you know it, you have a big tangle of loads of different errors. Optimisation is about making things 'optimum'. It's not optimum to have hreflang errors like that
-
RE: Site Migration - Pagination
Google doesn't use rel=prev/next any more: https://searchengineland.com/google-no-longer-supports-relnext-prev-314319 - so forget about it unless you think it has benefits for crawlers other than Google
I would do the redirects properly, so redirect the old paginated URLs to the same page (paginated URL) on the new site
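As a rough sketch (nginx syntax, the URL patterns are invented), a pattern-based rule can carry the page number across so that /old-blog/page/7/ lands on /new-blog/page/7/:

location ~ ^/old-blog/page/(\d+)/$ {
  return 301 https://www.example.com/new-blog/page/$1/;   # $1 carries the captured page number across
}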
Google usually doesn't list paginated content but it can do sometimes. A good example of this is when you type really specific queries into Google and find that Google is linking to a topic on a forum. Quite often you'll see Google linking to paginated content there. Why? Because that specific page of the topic, is the part where the thread really gets answered (or gets its best insight). Maybe some people link to that page of that thread specifically, and it becomes more popular than the first page
In those situations, Google's usual view (that first page should be canonical) gets overridden. So whilst Google 'usually' makes the first page canonical, sometimes Google can change its mind if popularity metrics suggest a different paginated URL should be canonical instead
As such, you don't need rel=prev/next (which Google doesn't even use) and you don't need to put canonical tags on paginated content pointing to the parent (which might prevent Google from overriding the default canonical URL). I would properly redirect all the old paginated URLs to all the new ones - so Google doesn't get confused
-
RE: Can slow mobile page speed affect desktop search results?
This is a really good answer. You can also look for messages from Google saying that "mobile first indexing" was enabled for your site / GSC property
-
RE: How to check if a domain has been penalised before I buy?
Check using the organic search estimates in Ahrefs, those are usually pretty good and they keep quite a long history. You can also get similar estimates from SEMRush
-
RE: Is there a way to forward banklink benefits from one domain to another without a redirect?
Canonical tags avoid duplicate content and help to determine page relevance, but common current SEO thinking is that they do not pass link equity or SEO authority. If they do, it's not much - and not comparable to the power of a 'properly' set up 301 redirect
Even when you DO use 301 redirects, they can fail for loads of different reasons. One big reason is content similarity in machine terms (think Boolean string similarity, for the content of the old and new URLs)
If even the mighty 301 has so many stipulations where it can just 'stop working' (or never work in the first place) I'd be highly, highly skeptical that canonical tags would have the desired effect
-
RE: Several hreflang links pointing to same URL
Yeah you can do language-only hreflangs. But it's pure nonsense to direct Google to the very same URL and state that it is the URL for all of those different languages. At the end of the day, Google will crawl from one data centre at a time, which may be in one of many countries. It will see one version of the page, and assume that 'this is what the page is'
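The pattern being described looks like this (a sketch, the URL is invented) - several hreflangs all claiming the same single URL:

<link rel="alternate" hreflang="en-US" href="https://example.com/page/" />
<link rel="alternate" hreflang="en-CA" href="https://example.com/page/" />
<link rel="alternate" hreflang="fr-CA" href="https://example.com/page/" />
^ three audiences, one URL - Google sees one page, not three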
If the site structure is that you have one URL only and the contents are modified based on the user's origin, then the structure is wrong as Google will have a very hard time ranking one URL as many different URLs. People who have such a structure always end up here, always argue why it's ok and then end up 'doing it properly' later on as it just doesn't work
Also note that, if you have one version of a page served to people in different regions (e.g: an EN page which is stated in the hreflangs to be for both Canadians and Americans), Google may see that as a 'minimum effort' deployment with no value proposition. Different audiences need tailored content to suit them, so a re-write of some of the content is still expected if you want to see an increased international footprint (and you're not a giant like Santander or Coca-Cola)
The number of times I see people clone their EN site into a US folder and just 'expect it to rank' with no extra effort, just with hreflangs - is staggering. Google expect to see a value proposition when you build out your site. Value-prop ('value add'), the #1 yet never talked about ranking factor
I don't think your current implementation will work very well, if at all. You may have lots of human-brain reasons why it should - but crawlers are robots
-
RE: How to overcome Connection Timeout Status Error?
This means that Screaming Frog is not 'waiting long enough' before returning the time-out error.
Just do this:
- https://d.pr/i/D7Cj5M.png (screenshot)
- https://d.pr/i/N6xdLA.png (screenshot)
Raise that number up, until you don't get 0 / Time Out any more. Note that if it does fail a lot on moderate crawl settings, there are likely to be underlying page-speed issues (either that or the machine you are crawling from has bad bandwidth)
It could also be that your crawl 'frequency' is too high; go to Configuration -> Speed and lower the thread count (to 2-3) and the URI/s (to one or two)
Finally it might be that the SF user-agent is blocked, so go to Configuration -> User Agent and switch it to Chrome
To help you - I did the crawl for you, here is your crawl data: https://d.pr/f/a1ux4b.zip (archive of crawl file and some exports)
You can actually use my crawl file as a starting point (by double clicking it) when you want to re-crawl in future. Should be useful to you
-
RE: Should I apply Canonical Links from my Landing Pages to Core Website Pages?
Yeah definitely don't do it the other way around. To be honest if the landing pages are orphaned, Google probably won't rank them anyway. If Google 'is' ranking them instead of the other ones, the real question is why does Google find them more useful, and what can you add from the landing pages to the organic pages - to make them better!
-
RE: Blocking pages from Moz and Alexa robots
That looks valid to me. It's possible you may not need "*" at the end of each rule but I can't see it doing any harm either
I might go more like:
User-agent: ia_archiver
Disallow: /*/search/
User-agent: rogerbot
Disallow: /*/search/
^ this would stop all search URLs being crawled, so even if you introduced new search facilities later in other directories - they would 'probably' be caught too (assuming that is your intention, and assuming they were still in /search/ subdirs)
Don't think what you have done is wrong though.
Always check using Google's robots.txt tester to be safe. Just put your rules into the tester (altering them to be used for all user-agents), and try out some different URL patterns. When it works as you like, update your real robots.txt file (remembering of course, to restore your rogerbot / alexa UA targeting - if you don't want the rules to also apply to Google!)
-
RE: Hreflang: customize, selection the best URL structure
1.) I'd use option two:
• http://example.com/en-ca/ - hreflang="en-CA"
• http://example.com/fr-ca/ - hreflang="fr-CA"
... as it most closely resembles the structure of a double-barreled hreflang tag. Some hreflangs only reference a location or a language; a double-barreled hreflang references both. Since you're using double-barreled hreflangs, it makes sense that your architecture should actually fit your own hreflang deployment
If your hreflangs are deployed accurately, there's no reason to use the Search Console geo-targeting. In fact, a lot of people find it overly constricting as it cuts off the tail-ends and rough edges of your traffic. Personally, I usually steer clear of it
Hreflang structure basically comes from two lists: the ISO 639-1 language codes and the ISO 3166-1 Alpha-2 country codes
The more closely your site structure mirrors that and the hreflang tag, the better
2.) Yes, single-barreled (language-only) hreflang deployment is possible:
Even Google show that this is possible:
- https://d.pr/i/HUJESr.png (screenshot)
As you can see, it can even be mixed in with other types of hreflangs so that's cool.
3.) From that Google screenshot I just pasted, you can see it goes in the header. No - do not add x-default as a hreflang to a URL which has already been assigned a language. In your situation you don't need x-default; ignore it and don't use it
-
RE: Google-selected canonical makes no sense
Oh wow that's very insensitive of Google! What you have to understand is that, most online content exists to sell products, to drive revenue and business - to a large degree that's how Google evaluates web-pages (the lens that it sees through)
If your page were commercial in nature (which obviously it is not) then Google would be making a semi-logical decision. They're trying to skip users past the 'waffle and blurb' to the 'action point' where the user performs their only meaningful interaction with the page (in this case, a contact form)
For your site this is entirely inappropriate. To be honest you could Meta no-index and / or robots.txt block the "/share" (contact form) URL - to discourage Google from crawling and indexing it. Robots.txt controls crawling (less relevant), Meta no-index controls indexation. Note that like the canonical tag, these are both still 'directives' which Google doesn't 'have' to obey (fundamentally). Don't deploy both at once, as if you deploy robots.txt first (thus stopping Google from crawling the URL) - Google won't be able to crawl and 'find' the Meta no-index directive
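If you go the Meta no-index route (leaving robots.txt alone so Google can still crawl the URL and see the directive), the tag on the "/share" URL would just be the standard:

<meta name="robots" content="noindex" />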
Remember: telling Google not to crawl one URL, doesn't necessarily mean that your preferred URL will rank in its place
Your other option is to re-code the site, so that the contact form pops out in a content-box (or slider). That way, the contact form will share the same URL as the main page - thus Google will have to rank them both simultaneously (as it will have no choice)
Sorry that you have encountered such a difficult issue, hope my advice helps somewhat
-
RE: Google Search Console "Text too small to read" Errors
Hard to tell what to do without seeing specifically where this error is coming up, but it usually has something to do with font sizes being too small on mobile devices. Mobiles have much more densely packed pixels on their screens (images are sharper, the pixels are smaller and more numerous). This means that 12 pixels (font-size) on desktop can look like 5 or 6 pixels on mobile devices (or even smaller)
This is why, with responsive design, many people don't specify a pixel-based font-size. They'll make the font size relative (rather than absolute) somehow, be that by calculating pixel width against screen width as some percentage, or by working with newer font-sizing specifications (there are many, and many ways to use them). It's all about inflating font-sizes on devices with more densely packed pixel screens, so that the fonts don't come out looking minuscule
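A small sketch of that idea (the selectors are generic): size text with relative units rather than hard-coded pixels, and make sure the viewport meta tag is present - its absence is a very common cause of this exact GSC error:

<meta name="viewport" content="width=device-width, initial-scale=1" />

html { font-size: 100%; }   /* respects the device / browser default */
body { font-size: 1rem; }   /* relative to the root size */
h1 { font-size: 2rem; }
p { font-size: clamp(1rem, 0.9rem + 0.5vw, 1.25rem); }   /* scales gently with the viewport */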
Sometimes you can get errors where, even though the site's design is responsive, as someone was writing text in the CMS text editor - it appended styling info including the px font-size, which is then inlined (overruling all your perfectly thought out font-sizing CSS rules)
If it's not mobile related at all, it's likely that the font is just generally too small
-
RE: Does anyone experienced to rank a KOREAN Keyword here?
I guess the main thing to do, if you are concentrating on giving gambling reviews - would be to focus around the terms that people are searching for right now.
It might not be bad to try for "도박" which translates to "gambling", and which has (according to Google's keyword planner) 3,600 average monthly Korean-language searches in South Korea
If you want to be more ethical there are keywords like "도박 중독 치료" (260 avg. searches / month, S.Korea) which is "Gambling Addiction Treatment". If you are reviewing online gambling sites, maybe giving some links and resources to help people suffering from addiction could also be helpful. Reviews are one type of 'advice', so really this is just an expansion of what you are already doing (though it may come with some legal entanglements)
This one is interesting: "단 도박". (480 avg. searches / month, S.Korea). It translates to "sweet gambling". But what does 'sweet' mean? Is it just that people think it's 'pretty sweet', or instead is it a special type of Korean gambling using sweets (confectionery)? Tailoring your content to your market and audience, is incredibly important
Tread carefully. There are lots of keywords like this one "도박 사이트 처벌" (10 avg. searches / month, S.Korea) - translating to "punishment of gambling sites". There are lots of queries asking about 'cases' (legal assumed) of gambling addiction and gambling punishment. Although the web is free in South Korea, maybe laws on gambling are in fact stricter (needs more research)
Indeed, this keyword "도박 마" - translates to "do not gamble" (1,600 avg. searches / month, S.Korea). This is interesting, is it some kind of cryptic warning? Who knows
This keyword "합법 도박 사이트" translates to "legal gambling site" (140 avg. searches / month, S.Korea) - hinting that not all online gambling, even in South Korea, is legal
I found this which is a very interesting read: https://www.thekoreanlawblog.com/2017/02/koreas-gambling-law.html - mind how you proceed. Very murky waters
-
RE: 301 Redirect in breadcrumb. How bad is it?
Highly doubt that would be a reason to 'lose a lot of SEO ground'. If those URLs were 404-ing before, you had breadcrumb links to 404s, and that's worse than breadcrumb links to 301s
The bigger problem was, you lost your category pages which got set to not visible. And by the way, even when you change them back to 'visible', if the 301 is still in effect - users and search engines still won't be able to access your category URLs (as they will be redirected instead!)
If the category pages have been restored and you're still redirecting them, yes that is a big problem. But it's not because you used a 301 in a link, it's because you took away your category URLs. That very well could impact performance (IMO)
-
RE: Will duplicate product information paragraphs negatively impact our site?
I wouldn't say there would be massive chances of a penalty here, that being said it's an area where you could be 'adding value' and uniqueness to your pages and you're not doing it. So your pages may be 'less competitive' and you may be missing out on an opportunity. It's more of a competitive missed opportunity than an 'error' per-se
In reality you should have one product page for each product and then just have 'product variants' for stuff like quantity, size, colour etc. On the modern web people find this easier to navigate and since many sites do offer that, they might seem like more competitive places to shop for paint cans than your site. Price does matter, but it's not the sole arbiter of how products are ranked on Google's search engine - other stuff matters too. Unless you have a virtual monopoly on the product (only you can sell it, or only you can sell it at a greatly discounted price due to a special relationship with the supplier) then I would consider the UX and design of your site. No one wants an 'arse-ache' of a browsing experience
Many tools will flag what you are about to do as duplicate content and they're technically right. But instead of going on some crazy copy-writing crusade, think about the architecture of your site. You can still have separate URLs for different product variations if you want, even via parameter-variables (though that's a bit of a 'basic' implementation). If you make it clear to Google through new, more streamlined architecture that they're all actually the same product, the duplicate description(s) won't matter 'as much' (though they'll still be a missed opportunity for more diverse rankings IMO)
You can make it even more apparent to Google that all the different variations are actually the 'same product' by utilising Product schema and some of the deeper stuff like ProductModel, which will bind it all together. Whatever you implement, test it with Google's structured data testing tool. If the tool throws errors and warnings, keep working away until they're all fixed
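A bare-bones sketch of that idea (all names and values are invented) - one Product whose size variations are expressed as models rather than as separate 'duplicate' products:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Masonry Paint",
  "description": "One shared description for every size",
  "model": [
    { "@type": "ProductModel", "name": "Example Masonry Paint 2.5L" },
    { "@type": "ProductModel", "name": "Example Masonry Paint 5L" }
  ]
}
</script>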
Canonical tags are another option but they will decrease your ranking 'footprint', and in this case I wouldn't recommend them, despite the 'slight' content duplication risks (which in reality are mostly negligible)
Final note: you say you have 'unique' descriptions, but remember if they're used elsewhere online they're not unique. If they're unique internally that's great, but if you got them all from a supplier then... obviously loads of other sites are probably using them, which could easily be a big issue for you
-
RE: Does anyone experienced to rank a KOREAN Keyword here?
Unfortunately there are too many variables to analyse this much further. It could be a legal thing where in South Korea, gambling is allowed but only certain South Korean institutions have permission. If you're not one of those suppliers, it may be illegal for people even from South Korea to use your site, which may give them leverage to remove your search results (to stop their citizens from engaging in illegal gambling)
Gambling is allowed there, but it's very tightly regulated. So could it be a law reason you get removed, not a Google reason?
-
RE: Disallowed "Search" results with robots.txt and Sessions dropped
If you have a general site which happens to have a search facility, blocking search results is quite usual. If your site is all 'about' searching (e.g: Compare The Market, stuff like that) then the value-add of your site is how it helps people to find things. In THAT type of situation, you absolutely do NOT want to block all your search URLs
Also, don't rule out seasonality. Traffic naturally goes up and down, especially at this time of year when everyone is on holiday. How many people spend their holidays buying stuff or doing business stuff online? They're all at the beach - mate!
-
RE: Can I safely block my product listing from search? Does it even make sense?
Can you explain the difference between the 'products listings' and the 'actual products themselves'?
You say you still want products and product categories to rank, but not product listings. But to most readers, a product listing is usually a product category or product page (so the info seems to contradict itself, which actually it may not do - just needs more explaining)
-
RE: Does anyone experienced to rank a KOREAN Keyword here?
Usually that depends upon the significance of the edit and how much Google trusts the query-space (or whether it is a YMYL query-space). A lot of people say that gambling sites are YMYL sites
The real question is, do the rankings get removed and then come back again later? Or do they get removed and just stay gone period?