Posts made by seoelevated
-
What do we know about the "Shops" SERP Feature?
I came across this SERP Feature in a search today on a mobile device. It does not show for the same search query on desktop. What do we know about this "Shops" SERP feature?
-
RE: What happens to crawled URLs subsequently blocked by robots.txt?
@aspenfasteners To my understanding, disallowing a page or folder in robots.txt does not remove pages from Google's index. It merely gives a directive not to crawl those pages/folders. In fact, when pages are accidentally indexed and one wants to remove them from the index, it is important to actually NOT disallow them in robots.txt, so that Google can crawl those pages and discover the meta NOINDEX tags on the pages. The meta NOINDEX tags are the directive to remove a page from the index, or to not index it in the first place. This is different from a robots.txt directive, which is intended to allow or disallow crawling. Crawling does not equal indexing.
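To make the distinction concrete, here is a minimal sketch (the folder path and page are hypothetical):

Disallow in robots.txt (a crawling directive only; it does not remove anything from the index):
User-agent: *
Disallow: /category-folders/

Meta tag in the page's head (an indexing directive, which Google can only act on if it is allowed to crawl the page):
<meta name="robots" content="noindex">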
So, you could keep the pages indexable, and simply block them in your robots.txt file, if you want. If they've already been indexed, they should not disappear quickly (they might, over time though). BUT if they haven't been indexed yet, this would prevent them from being discovered.
All of that said, from reading your notes, I don't think any of this is warranted. The speed at which Google discovers pages on a website is very fast. And existing indexed pages shouldn't really get in the way of new discovery. In fact, they might help the category pages be discovered, if they contain links to the categories.
I would create a categories sitemap xml file, link to that in your robots.txt, and let that do the work of prioritizing the categories for crawling/discovery and indexation.
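For what it's worth, a minimal sketch of that setup (the file name and URLs are hypothetical):

In robots.txt:
Sitemap: https://www.example.com/sitemap-categories.xml

sitemap-categories.xml:
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://www.example.com/categories/widgets/</loc></url>
  <url><loc>https://www.example.com/categories/gadgets/</loc></url>
</urlset>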
-
RE: Duplicate content question
@andykubrin I would like to add that another valid approach is to ignore the "issue". Are all 4 of your form pages currently indexed? If so, then this Moz-reported issue is not necessarily an actual issue. There is no "penalty" for duplicate content like this. The situation we all wish to avoid is for the search engine to choose one of the pages to index, because it sees them all as duplicates, and not necessarily index the one we want or associate with all our desired keywords we've individually targeted per page. But, if all 4 of your pages are currently indexed, and if they rank for the terms that you want, then it would be OK to ignore the issue.
As well, you might think about whether you want these pages to be indexed/rank at all. If your desire is for traffic to go to the service description pages and then flow to the forms, and if the service description pages are the ones which actually are ranking, the issue may not even matter to you. And so again, you might decide to ignore. And that would be a valid choice.
-
RE: Multiple H1s and Header Tags in Hero/Banner Images
While there is some level of uncertainty about the impact of multiple H1 tags, there are several issues with the structure you describe. On the "sub-pages", if you have an H1 tag on the site name, that means the same H1 tag is used on a bunch of pages. This is something you want to avoid. Instead, develop a strategy for which pages you would like to rank for which search queries, and then use each page's primary query in its H1 tag.
The other issue I see in your current structure is that it sounds like you have heading tags potentially out of sequence. Accessibility checker tools will flag this as an issue, and indeed it can cause frustration for people with vision disabilities accessing your pages with screen readers. You want to make sure that you preserve a hierarchy where an H1 is above the H2 is above the H3, etc.
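For illustration, a minimal sketch of an in-sequence hierarchy (the heading copy is hypothetical):

<h1>Primary query target for the page</h1>
<h2>First subtopic</h2>
<h3>Detail under the first subtopic</h3>
<h2>Second subtopic</h2>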
-
RE: Is using a subheading to introduce a section before the main heading bad for SEO?
You will also find that you fail some accessibility standards (WCAG) if your heading structure tags are out of sequence. As GPainter pointed out, you really want to avoid styling your heading structure tags explicitly in your CSS if you want to be able to style them differently in different usage scenarios.
Of course, for your pre-headings, you can just omit the structure tag entirely. You don't need all your important keywords to be contained in structure tags.
You'll want, ideally, just one H1 tag on the page and your most important keyword (or semantically related keywords) in that tag. If you can organize the structure of your page with lower-level heading tags after that, great. It does help accessibility too, just note that you shouldn't break the hierarchy by going out of sequence. But it's not a necessity to have multiple levels of heading tags after the h1.
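As a minimal sketch of the pre-heading idea (the class name and copy are hypothetical), the eyebrow text can be plain styled markup while the H1 carries the primary keyword:

<p class="pre-heading">Our Services</p>
<h1>Emergency plumbing repair in Springfield</h1>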
-
RE: Are Old Backlinks Hurting Or Helping?
There are quite a few factors to consider. Let's start with why redirecting obsolete products can possibly hurt ranking. Most likely the scenario to which the SEO Expert was referring is that search engines may see a large volume of returns to the search results page, followed by clicking other results, as an indication of a result which does not meet users' needs.
In the case described above, it is possible for such redirects to "hurt", but only in the context of a specific query. For example, let's say you used to stock footballs. And now you no longer carry any of those. And you've redirected your old football product pages to a sporting goods category page, but where no footballs can be found. Then, yes, this might cause search visitors to return to the SERP, and so it could potentially hurt. But if so, then it would be hurting for a search for "footballs". Because if the visitors were searching for something you DO have in stock, then after being redirected they wouldn't immediately return to the SERP. And so, if you no longer stock footballs, why would you worry about losing rank for the search query "footballs"?
Now, with that same example, let's say that some of your football pages also ranked very highly for "gifts for athletes". And you redirect to a relevant category. Those visitors are more likely to convert. Same landing page, different search query.
For this reason, in general, it's usually a good idea to redirect the old products to relevant category pages, if you have any. There are exceptions, of course. So, the trick is to look at what kinds of terms those specific pages previously ranked for, and whether there is still value in ranking for any of those specific terms. Remember, you don't own an overall "rank" on Google. You rank for specific queries, independently.
-
RE: Will I get penalised from an SEO perspective for having redirects.
@rodrigor777 That is a common approach, and not cause for any kind of penalty. If these domains never really existed as pages on the web with inbound links, then the simple redirect to the desired domain's home page may be fine. If they did exist, and if there are pages indexed from these other domains, then you will want to redirect appropriately for each page, rather than pointing all to the home page.
-
RE: Brand Name Importance in SERPS
Although the impact of keyword match in domain names isn't as high as it once was, my current experience is that it still is a very significant ranking factor. I've recently (last year, and also about 4 years ago) completed two domain name changes, and the effect on searches where the query term is/was matched in the domain name was definitely noticeable. That said, after an initial "honeymoon" period, you're likely going to see some negative ranking impact of a domain name change, regardless of the specific domain names. My recent experience has been that things get crazy for a week or so, then look really good for 1-3 months, then the negative impact hits, and then it takes quite a while (sometimes more than a year) to get everything back to where it was. So, if you do change domain names, it needs to be seen as a long-term strategy, not a "this year" one.
-
RE: Discontinued products on ecommerce store
Yes, I meant 301, server-side redirects.
Regarding performance, I currently have a little over 50,000 entries in my redirects file with no discernible performance impact. But, different platforms handle this differently, and we also have a CDN which caches redirects, so that could make a difference. I guess the safest approach would be to insert a hundred thousand or more dummy redirect entries into your redirect file, temporarily, and stress test it.
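If it helps, a rough sketch of what those temporary dummy entries might look like, assuming an Apache-style redirects file (the paths are hypothetical; adapt the syntax to whatever your platform uses, and remove the entries after testing):

Redirect 301 /load-test-dummy-000001/ /
Redirect 301 /load-test-dummy-000002/ /
Redirect 301 /load-test-dummy-000003/ /
(...and so on, up to the volume you want to test)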
-
RE: Discontinued products on ecommerce store
The approach of redirecting with an informative message is potentially a good one. I have not implemented nor seen this done. If you go this route, make sure it is a true server redirect, with a 301 response code. But I could see how the redirect could include a query param in the destination URL, which could then be used to display a fairly generic message.
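As a sketch of what I mean, assuming an Apache-style redirect (the URLs and the query param name are hypothetical):

Redirect 301 /products/old-widget https://www.example.com/category/widgets/?notice=discontinued

The category page template could then check for the "notice" param and show a generic "that product has been discontinued, here are similar items" message.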
As far as better vs. worse, from my perspective that differs depending on the nature of the products. One good use case for keeping the old product page around would be like a consumer electronics product page which contained technical info or resources which would be hard to find otherwise (but an alternative could be to have a support library for that). Another example, when I was on the agency-side, I worked with an apparel brand which each season introduced and retired thematic prints. And they kept a library of retired prints, which visitors could upvote to try to get them returned into service.
You wrote in your OP that these pages are zero/low traffic, with few backlinks. So, I'm inferring that not many visitors will actually encounter whichever experience you choose.
But the reason to redirect to the category page is to preserve any link equity the product pages might have built up over time. Again, even if each product has very few backlinks, if you add them all up and redirect them to a parent category page, that could make a difference in how that category page ranks, provided you can accomplish this without confusing real visitors (if any).
To your last point, yes it's possible that the search engine might consider some of these redirects to be "soft 404s". In which case, the link equity wouldn't be preserved because it would be treated like a 404. But, that's exactly what you're proposing to do anyway. So, if even just some of them get treated as 301s, you're ahead of the game, as I see it.
-
RE: How important is Lighthouse page speed measurement?
My understanding is that "Page Experience" signals (including the new "Core Web Vitals") will be combined with existing signals like mobile friendliness and HTTPS security in May 2021. This is according to announcements by Google.
https://developers.google.com/search/blog/2020/05/evaluating-page-experience
https://developers.google.com/search/blog/2020/11/timing-for-page-experience
So, these will be search signals, but there are lots of other very important search signals which can outweigh these. Even if a page on John Deere's site doesn't pass the Core Web Vitals criteria, it is still likely to rank highly for "garden tractors".
If you are looking at Lighthouse, I would point out a few things:
- The Lighthouse audits on your own local machine are going to differ from those run on hosted servers like Page Speed Insights. And those will differ from "field data" from the Chrome UX Report
- In the end, it's the "field data" that will be used for the Page Experience validation, according to Google. But, lab-based tools are very helpful to get immediate feedback, rather than waiting 28 days or more for field data.
- If your concern is solely about the impact on search rankings, then it makes sense to pay attention specifically to the 3 scores being considered as part of CWV (CLS, FID, LCP)
- But also realize that while you are improving scores for criteria which will be validated for search signals, you're also likely improving the user experience. Taking CLS as an example, for sure users are frustrated when they attempt to click a button and end up clicking something else instead because of a layout shift. And frustrated users generally equals lower conversion rates. So, by focusing on improvements in measures like these (I do realize your question about large images doesn't necessarily pertain specifically to CLS), you are optimizing both for search ranking and for conversions.
-
RE: Discontinued products on ecommerce store
If you are keeping them, rather than redirecting them, I assume that means you have a reason for people to be able to find those pages so that they can get some information about the discontinued product, or at least understand that it was discontinued. If that's the case, then I don't think you would want to noindex or 404 them. On the other hand, if there is no reason for those pages to still exist from a visitor standpoint, then usually I would redirect them to a category page (generally the parent category the product belonged to) to preserve any link equity, even if the number of links is low. Especially if you have a lot of discontinued products from a category: even if each product had, let's say, on average 0.1 links, then if you have 1,000 of those pages you would end up with 100 backlinks to your category page, which could be valuable. Again, this is assuming that you don't want/need to preserve the pages for your users to be able to find the info.
-
RE: Should We Wait To Launch a Redesigned Site After Google's Core Web Vitals & Page Experience Algo Update
I don't believe there is any reason to wait for an algo update. Especially if your new site has improvements which could help your CWV scores. Google states that they will be using "field data" (from real users, not bots) over a 28-day period to assess CWV. So, if your new site is going to score better, you would want to build up those scores now. That said, if your new site is going to score worse than your current one, you might do well to fix it prior to launching it. There are plenty of tools (both lab data-based and field data-based) to assess your old and new pages. Page Speed Insights is helpful for public-facing pages. Whereas for not-yet-public pages, you might need to resort to using the Audits tab of Chrome Dev Tools, or other tools which allow for authentication, etc.
-
RE: Hlp with site setup
To my understanding, a redirect and a canonical are treated very similarly from an SEO standpoint. With either of these, only the end URL (either the one to which you are redirecting, or the one linked in the canonical reference) is the one which, if all directives are honored, gets indexed. So, unless I'm missing something, there is no benefit at all of having the category paths in the URLs if you are either redirecting from those to the flat one, or if you are pointing a canonical to the flat one. The benefit would be there if those keywords were in the final URL (redirected or canonical). But if the final URL is flat, then I don't think you get any benefits from the non-canonical URLs having keywords in their paths. So, if the flat URL is the final one, from either method, I would ensure that the "product name" is fully descriptive with the desired keywords.
-
RE: Hlp with site setup
The benefit of the directory paths approach is the additional keywords, if your product name (or ID) is not in itself descriptive enough. For example, if you have a sofa style named "Diana", you wouldn't want your URL to be domainname.com/diana.html. Something like domainname.com/furniture/sofas/diana.html would be better.
But, you can accomplish that with more descriptive product IDs. So, in the example above, if you could make your product name "furniture-sofas-diana", then your URL would be domainname.com/furniture-sofas-diana.html, which accomplishes the same keyword targeting.
And then that solves the issue of when products are in multiple categories, since it's a flat URL regardless of how the visitor arrived to the page.
But if your products are really almost entirely in a single category each (keeping in mind temporary categories like "sale", "new", etc.), and they will be that way forever, then there is an argument to be made for the paths. Because it does help the search engine to parse the structure of your site, and to provide nice breadcrumbs on your listings.
This is really a perennial debate. And there's no one answer. For most of us, we do have to live with products being in multiple categories, as the norm (especially when considering categories like sale, new, best sellers, etc.). Canonical reference links help this issue, but aren't necessarily ideal.
But, what really struck me in your question was that you said the URL changes when you click on the product. Ideally, you don't want all your internal links to be redirects. That's something I would try to avoid.
-
RE: How can i check which inbound links to my site go to 404 pages?
Nick,
I went to that wiki page, and clicked through the link. While the page does redirect to a page which contains content stating "404 Page Not Found", in actuality that page is giving a 200 response, not a 404. In order for any broken link reports to work, the page would have to actually return a 4xx response code (404, 410, etc.).
Here is the redirect path log from the Ayima plugin:
Status Code | URL | IP | Page Type | Redirect Type | Redirect URL
301 | http://www.africatravelresource.com/africa/tanzania/c/zanzibar/nungwi/ | 104.26.7.235 | server_redirect | permanent | https://africatravelresource.com/africa/tanzania/c/zanzibar/nungwi/
200 | https://africatravelresource.com/africa/tanzania/c/zanzibar/nungwi/ | 104.26.6.235 | normal | none | none
-
Reducing cumulative layout shift for responsive images - core web vitals
In preparation for Core Web Vitals becoming a ranking factor in May 2021, we are making efforts to reduce our Cumulative Layout Shift (CLS) on pages where the shift is being caused by images loading. The general recommendation is to specify both height and width attributes in the HTML, in addition to the CSS formatting which is applied when the images load. However, this is problematic in situations where responsive images are being used with different aspect ratios for mobile vs. desktop, and where a CMS is being used to manage the pages, since the width and height may change each time new images are used, as may the aspect ratios for the mobile and desktop versions of those images.
So, I'm posting this inquiry here to see what kinds of approaches others are taking to reduce CLS in these situations (where responsive images are used, with differing aspect ratios for desktop and mobile, and where a CMS allows the business users to utilize any dimension of images they desire).
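For reference, one pattern under consideration here (just a sketch; the file names and dimensions are hypothetical) is to have the CMS emit the intrinsic width/height of each rendition on both the source and the img, so the browser can reserve the correct space per breakpoint. Note that browser support for width/height attributes on the source element is relatively recent:

<picture>
  <source media="(max-width: 767px)" srcset="hero-mobile.jpg" width="800" height="1000">
  <img src="hero-desktop.jpg" width="1600" height="600" alt="Hero banner" style="width:100%; height:auto;">
</picture>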
-
RE: Is a page with links to all posts okay?
Depending on how many pages you have, you may eventually hit a limit to the number of links Google will crawl from one page. The usual recommendation is to have no more than 150 links, if you want all of them to be followed. That also includes links in your site navigation, header, footer, etc. (even if those are the same on every page). So, at that point, you might want to make that main index page into an index of indices, where it links to a few sub-pages, perhaps by topic or by date range.
-
RE: How can i check which inbound links to my site go to 404 pages?
In Moz, within your campaign you can go to Links > Top Pages, and then choose the filter "4xx".
-
RE: Web Core Vitals and Page Speed Insights Not Matching Scores
To my understanding, GSC is reporting based on "field data" (meaning the aggregate score of visitors to a specific page over a 28 day period). When you run Page Speed Insights, you can see both Field Data and "lab data". The lab data is your specific run. There are quite a few reasons why field data and lab data may not match. One reason is that changes have been made to the page, which are reflected in the lab data, but will not be reflected in the field data until the next month's set is available. Another reason is that the lab device doesn't run at the exact same specs as the real users in the field data.
The way I look at it is that I use the lab data (and I screenshot my results over time, or use other Lighthouse-based tools like GTmetrix, with an account) to assess incremental changes. But the goal is to eventually get the field data (representative of the actual visitors) improved, especially since that appears to be what will be used in the ranking signals, as best I can tell.
-
RE: Advice needed on canonical paginated pages
A few bits of feedback:
- I wouldn't be so quick to dismiss Yoast. For Wordpress sites, it's really a pretty good plugin and takes care of a lot of SEO basics.
- Google used to have a specific solution for paginated series, the REL=PREV/NEXT, but that was deprecated about two years ago. Their official advice (albeit through Twitter) was to either treat each page as "standalone" (i.e. self-referencing canonical) or else to include a "view all" type of page with all content accessible without pagination.
- Ideally, when possible, a great solution is to make sure you have enough separate blog "categories" (can be by topic, for example) that each page has all its articles accessible without pagination and then concentrate on getting each blog category page indexed for its appropriate keywords.
- Otherwise, self-referencing canonicals are OK. The main thing is you want each blog article to be discovered, crawled and indexed. So, you don't want to do anything to prevent the discovery of each article, even if that means that several blog article listing pages end up getting indexed. With this approach, you might even still want to keep (or implement) the rel=prev/next, so that other search engines can use it, and/or for accessibility. Yoast might still be useful for this, as would be some other options like WP-PageNavi.
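As a sketch of the "standalone" option on a paginated listing page (URLs hypothetical), each page self-canonicalizes, and the rel=prev/next links can be kept for other engines and accessibility even though Google no longer uses them:

On https://www.example.com/blog/page/2/:
<link rel="canonical" href="https://www.example.com/blog/page/2/">
<link rel="prev" href="https://www.example.com/blog/">
<link rel="next" href="https://www.example.com/blog/page/3/">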
-
RE: Should I canonicalize URLs with no query params even though query params are always automatically appended?
I would recommend canonicalizing these to a version of the page without query strings, IF you are not trying to optimize different versions of the page for different keyword searches, and/or if the content doesn't change in a way which is significant for purposes of SERP targeting. From what you described, I think those are the case, and so I would canonicalize to a version without the query strings.
An example where you would NOT want to do that would be on an ecommerce site where you have a URL like www.example.com/product-detail.jsp?pid=1234. Here, the query string is highly relevant and each variation should be indexed uniquely for different keywords, assuming the values of "pid" each represent unique products. Another example would be a site of state-by-state info pages like www.example.com/locations?state=WA. Once again, this is an example where the query strings are relevant, and should be part of the canonical.
But, in any case a canonical should still be used, to remove extraneous query strings, even in the cases above. For example, in addition to the "pid" or "state" query strings, you might also find links which add tracking data like "utm_source", etc. And you want to make sure to canonicalize just to the level of the page which you want in the search engine's index.
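To illustrate with the ecommerce example above (hypothetical URLs), the canonical keeps the meaningful "pid" param but drops the tracking ones:

Page as linked: https://www.example.com/product-detail.jsp?pid=1234&utm_source=newsletter&utm_medium=email
<link rel="canonical" href="https://www.example.com/product-detail.jsp?pid=1234">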
You wrote that the query strings and page content vary based on years and quarters. If we assume that you aren't trying to target search terms with the year and quarter in them, then I would canonicalize to the URL without those strings (or to a default set). But if you are trying to target searches for different years and quarters in the user's search phrase, then not only would you include those in the canonical URL, but you would also need to vary enough page content (meta data, title, and on-page content) to avoid being flagged as duplicates.
-
RE: Can i do Partial Multilang for same country but different language ? If yes then how ?
Yes. As I understand it, hreflang is a page-specific directive. Each page can have different hreflang tags, and this is a very common approach. You could also just tag those single-language pages with a self-referencing hreflang tag (EN) and x-default. Those two tags could be on all your US pages, and then you add additional tags to the pages which have siblings in other languages/countries.
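A minimal sketch of what that could look like on a US-only English page (the URLs are hypothetical):

<link rel="alternate" hreflang="en" href="https://www.example.com/some-page/">
<link rel="alternate" hreflang="x-default" href="https://www.example.com/some-page/">

And then on a page that does have a sibling in another language, add the extra tag(s):
<link rel="alternate" hreflang="es" href="https://www.example.com/es/some-page/">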
-
RE: Snippet showed in google search is not from metaDescription
Zack,
My comments were specifically regarding meta descriptions, and the original poster's efforts to make sure their meta description would be used in the SERP snippets shown for various searches. Since a page might rank for many different keywords, not all of those can be in the meta description, and so for some searches, Google is going to instead show page content rather than the meta description. And I don't think this is necessarily a bad thing. For example, if you have a 10% CTR with your own meta description, and a 12% CTR with the one Google is showing, why would you want to try to influence Google to show something else?
So, that's why from a prioritization standpoint, I advised to focus meta description work (and only meta description work) on SERP listings with underperforming CTR. And then, within those, to focus first on listings with more impressions, just because those will have the most amplification of the efforts.
But for other SEO efforts, especially those where ranking factors are concerned (as opposed to meta descriptions which are not ranking factors, at least not directly), then a CTR and impression-based prioritization wouldn't necessarily make sense.
The approach you described seems very thorough and legit for keyword research. But then, as far as what to do with those keywords (i.e. whether to update meta descriptions), that's what I was focusing on with the OP's questions.
-
RE: Snippet showed in google search is not from metaDescription
You can't 100% control the snippet. Google owns the SERP experience, and in some cases their algorithm determines that content from your page will be a better snippet to show than your meta description. But in general, if your meta description contains the keyword being searched, and an appropriate length of content surrounding that, the chance that the meta description is used for the snippet is higher. Whether that will translate to a higher CTR though is not always the case, and since you can't include every possible search term in your meta descriptions, most of us prefer to focus on ones where we are getting a good number of impressions but not a great CTR. It's a prioritization thing.
-
RE: Do I need multiple 301s to preserve SEO
Yes, you generally should set up both your version 1 URLs and your version 2 URLs to redirect to your version 3 URLs. That's assuming that there are still some backlinks out there pointing to your version 1 URLs.
But the best way to do this is to eliminate the hops. So, instead of having a redirect from version 1 to version 2 to version 3, you would update the oldest ones so they go from version 1 directly to version 3, and version 2 would also go directly to version 3. That way you reduce your "redirect chains".
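As a sketch, assuming an Apache-style redirect file (the paths are hypothetical), instead of chaining v1 -> v2 -> v3 you would have:

Redirect 301 /version-1-url/ /version-3-url/
Redirect 301 /version-2-url/ /version-3-url/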
-
RE: Why Google Is Changing our Title Tags?
It looks to me like you have not placed your title tag before your CSS includes. Below is from the page source of your first link, as of today. Please notice how there is a stylesheet include, a very large one, before your title tag. I don't know for sure that this is the problem, but I would try placing your title tag before that stylesheet reference.
<html lang=en-US><meta charset="UTF-8"><meta name="viewport" content="width=device-width, initial-scale=1"><link media=all href=https://drliminghaan.com/wp-content/cache/autoptimize/css/autoptimize_de91ffe89d4bc56be7552d25cabc178f.css rel=stylesheet><title>Heart Specialist Clinic Singapore | Cardiology Clinic | Dr. Lim Ing Haan</title>
-
RE: Snippet showed in google search is not from metaDescription
I wouldn't necessarily target the keywords for which you are already getting traffic. The purpose of your snippet is to help your listing stand out from the others around it, so it gets clicks. I would look in GSC for queries where you are getting impressions, and then check how your snippet looks for those, especially as compared with the other listings around yours. As well, I would also look for queries where you have a significant number of impressions, but not a good click through rate. That's another indication that your meta descriptions (or page content being pulled into the snippet) aren't compelling enough. You don't necessarily need for your meta description to be used as the snippet, but you do want your snippet to be compelling so it gets as high of a CTR as possible.
-
RE: To remove or not remove a redirected page from index
Zack,
All is good with this now. Here's how it actually went:
On January 6, I used GSC to request the cache be removed
On January 7, the page still showed for keyword searches, with the page title, but without any snippet.
On January 7 I requested a re-index.
From January 7-10, the page still showed in search results with only a title
On January 11, the redirected page finally shows in the keyword search results, instead of the original page (desired end result).
So, in summary, it was fairly quick (4-5 days), and in the end the redirected page took the place of the original page, and in the interim the original page showed up with its title but with no snippet.
-
RE: To remove or not remove a redirected page from index
Thanks Zack. That option is a bit hidden, and I hadn't noticed it. I'm trying now to see what just clearing the cached URL until the next recrawl might do. I'm kind of curious what it will do with the page title, which also currently has details about the offer.
-
To remove or not remove a redirected page from index
We have a promotion landing page which earned some valuable inbound links. Now that the promotion is over, we have redirected this page to a current "evergreen" page. But in the search results page on Google, the original promotion landing page is still showing as a top result. When clicked, it properly redirects to the newer evergreen page. But, it's a bit problematic for the original promo page to show in the search results because the snippet mentions specifics of the promo which is no longer active. So, I'm wondering what would be the net impact of using the "removal request" tool for the original page in GSC.
If we don't use that tool, what kind of timing might we expect before the original page drops out of the results in favor of the new redirected page?
And if we do use the removal tool on the original page, will that negate what we are attempting to do by redirecting to the new page, with regard to preserving inbound link equity?
-
RE: Why Google Is Changing our Title Tags?
I'm not sure why the search engine is morphing the page title. One thing I would recommend trying would be to move the title tag up to a position in the page source at the very beginning of the head section. It is currently after the CSS source link, and specifying it earlier might have an impact. Worth a try, I think. But I'm not certain that is the issue.
-
RE: 301 redirects
From my experience, there is not an issue with too many redirects in parallel. However, too many redirects in sequence, or a "redirect chain" definitely is problematic. If a page redirects too many times, the search engine might not follow until the very end.
With regard to redirects in general: on occasion, Google has stated that some small amount of rank equity is lost with each redirect. So, we do tend to assume that a direct link is preferred over a redirected link. When we restructure our URLs, we generally clean up all the internal linking to point directly at the new links, rather than rely on redirects. With internal links, that's feasible. With external links, while you can reach out and request link updates, you do also need to rely on redirects.
-
RE: Snippet showed in google search is not from metaDescription
The meta description is not always used for snippets, depending on what keywords/phrases are searched. When the search term is not in the meta description, but is on the page (and "page" does include elements such as breadcrumbs, top navigation menus, etc.), the search engines will often show the verbiage on the "page" surrounding the search terms. But also, your meta description is pretty lengthy, so you might have better luck with a description in the 100-150 character range.
-
RE: Inconsistency between content and structured data markup
This is what they say, explicitly: https://developers.google.com/search/docs/guides/sd-policies. Specifically, see the "Quality Guidelines > Content" section.
In terms of actual penalties, ranking influence, or marking pages as spam, I can't say from experience, as I've never knowingly used markup inconsistent with the information visible on the page.
-
RE: Do you think this case would be of a duplicated content and what would be the consequences in such case?
To my understanding, no. Duplicate content generally occurs when two or more pages have very similar content, and the search engine decides that only one of those should be indexed (after crawling each). The concept of "duplicate content" as a "penalty" is, I think, much misunderstood. There's not actually a penalty, like one would see for a manual action due to spam or hacked content. Rather, if those multiple pages really needed to be separately indexed, then the "penalty" is that only one will be indexed. And, if you don't make smart use of techniques like canonicalization, then the one the search engine chooses to index may not be the one you want indexed.
But anyway, none of that really applies in the case you've put forward. As best I can tell from very quickly looking at your site, the pages are unique, and yes, multiple pages might include the same thumbnail(s). But as long as they also include enough unique content besides the thumbnails (and/or don't include exactly the same set of thumbnails as another page), they are likely to be treated as unique pages by the search engine and each indexed.
But, you don't need to guess. Just look to see which of your site's pages are being indexed. If all the pages you want indexed are being indexed, and none that you don't want indexed, then you don't have a duplicate content issue to worry about.
-
RE: SEO Title Length
I would actually address this not as a title tag issue, but as an indexation/canonical issue. Sometimes, the issues which show up in a Moz crawl are actually helpful to identify unrelated issues, and to me, this seems like one of those. Ideally, we typically only want page 1 of our category pages to be indexed, BUT we want all the category page depth crawled (followed). There are several ways to do this. The traditional way has historically been to use rel=prev/next tagging, but there has been some discussion (and acknowledgement from Google) that this is no longer supported. Another way is with canonical tagging to a "view all" page (it is generally not recommended to canonicalize to your page 1, but rather to an "all results" page). And another option is to do nothing and hope the search engines will serve the right page for the right queries. I would research GSC, looking at queries, impressions, and clicks, to see if the indexation issue is something needing attention, and if so I would probably go with some kind of canonicalization approach.
-
RE: Help with international targeting
If I understand the OP's intent, it is to target countries, not languages. Hreflang can specify alternates for a language, or a language-country combination, but unfortunately not just for a country. So, as the OP has proposed, yes, you do need to specify the language and the country. And that does bring up a dilemma faced by many of us in terms of what language to use. If your content is all in English, then yes, you should use something like "en-FR". BUT, you might also want to include an "fr-FR" as well, pointing to the same alternate URL, because there are going to be a lot more France-based visitors on Google whose browser settings are for French language than English. For sure, both do exist (there are native English speakers in France too), but you don't have to choose one; you can include both. Google may not completely respect your directives since the content is in English (assuming that's the case), but it's what I would recommend. So, for each country (assuming the content is in English), include both an English and a language-specific hreflang tag (pointing to the same destination) for that country.
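As a sketch for the France example (the URL is hypothetical), both tags would point to the same alternate page:

<link rel="alternate" hreflang="en-fr" href="https://www.example.com/fr/">
<link rel="alternate" hreflang="fr-fr" href="https://www.example.com/fr/">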
Since your last example uses "es-ES", I assume maybe that you're planning to also publish some content in Spanish language. But if not, again, realize you can include multiple hreflang tags for a single country, and pointing to the same page.
I also don't know where you are based. But if the business is US-based, I wouldn't duplicate US also as a localization. Rather, I would make that the default. Or, if you are based somewhere else, same thing, but with that country.
On question 2, you can set up a GSC property for folder paths (www.example.com/fr/), and target those. I would not target the root level (www.example.com) in your case, because that would also apply to all the subfolders. That's one of the advantages of using subdomains instead of subfolders: you can target each independently. But with subfolders, you can target all except the root (because the root's targeting would cascade downward).
On question 3, you should do the same as you do in number 1, as long as you are duplicating those pages in each subfolder. Otherwise, if you don't give a directive of which page to index, since they are duplicates, Google is going to choose for you. And might not choose the one you prefer.
-
RE: Advice needed; Scrap mature .co.uk and move to .com, or run two separate domains?
There are definitely pros and cons of both approaches. I will say that as someone who currently manages 5 different regional variations of a site, no matter how well we implement our hreflang tags and canonical tags, etc. the search engines still don't index us exactly the way we want them to. That has been my experience elsewhere too. In theory, you should be able to maintain two very similar, or even identical sites, and as long as you properly implement hreflang and canonicals, you should be able to target one site to US and the other to UK. But in practice, I've found that even with these things well implemented, you end up with some pages from each site being served as results in the wrong country.
So, I think it depends on how significant that impact is to you. In the case of the sites I currently manage, it is very significant as our product assortment, inventory levels, fulfillment options, etc. are different in each region. So, when a customer clicks through to the wrong region's page, and doesn't follow the suggested geo-ip-based guidance we provide when we detect that, they have a bad experience and we lose customers.
In the case of the sites I manage, it is not an option to consolidate into a single site/page across these regions, because we operate independently in each country with different products, inventory, prices, etc.
If that's the case for you as well, then I would recommend to do your best at properly implementing hreflang and canonicals, and just recognize that the search engines won't perfectly respect your directives for indexation. But if, on the other hand, your business could serve the same content to both regions, you would avoid those indexation pitfalls by consolidating on one domain.
In your case, there is also another factor, which is the perception (in both markets) of the .co.uk domain as being local for UK. So, that helps you in the UK, and hurts you in the US. The .com domain will be better in the US, and not necessarily problematic in the UK, but also would lose some of the local credibility that your .co.uk domain has there. So, that's another consideration.
-
RE: How to deal with product mark up automatically generated by Yoast?
The SKU property doesn't need to be an official/registered identifier. It can be anything the merchant chooses. It should ideally be unique in your product catalog, but doesn't need to be unique across merchants. So, you can map to it whatever product ID your catalog has assigned. Can even be a simple database row number if needed.
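For illustration, a minimal sketch of Product markup where the sku is just an internal catalog ID (all values are hypothetical):

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Widget",
  "sku": "ROW-10452",
  "offers": {
    "@type": "Offer",
    "price": "19.99",
    "priceCurrency": "USD",
    "availability": "https://schema.org/InStock"
  }
}
</script>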
-
RE: Duplicate, submitted URL not selected as canonical
Hi Eric. I took a look at your two pages. When I look at the page source (not with "inspect", but with "view page source"), I see that all of the content on your page is injected via javascript. There is almost no html for the page. To me, this looks like for whatever reason, Google isn't able to execute and parse the content being injected by javascript, and so when it crawls just the html, it is seeing the two pages as duplicate because the body of the content (in html page source) is mostly identical.
That does raise a question of why Google isn't able to parse the content of the scripts. Historically, Google just didn't execute the scripts. Now it does, but they acknowledge that content injected by scripts may not always be indexed. As well, if scripts take too long to execute for the bot, then again, the content may not be indexed.
My recommendation would be to find some ways to have some unique html per page (not just the script content).
-
RE: How to optimise 2 (almost) identical ecommerce pages
One consideration is how you plan to acquire links and gain relevance. In many cases, one page is a better strategy because you will have twice as many links to the one page, in contrast to splitting half to one and half to the other. For example, you might find better results ranking #1 for at least one of the terms than #5 for both terms. Also, you should be able to rank one page for multiple terms. For example, if you can include both terms in the URL itself, and in the title tag, and within context on the page. There's not a definitive answer to your question, but I would say in general I would prefer one very strongly ranked page than 2 weaker ones (and splitting your product into 2 will usually result in two weaker ones, from an inbound links standpoint).
-
RE: Gradual roll out of new webpages on temporary subdomain
I'm not sure how the search engines look at 302 redirects which are in place for a prolonged time. I'll be interested to see if anyone else on this thread has additional insights about that. What I can say is that I've used 302 redirects in some cases for a prolonged period of time (although not as long as 12-18 months, perhaps more like 4-6 months) and have not experienced issues from that approach. But others on this forum may have more experience with 302 redirects over that period of time.
The other thing I'll mention is that some tools like Moz Pro will report 302 redirects as "issues". My perspective is to look through these, because some might be unintentional, and then to ignore them when they are intentional/strategic.
-
RE: Gradual roll out of new webpages on temporary subdomain
Unless I'm missing something in the thread here, it seems to me this would be better served by 302. My rationale is that you will be eventually going back to the www URL and you want that to retain the full equity of all its links. So, during the interim period, you would have 302 redirects, and then when you switch back to www, you would simply remove all the redirects.
The only downside I see to that is that during the interim period, one thing you won't be able to measure as the site is gradually updated, is the incremental impact of the new page designs on SEO. You will still be able to measure the new page design in terms of conversion rate and other UX factors, but measuring impact on SEO wouldn't really be feasible.
-
RE: Canonical for multi store
This is because Moz hasn't updated their crawling tool to consider hreflang in the equation of reporting "duplicates". They've acknowledged that. They might update it in the future. But for now, you just have to ignore pages being reported as duplicate if you know that they are properly linked by hreflang to distinguish countries or languages.
Self-referencing canonical tags are a best practice, and will give an important correct signal to the search engines, which is more important than cleaning up reported warnings in the Moz crawl.
-
RE: Canonical for multi store
You should stick with two different canonicals. Self-referencing in each case. And use hreflang tags to link the country-specific variations together.
Pointing both pages to one single canonical is telling the search engine to only index one of those pages.
The self-referencing canonical in this case is simply to deal with variations of the base URL, like in case it has query strings, or http vs. https, or www vs not, etc.
Where you would want to point two different pages to one canonical is when you only want one of those pages to be indexed. If the content is duplicate, the search engine would likely make that choice for you. So, including a canonical lets you give a directive to the search engine, instead of deferring to it on the choice of which.
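A minimal sketch for a two-store setup (the URLs and locales are hypothetical), with each page self-canonicalizing and the hreflang pair repeated on both pages:

On https://www.example.com/uk/product-a/:
<link rel="canonical" href="https://www.example.com/uk/product-a/">
<link rel="alternate" hreflang="en-gb" href="https://www.example.com/uk/product-a/">
<link rel="alternate" hreflang="en-us" href="https://www.example.com/us/product-a/">

On https://www.example.com/us/product-a/: the same two hreflang tags, with the canonical pointing to the US URL instead.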
-
RE: Traffic drop after hreflang tags added
Yes, that looks correct now. And in your specific case, x-default might indeed handle the rest since Europe is your default, and that's where the unspecified combinations are most likely to be for you.
I wouldn't be too concerned about site speed. These are just links. They don't load any resources or execute any scripts. For most intents, it's similar to text. The main difference is that they will be links that may be followed by the bots. But really, even though you'll have many lines, you only really have two actual links among them. So, I wouldn't be too concerned about this part.
Good luck.
-
RE: Traffic drop after hreflang tags added
Moon boots. It looks like you decided to target by language, rather than by country-language combinations. And that is acceptable. It has a few issues; for example, if you target by FR you are going to send both France-based and Canadian French speakers to your Europe site (and I don't think you want to do this). On the other hand, if you were instead thinking that you were specifying the country code, no, the code you pasted here does not do that. Per the hreflang spec, you can specify a language code without a country code, but you cannot specify a country code without a language code. All of the hreflang values you used will be interpreted as languages, not countries. So, for example, CA will be interpreted as Catalan, not Canada.
Again, I know it's a giant pain to handle all the EU countries. All of us wish Google made it feasible to target Europe as an entity, or at least to target by country alone. But it's just not the case, yet. So, the way we do this is generally with application code. Ideally, in your case, I would suggest for that code to generate, for each country, one entry for English in that country (like "en-DE"), and another entry for the most common language in the country (like "de-DE"). That will generate many entries, but it's the only way I know of to effectively target Europe with an English-language site.
-
RE: Traffic drop after hreflang tags added
moon-boots. Pretty close now. You should add the x-default to each site too, and they should be identical (whichever one of your sites you want to present for any locales you've omitted).
But also, realize that "en-it" is a pretty fringe locale. Google would only propose this to a search visitor from Italy who happened to have preferences set for English in their browser. While there are plenty of people in Italy who do speak English, there are far fewer who set their browser to "en".
I have the same issue in Europe. Germany is one of our largest markets. I initially targeted, like you've done, just English in each country. We previously (a year ago) had a German-language site, and that one we targeted to "de-de". When we stopped maintaining the German-language site, we changed our hreflang tag to "en-de". We quickly found that all of our rankings dropped off a cliff in Germany. I would recommend, at least for your largest addressable markets, to also include hreflang tags for the primary languages. This is another thing which Google hasn't yet made easy. They allow you to target by language without country, but not by country without language, at least in hreflang (which was really developed for language targeting). GSC (the legacy version) had country-level targeting there.
Lastly, you included URLs for your home page here. But I'm assuming you realize you need to make the tags page-specific, on every page. If you put these tags as-is on every page, then you would be sending a signal to Google equivalent to pointing every one of your site pages to a canonical of your home page (and effectively de-indexing the remainder of your site's pages). I'm assuming you're just using the home page as an example in your posts. But if not, then yes, you will need to do page-specific tags for each page (and the self-referencing ones need to match your canonical tag for the page).
-
RE: Traffic drop after hreflang tags added
So, that's exactly why I wrote that you should include all the EU countries as specified locales, pointing to the EU site. Only everything "unspecified" goes to x-default. Alternatively, you could point AU, CA, NZ to the US site, and make x-default point to your EU site. I don't think that is as good of an approach though. Like I said, everyone who has a EU site has this issue. It's a pain that EU isn't a valid "locale" for hreflang. Maybe something will eventually be in place to handle better. In the interim, we can add hreflang for all the EU countries (or just prioritize the markets you really serve).