I came across this SERP Feature in a search today on a mobile device. It does not show for the same search query on desktop. What do we know about this "Shops" SERP feature?
seoelevated (@seoelevated)
Job Title: E-Commerce Director
Company: n/a
Favorite Thing about SEO: huge community of smart people with all kinds of theories and learnings to share
Latest posts made by seoelevated
-
What do we know about the "Shops" SERP Feature?
-
RE: What happens to crawled URLs subsequently blocked by robots.txt?
@aspenfasteners To my understanding, disallowing a page or folder in robots.txt does not remove pages from Google's index. It merely gives a directive to not crawl those pages/folders. In fact, when pages are accidentally indexed and one wants to remove them from the index, it is important to actually NOT disallow them in robots.txt, so that Google can crawl those pages and discover the meta NOINDEX tags on the pages. The meta NOINDEX tags are the directive to remove a page from the index, or to not index it in the first place. This is different than a robots.txt directive, which is intended to allow or disallow crawling. Crawling does not equal indexing.
So, you could keep the pages indexable, and simply block them in your robots.txt file, if you want. If they've already been indexed, they should not disappear quickly (they might, over time though). BUT if they haven't been indexed yet, this would prevent them from being discovered.
All of that said, from reading your notes, I don't think any of this is warranted. The speed at which Google discovers pages on a website is very fast. And existing indexed pages shouldn't really get in the way of new discovery. In fact, they might help the category pages be discovered, if they contain links to the categories.
I would create a categories sitemap XML file, reference it in your robots.txt, and let that do the work of prioritizing the categories for crawling/discovery and indexation.
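To make that last point concrete, here is a minimal sketch (the domain, paths, and file name are placeholders, not from the original question):

robots.txt:
User-agent: *
Sitemap: https://www.example.com/sitemap-categories.xml

sitemap-categories.xml:
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
<url>
<loc>https://www.example.com/category/widgets</loc>
</url>
</urlset>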
-
RE: Duplicate content question
@andykubrin I would like to add that another valid approach is to ignore the "issue". Are all 4 of your form pages currently indexed? If so, then this Moz-reported issue is not necessarily an actual issue. There is no "penalty" for duplicate content like this. The situation we all wish to avoid is for the search engine to see the pages as duplicates and choose only one of them to index, and not necessarily the one we want, or to fail to associate each page with the keywords we've individually targeted. But, if all 4 of your pages are currently indexed, and if they rank for the terms that you want, then it would be OK to ignore the issue.
As well, you might think about whether you want these pages to be indexed or to rank at all. If your desire is for traffic to go to the service description pages and then flow to the forms, and if the service description pages are the ones which actually are ranking, the issue may not even matter to you. And so again, you might decide to ignore it. And that would be a valid choice.
-
RE: Multiple H1s and Header Tags in Hero/Banner Images
While there is some level of uncertainty about the impact of multiple H1 tags, there are several issues with the structure you describe. On the "sub-pages", if you put an H1 tag on the site name, that means the same H1 text is used on a bunch of pages. This is something you want to avoid. Instead, develop a strategy of which pages you would like to target to rank for which search queries, and then use the page's primary query in the H1 tag.
The other issue I see in your current structure is that it sounds like you have heading tags potentially out of sequence. Accessibility checker tools will flag this as an issue, and indeed it can cause frustration for people with vision disabilities accessing your pages with screen readers. You want to make sure that you preserve a hierarchy where the H1 sits above the H2s, which sit above the H3s, and so on.
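As a rough sketch of what I mean on a sub-page (the heading text here is invented for illustration):

<!-- the site name stays in a styled element or the logo's alt text, not an H1 -->
<h1>Primary query this sub-page targets</h1>
<h2>First major section</h2>
<h3>Detail within the first section</h3>
<h2>Second major section</h2>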
-
RE: Is using a subheading to introduce a section before the main heading bad for SEO?
You will also find that you fail some accessibility standards (WCAG) if your heading structure tags are out of sequence. As GPainter pointed out, you really want to avoid styling your heading structure tags explicitly in your CSS if you want to be able to style them differently in different usage scenarios.
Of course, for your pre-headings, you can just omit the structure tag entirely. You don't need all your important keywords to be contained in structure tags.
You'll want, ideally, just one H1 tag on the page and your most important keyword (or semantically related keywords) in that tag. If you can organize the structure of your page with lower-level heading tags after that, great. It does help accessibility too, just note that you shouldn't break the hierarchy by going out of sequence. But it's not a necessity to have multiple levels of heading tags after the h1.
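For example, a pre-heading can simply be an ordinary styled element rather than a heading tag (the class name and wording here are just placeholders):

<p class="pre-heading">Our services</p>
<h1>Primary keyword for the page</h1>
<h2>First section of the page</h2>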
-
RE: Are Old Backlinks Hurting Or Helping?
There are quite a few factors to consider. Let's start with why redirecting obsolete products can possibly hurt ranking. Most likely the scenario to which the SEO Expert was referring is that search engines may see a large volume of returns to the search results page, followed by clicking other results, as an indication of a result which does not meet users' needs.
In the case described above, it is possible for such redirects to "hurt", but only in the context of a specific query. For example, let's say you used to stock footballs. And now you no longer carry any of those. And you've redirected your old football product pages to a sporting goods category page, but where no footballs can be found. Then, yes, this might cause search visitors to return to the SERP and so it could potentially hurt. But if so, then it would be hurting for a search for "footballs". Because if the visitors were searching for something you DO have in stock, then after being redirected they wouldn't immediately return to the SERP. And so, if you no longer stock footballs, why would you worry about losing rank for the search query "footballs"?
Now, with that same example, let's say that some of your football pages also ranked very highly for "gifts for athletes". And you redirect to a relevant category. Those visitors are more likely to convert. Same landing page, different search query.
For this reason, in general, it's usually a good idea to redirect the old products to relevant category pages, if you have any. There are exceptions, of course. So, the trick is to look at what kinds of terms those specific pages previously ranked for, and whether there is still value in ranking for any of those specific terms. Remember, you don't own an overall "rank" on Google. You rank for specific queries, independently.
-
RE: Will I get penalised from an SEO perspective for having redirects.
@rodrigor777 That is a common approach, and not cause for any kind of penalty. If these domains never really existed as pages on the web with inbound links, then the simple redirect to the desired domain's home page may be fine. If they did exist, and if there are pages indexed from these other domains, then you will want to redirect appropriately for each page, rather than pointing all to the home page.
-
RE: Brand Name Importance in SERPS
Although the impact of keyword match in domain names isn't as high as it once was, my current experience is that it is still a significant ranking factor. I've recently (last year, and also about 4 years ago) completed two domain name changes, and the effect on searches where the query term is/was matched in the domain name was clearly noticeable. That said, after an initial "honeymoon" period, you're likely going to see some negative ranking impact of a domain name change, regardless of the specific domain names. My recent experience has been that things get crazy for a week or so, then look really good for 1-3 months, then the negative impact hits, and then it takes quite a while (sometimes more than a year) to get everything back to where it was. So, if you do change domain names, it needs to be seen as a long-term strategy, not a "this year" one.
-
RE: Discontinued products on ecommerce store
Yes, I meant 301, server-side redirects.
Regarding performance, I currently have a little over 50,000 entries in my redirects file with no discernible performance impact. But, different platforms handle this differently, and we also have a CDN which caches redirects, so that could make a difference. I guess the safest approach would be to insert a hundred thousand or more dummy redirect entries into your redirect file, temporarily, and stress test it.
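If you want to try that stress test, here is a rough sketch of generating filler entries (this assumes an Apache-style redirects file; the paths and file name are dummy values, and you'd delete the file after testing):

# generate_dummy_redirects.py - create throwaway 301 entries for load testing only
with open("dummy-redirects.conf", "w") as f:
    for i in range(100000):
        # each line points a fake retired product URL at a category page
        f.write(f"Redirect 301 /products/dummy-{i} /category/archive\n")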
-
RE: Discontinued products on ecommerce store
The approach of redirecting with an informative message is potentially a good one. I have not implemented it myself, nor seen it done. If you go this route, make sure it is a true server redirect, with a 301 response code. But I could see how the redirect could include a query param in the destination URL, which could then be used to display a fairly generic message.
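As a hedged sketch of that idea (Apache .htaccess syntax, with made-up URLs and a made-up parameter name):

RewriteEngine On
# 301 the retired product to its category, tagging the request so the category
# template can show a generic "this item has been discontinued" note
RewriteRule ^products/retired-widget$ /category/widgets?discontinued=1 [R=301,L]

The category page template would then check for discontinued=1 and render the message; everyone else sees the normal category page.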
As far as better vs. worse, from my perspective that differs depending on the nature of the products. One good use case for keeping the old product page around would be like a consumer electronics product page which contained technical info or resources which would be hard to find otherwise (but an alternative could be to have a support library for that). Another example, when I was on the agency-side, I worked with an apparel brand which each season introduced and retired thematic prints. And they kept a library of retired prints, which visitors could upvote to try to get them returned into service.
You wrote in your OP that these pages are zero/low traffic, with few backlinks. So, I'm inferring that very few real visitors would actually encounter this experience anyway.
But the reason to redirect to the category page is to preserve any link equity the product page might have built up over time. Again, even if each product has very few backlinks, adding them all up and redirecting to a parent category page could make a difference in how that category page ranks, provided you can accomplish this without confusing real visitors (if any).
To your last point, yes it's possible that the search engine might consider some of these redirects to be "soft 404s". In which case, the link equity wouldn't be preserved because it would be treated like a 404. But, that's exactly what you're proposing to do anyway. So, if even just some of them get treated as 301s, you're ahead of the game, as I see it.
Best posts made by seoelevated
-
RE: Redirect to http to https - Pros and Cons
If your current pages can be accessed by http and by https, and if you don't have canonicals or redirects pointing everything to one version or the other, then one very significant "con" for that approach is that you are splitting your link equity. So, if the http page has 50 inbound links, and the https has another 50, you would do better to have one page with 100 inbound links.
Another difference is how browsers display and warn about non-secure pages, as well as any ranking factor search engines may associate with a secure connection. Again, this is in favor of redirecting http to https. The visual handling can also impact conversion rates and bounce rates, which can in turn impact ranking.
As far as cons to redirecting, one would be that you might expect a temporary disruption to rankings. There will likely be a bit of a dip, short term. Another is that you will need to remove any non-secure resources (like images) from the https pages, and then be careful not to add them back accidentally, since they will trigger a warning for visitors and possibly impact rankings. There is some consensus that redirects (and canonical links) do leak a very small amount of link equity for each hop they take. So, that's another "con". But my recent experiences doing this with two sites have been that after the temporary "dip" of a couple of months, if done properly, the "pros" outweigh the "cons".
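For reference, the redirect itself is typically a simple server-side rule. A sketch in Apache syntax (nginx and other servers have their own equivalents):

RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [R=301,L]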
-
Reducing cumulative layout shift for responsive images - core web vitals
In preparation for Core Web Vitals becoming a ranking factor in May 2021, we are making efforts to reduce our Cumulative Layout Shift (CLS) on pages where the shift is being caused by images loading. The general recommendation is to specify both height and width attributes in the HTML, in addition to the CSS formatting which is applied when the images load. However, this is problematic in situations where responsive images are being used with different aspect ratios for mobile vs. desktop, and where a CMS is being used to manage the pages, so width and height may change each time new images are used, along with the aspect ratios of the mobile and desktop versions.
So, I'm posting this inquiry here to see what kinds of approaches others are taking to reduce CLS in these situations (where responsive images are used, with differing aspect ratios for desktop and mobile, and where a CMS allows the business users to utilize any dimension of images they desire).
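One pattern we've been evaluating (just a sketch, with placeholder file names and dimensions, and only in browsers that honor per-source width/height) is to give each responsive source its own intrinsic dimensions so the browser can reserve the space before the image loads:

<picture>
<!-- mobile crop with its own aspect ratio -->
<source media="(max-width: 767px)" srcset="hero-mobile.jpg" width="800" height="1000">
<!-- desktop crop; CSS keeps the rendered size fluid while the attributes reserve space -->
<img src="hero-desktop.jpg" width="1600" height="600" alt="Hero banner" style="width: 100%; height: auto;">
</picture>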
-
RE: Sitemap issue
You are missing the namespace for xhtml. There are multiple ways to format a sitemap, but you are using the xhtml format for your hreflang tags. You can do it differently without using xhtml, but if you do it the way you are doing it, you need to declare the namespace in the urlset tag.
So, where you have:
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
It would instead need to be like
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9" xmlns:xhtml="http://www.w3.org/1999/xhtml">
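With that namespace declared, the hreflang entries inside each URL node can then use the xhtml prefix, for example (placeholder URLs):

<url>
<loc>https://www.example.com/page</loc>
<xhtml:link rel="alternate" hreflang="en" href="https://www.example.com/page"/>
<xhtml:link rel="alternate" hreflang="de" href="https://www.example.com/de/page"/>
</url>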
-
RE: Complicated Title Tag Issues. Experts, Please Help!
Primarily, the requirement is for the title and meta tags to be in the <head> section. However, I have seen cases where a long-load-time resource in the <head> above the meta tags can cause the meta tags to be ignored. These were fairly extreme cases, where the resource took multiple seconds to load, synchronously. But moving the meta tags (and title) above those resources fixed the issue. Also, in another case, we had a snippet from a CDN provider which included an iframe, and in that iframe there was a <head> section and a closing </head>. It turned out that Google was ignoring all of our tags after the </head> which was injected in that iframe, even though it was only supposedly closing the <head> opened within the iframe. Once we moved that iframe to the end of our own <head> section, below all our meta data, the issues resolved. So, with all that, I do recommend putting meta data at the top of the <head> section, but depending on what else is in there, it might not be an issue for you.
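In other words, the safe ordering is roughly this (a simplified sketch with placeholder values):

<head>
<title>Primary query for this page</title>
<meta name="description" content="Page description here">
<link rel="canonical" href="https://www.example.com/page">
<!-- slow or third-party resources (scripts, CDN snippets) come after the meta data -->
<script src="https://cdn.example.com/widget.js" defer></script>
</head>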
-
RE: Is there a way to get a list of urls on the website?
If all of the pages you are interested in are linked internally from somewhere in your site which can be reached through navigation or page links, you can run a simulated crawl with a tool like ScreamingFrog, which will discover all the "discoverable" pages.
The site you referenced is built with a platform called "Good Gallery", which generates a sitemap. This is at www.laskeimages.com/sitemap.xml. I'm not sure what criteria it might use to include/exclude pages, but that would likely be a good list. You will need to view the page source of that page to see the data in a structured way that you can extract.
Another method is to use Google Analytics. Assuming that each page of your site has been viewed at least once in its history, you could extract the list from Google Analytics. Especially from an unfiltered view which includes visits by bots.
-
RE: Should Hreflang x-default be on every page of every country for an International company?
Yes, your understanding of x-default is correct. The purpose of including it everywhere you have alternate HREFLANG links is to handle any locales you don't explicitly include (to tell the search engine which is the default version of the page for other non-specified locales). And it should be included on each version of the page, along with the other specified alternate links for each locale. Alternatively, you could collect all of these centrally into the sitemap file, rather than inserting them into each page. Both types of implementation are valid (but anecdotally I've had better luck with on-page tags instead of the sitemap implementation).
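For an on-page implementation, the set of tags on each version of the page might look like this (placeholder URLs):

<link rel="alternate" hreflang="en-us" href="https://www.example.com/us/page" />
<link rel="alternate" hreflang="en-gb" href="https://www.example.com/uk/page" />
<link rel="alternate" hreflang="x-default" href="https://www.example.com/page" />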
-
RE: Google SERP shows wrong (and inappropriate) thumbnail for Facebook videos?
This is very interesting, and I see from the threads you linked that multiple businesses are having the same problem and the same difficulty navigating both the Google and Facebook support communities. Out of curiosity, are you able to inspect one of your Facebook pages which still has the video, and see if any schema for the type "VideoObject" is included in the page, and if so, paste the markup here (redacted as necessary)? I don't think I'll probably be able to help much on this, but perhaps something in the schema data might give some clues to the community here to work with.
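For reference, if the page does carry VideoObject markup, it would typically look something like this JSON-LD sketch (all values here are invented placeholders; the thumbnailUrl property is the one most relevant to your thumbnail question):

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "VideoObject",
  "name": "Example video title",
  "description": "Example description",
  "thumbnailUrl": "https://example.com/thumbnail.jpg",
  "uploadDate": "2021-01-01"
}
</script>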
-
Is it ever OK to exclude googlebot from geoip experience?
We currently use geoip detection to present a dialog when customers access one of our country-specific sites other than the one matching their location, asking them to select whether to continue to the requested site, or else click a link to the site matching their IP location's country. The purpose is to diminish the likelihood of users having a bad experience if they accidentally shop a region's site only to discover when they get to checkout that they cannot complete the purchase and need to switch sites.
We are considering adding some logic to prevent this dialog from appearing if the user agent is a known search engine bot. The dialog serves no purpose to bots, and we are worried about its impact on crawling of our sites from servers outside the country-site's location.
That said, we don't want to incur any negative impact of perceived cloaking.
Is user agent-specific logic acceptable practice in this scenario?
-
RE: Brand Name Importance in SERPS
Although the impact of keyword match in domain names isn't as high as it once was, my current experience is that it is still a significant ranking factor. I've recently (last year, and also about 4 years ago) completed two domain name changes, and the effect on searches where the query term is/was matched in the domain name was clearly noticeable. That said, after an initial "honeymoon" period, you're likely going to see some negative ranking impact of a domain name change, regardless of the specific domain names. My recent experience has been that things get crazy for a week or so, then look really good for 1-3 months, then the negative impact hits, and then it takes quite a while (sometimes more than a year) to get everything back to where it was. So, if you do change domain names, it needs to be seen as a long-term strategy, not a "this year" one.
-
RE: How to Localise per Region (Europe, America, APAC, EMEI) and not per country as best SEO practise?
I currently manage a site which is localized per region, as opposed to country. For some regions, like US and Australia, it is 1:1 with country, so we do not have issues there. But for Europe, that is where we do have some issues currently. We took the following approach (below), but I have to first say that it is quite problematic and has not performed very well so far (implemented about 1 year ago).
The approach we took was to implement HREFLANG within our sitemap, and for Europe, we generate specific alternate locations for each of the countries where we do business in that region, all with the same URL. Here (below) is a redacted version of one page's LOC node in our sitemap (I've only included a partial list, and only showing English, as the full list of alternate URLs for this one LOC has 150 alternate links to cover every EU country x 5 languages we support). But, the general approach is that for Europe, we create one alternate link for each EU country, in each of our supported languages (we support 5 languages). So, we don't assume, for example, that German speakers are only in Germany, or that English speakers are only in the UK. We cover every country/language combination and point many of these to the exact same alternate link.
Again, as I mentioned, this hasn't achieved all we had hoped. But sharing the approach for a reference point here, as an option, and open to any other ideas from the community. We also struggle with EU in terms of Google Search Console geographic targeting. Unfortunately, Google does not allow a property to be targeted to "Europe". And they only allow one single country per property. In our case, we really need to target a single domain to "Europe", not to a specific country. But we can't, and that is a problem currently.
Here is the example from our sitemap (partial cut-and-paste of the first few entries from one URL node):
<loc>https://www.example.com/example-page-path</loc>
<priority>1</priority>
... remainder of alternate links removed to shorten list here
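(The trimmed alternate entries follow the same xhtml:link pattern described above; a reconstruction for illustration only, showing just two of the English/country combinations pointing at the same URL:)

<xhtml:link rel="alternate" hreflang="en-DE" href="https://www.example.com/example-page-path"/>
<xhtml:link rel="alternate" hreflang="en-FR" href="https://www.example.com/example-page-path"/>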
E-Commerce Director with both agency and brand-side experience.