Posts made by seoelevated
-
What do we know about the "Shops" SERP Feature?
I came across this SERP feature in a search today on a mobile device. It does not show for the same search query on desktop. What do we know about this "Shops" SERP feature?
-
RE: What happens to crawled URLs subsequently blocked by robots.txt?
@aspenfasteners To my understanding, disallowing a page or folder in robots.txt does not remove pages from Google's index. It merely gives a directive not to crawl those pages/folders. In fact, when pages are accidentally indexed and one wants to remove them from the index, it is important to actually NOT disallow them in robots.txt, so that Google can crawl those pages and discover the meta NOINDEX tags on the pages. The meta NOINDEX tag is the directive to remove a page from the index, or to not index it in the first place. This is different from a robots.txt directive, which is intended to allow or disallow crawling. Crawling does not equal indexing.
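To make the distinction concrete, here's a minimal sketch (the /some-folder/ path is just a placeholder): the meta tag controls indexing, the robots.txt rule only controls crawling, and Google has to be able to crawl a page to see its meta tag.

```html
<!-- On a page you want kept out of (or removed from) the index: leave it crawlable, and add -->
<meta name="robots" content="noindex">
```

```
# robots.txt -- a Disallow only blocks crawling; it does not remove URLs already in the index,
# and it prevents Googlebot from ever seeing the noindex tag above
User-agent: *
Disallow: /some-folder/
```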
So, you could keep the pages indexable, and simply block them in your robots.txt file, if you want. If they've already been indexed, they should not disappear quickly (they might, over time though). BUT if they haven't been indexed yet, this would prevent them from being discovered.
All of that said, from reading your notes, I don't think any of this is warranted. Google discovers pages on a website very quickly, and existing indexed pages shouldn't really get in the way of new discovery. In fact, they might help the category pages be discovered, if they contain links to the categories.
I would create an XML sitemap for the categories, reference it in your robots.txt, and let that do the work of prioritizing the categories for crawling/discovery and indexation.
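A minimal sketch of that setup (the file name and URLs are placeholders):

```
# robots.txt
Sitemap: https://www.example.com/sitemap-categories.xml
```

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- sitemap-categories.xml: list only the category URLs you want prioritized for discovery -->
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://www.example.com/categories/anchors/</loc></url>
  <url><loc>https://www.example.com/categories/screws/</loc></url>
</urlset>
```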
-
RE: Multiple H1s and Header Tags in Hero/Banner Images
While there is some level of uncertainty about the impact of multiple H1 tags, there are several issues with the structure you describe. On the "sub-pages", if you put an H1 tag on the site name, the same H1 is used on a bunch of pages. This is something you want to avoid. Instead, develop a strategy for which pages you would like to rank for which search queries, and then use each page's primary query in its H1 tag.
The other issue I see in your current structure is that it sounds like you have heading tags potentially out of sequence. Accessibility checker tools will flag this as an issue, and indeed it can cause frustration for people with vision disabilities accessing your pages with screen readers. You want to make sure you preserve a hierarchy where an H1 sits above an H2, which sits above an H3, and so on.
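For example (the page topic and headings here are made up), a structure like this keeps one page-specific H1 and never skips a level:

```html
<span class="site-name">Example Site</span>   <!-- site name styled, but not a heading tag -->
<h1>Standing Desk Buying Guide</h1>           <!-- one H1, unique to this page, matching its target query -->
<h2>Choosing the Right Height Range</h2>
<h3>Fixed vs. Adjustable Frames</h3>
<h2>Top Picks for Small Offices</h2>
```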
-
RE: Is using a subheading to introduce a section before the main heading bad for SEO?
You will also find that you fail some accessibility standards (WCAG) if your heading structure tags are out of sequence. As GPainter pointed out, you really want to avoid styling your heading tags explicitly in your CSS if you want to be able to style them differently in different usage scenarios.
Of course, for your pre-headings, you can just omit the structure tag entirely. You don't need all your important keywords to be contained in structure tags.
You'll want, ideally, just one H1 tag on the page, with your most important keyword (or semantically related keywords) in that tag. If you can organize the structure of your page with lower-level heading tags after that, great. It helps accessibility too; just note that you shouldn't break the hierarchy by going out of sequence. But it's not a necessity to have multiple levels of heading tags after the H1.
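So a pre-heading can simply be a styled, non-heading element sitting above the H1; a rough sketch (the class names and copy are made up):

```html
<p class="eyebrow">Our Services</p>             <!-- visual "kicker"/pre-heading, not a heading tag -->
<h1>Commercial Landscaping in Portland</h1>     <!-- the single H1, carrying the primary keyword -->
<h2>Lawn Care Programs</h2>
```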
-
RE: How important is Lighthouse page speed measurement?
My understanding is that "Page Experience" signals (including the new "Core Web Vitals") will be combined with existing signals like mobile-friendliness and HTTPS security in May 2021, according to announcements by Google.
https://developers.google.com/search/blog/2020/05/evaluating-page-experience
https://developers.google.com/search/blog/2020/11/timing-for-page-experience
So, these will be search signals, but there are lots of other very important signals which can outweigh them. Even if a page on John Deere's site doesn't pass the Core Web Vitals criteria, it is still likely to rank highly for "garden tractors".
If you are looking at Lighthouse, I would point out a few things:
- The Lighthouse audits on your own local machine are going to differ from those run on hosted servers like Page Speed Insights. And those will differ from the "field data" in the Chrome UX Report.
- In the end, it's the "field data" that will be used for the Page Experience validation, according to Google. But lab-based tools are very helpful for immediate feedback, rather than waiting 28 days or more for field data (one way to collect that kind of field data yourself is sketched after this list).
- If your concern is solely about the impact on search rankings, then it makes sense to pay attention specifically to the 3 scores being considered as part of CWV (CLS, FID, LCP).
- But also realize that while you are improving scores for criteria which will be validated for search signals, you're also likely improving the user experience. Taking CLS as an example, users are certainly frustrated when they attempt to click a button and end up clicking something else instead because of a layout shift. And frustrated users generally mean lower conversion rates. So, by focusing on improvements in measures like these (I realize your question about large images doesn't necessarily pertain specifically to CLS), you are optimizing both for search ranking and for conversions.
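Here is that sketch: a simplified way to watch layout shifts from real visitors on your own pages (the beacon endpoint is a placeholder, and this is a plain accumulation of layout-shift entries; Google's web-vitals library is the more robust way to track the metric exactly as Chrome reports it).

```html
<script>
  // Accumulate layout shifts that weren't caused by recent user input (the basis of CLS)
  let cls = 0;
  new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      if (!entry.hadRecentInput) cls += entry.value;
    }
  }).observe({ type: 'layout-shift', buffered: true });

  // Send the accumulated value when the visitor leaves the page
  document.addEventListener('visibilitychange', () => {
    if (document.visibilityState === 'hidden') {
      navigator.sendBeacon('/metrics/cls', cls.toString()); // placeholder endpoint
    }
  });
</script>
```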
-
Reducing cumulative layout shift for responsive images - core web vitals
In preparation for Core Web Vitals becoming a ranking factor in May 2021, we are making efforts to reduce our Cumulative Layout Shift (CLS) on pages where the shift is caused by images loading. The general recommendation is to specify both height and width attributes in the HTML, in addition to the CSS formatting which is applied when the images load. However, this is problematic where responsive images are used with different aspect ratios for mobile vs. desktop, and where a CMS is used to manage the pages, so that the width, height, and aspect ratios of the mobile and desktop versions may change each time new images are used.
So, I'm posting this inquiry here to see what kinds of approaches others are taking to reduce CLS in these situations (where responsive images are used, with differing aspect ratios for desktop and mobile, and where a CMS allows the business users to utilize any dimension of images they desire).
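For reference, this is the kind of markup in question (URLs and dimensions are placeholders): the standard width/height fix reserves the right space when one aspect ratio is used everywhere, but with art-directed responsive images the reserved box no longer matches what renders at one of the breakpoints.

```html
<!-- Simple case: a 4:3 image everywhere, so the browser can reserve space before it loads -->
<img src="hero.jpg" width="1200" height="900" style="width: 100%; height: auto;" alt="Hero">

<!-- Responsive/art-directed case: the mobile source is a different ratio (say 1:1), so a single
     width/height pair on the img no longer describes what actually renders on small screens -->
<picture>
  <source media="(max-width: 600px)" srcset="hero-square.jpg">
  <img src="hero-wide.jpg" width="1200" height="900" style="width: 100%; height: auto;" alt="Hero">
</picture>
```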
-
RE: Is a page with links to all posts okay?
Depending on how many pages you have, you may eventually hit a limit to the number of links Google will crawl from one page. The usual recommendation is to have no more than 150 links, if you want all of them to be followed. That also includes links in your site navigation, header, footer, etc. (even if those are the same on every page). So, at that point, you might want to make that main index page into an index of indices, where it links to a few sub-pages, perhaps by topic or by date range.
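A rough sketch of that index-of-indices idea (the URLs are placeholders):

```html
<!-- Main archive page links to a handful of sub-index pages instead of to every post -->
<ul>
  <li><a href="/archive/2019/">Posts from 2019</a></li>
  <li><a href="/archive/2020/">Posts from 2020</a></li>
  <li><a href="/archive/recipes/">All recipe posts</a></li>
</ul>
```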
-
RE: Web Core Vitals and Page Speed Insights Not Matching Scores
To my understanding, GSC is reporting based on "field data" (meaning the aggregate score of visitors to a specific page over a 28 day period). When you run Page Speed Insights, you can see both Field Data and "lab data". The lab data is your specific run. There are quite a few reasons why field data and lab data may not match. One reason is that changes have been made to the page, which are reflected in the lab data, but will not be reflected in the field data until the next month's set is available. Another reason is that the lab device doesn't run at the exact same specs as the real users in the field data.
The way I look at it is that I use the lab data (and I screenshot my results over time, or use other Lighthouse-based tools like GTmetrix, with an account) to assess incremental changes. But the goal is to eventually get the field data (representative of the actual visitors) improved, especially since that appears to be what will be used in the ranking signals, as best I can tell.
-
RE: Should I canonicalize URLs with no query params even though query params are always automatically appended?
I would recommend canonicalizing these to a version of the page without query strings, IF you are not trying to optimize different versions of the page for different keyword searches, and/or if the content doesn't change in a way which is significant for purposes of SERP targeting. From what you described, I think that's the case, so I would canonicalize to a version without the query strings.
An example where you would NOT want to do that would be on an ecommerce site where you have a URL like www.example.com/product-detail.jsp?pid=1234. Here, the query string is highly relevant and each variation should be indexed uniquely for different keywords, assuming the values of "pid" each represent unique products. Another example would be a site of state-by-state info pages like www.example.com/locations?state=WA. Once again, this is an example where the query strings are relevant, and should be part of the canonical.
But, in any case a canonical should still be used, to remove extraneous query strings, even in the cases above. For example, in addition to the "pid" or "state" query strings, you might also find links which add tracking data like "utm_source", etc. And you want to make sure to canonicalize just to the level of the page which you want in the search engine's index.
You wrote that the query strings and page content vary based on years and quarters. If we assume that you aren't trying to target search terms with the year and quarter in them, then I would canonicalize to the URL without those strings (or to a default set). But if you are trying to target searches for different years and quarters in the user's search phrase, then not only would you include those in the canonical URL, but you would also need to vary enough page content (meta data, title, and on-page content) to avoid being flagged as duplicates.
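As a sketch (the URLs are illustrative, and the /results page is just a stand-in for your report pages): keep the query strings that define distinct content you want indexed, and drop the rest.

```html
<!-- On www.example.com/product-detail.jsp?pid=1234&utm_source=newsletter
     "pid" defines the product, so it stays; the tracking parameter is dropped -->
<link rel="canonical" href="https://www.example.com/product-detail.jsp?pid=1234">

<!-- On www.example.com/results?year=2020&quarter=Q3, if you are NOT targeting
     year/quarter searches, canonicalize to the bare page (or to a default year/quarter) -->
<link rel="canonical" href="https://www.example.com/results">
```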
-
RE: Inconsistency between content and structured data markup
This is what Google says explicitly: https://developers.google.com/search/docs/guides/sd-policies. Specifically, see the "Quality Guidelines > Content" section.
In terms of actual penalties, ranking influence, or marking pages as spam, I can't say from experience, as I've never knowingly used markup inconsistent with the information visible on the page.
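As an illustration (the product and values are made up), "consistent" simply means the markup describes what the visitor actually sees on the page:

```html
<!-- Visible on the page -->
<h1>Acme Wireless Mouse</h1>
<p class="price">$24.99</p>

<!-- The structured data should say the same thing, not a different price, rating, or availability -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Acme Wireless Mouse",
  "offers": { "@type": "Offer", "price": "24.99", "priceCurrency": "USD" }
}
</script>
```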
-
RE: Duplicate, submitted URL not selected as canonical
Hi Eric. I took a look at your two pages. When I look at the page source (not with "inspect", but with "view page source"), I see that all of the content on your page is injected via JavaScript. There is almost no HTML for the page. To me, it looks like, for whatever reason, Google isn't able to execute and parse the content being injected by JavaScript, so when it crawls just the HTML, it sees the two pages as duplicates because the body of the content (in the HTML page source) is mostly identical.
That does raise a question of why Google isn't able to parse the content of the scripts. Historically, Google just didn't execute scripts. Now it does, but they acknowledge that content injected by scripts may not always be indexed. As well, if scripts take too long to execute for the bot, then again, the content may not be indexed.
My recommendation would be to find some ways to have some unique html per page (not just the script content).
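A rough sketch of what I mean (the content and file names are placeholders): even if most of the page is still built by JavaScript, give each page some crawlable, page-specific HTML in the initial response.

```html
<!-- Before: the raw HTML is nearly identical on every page -->
<body>
  <div id="app"></div>
  <script src="/bundle.js"></script>
</body>

<!-- After: unique server-rendered content per page, with the script still enhancing it -->
<body>
  <h1>Downtown Office Suite for Lease at 123 Main St</h1>
  <p>1,800 sq ft corner suite with parking, available March 2021.</p>
  <div id="app"></div>
  <script src="/bundle.js"></script>
</body>
```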
-
RE: Traffic drop after hreflang tags added
Yes, that looks correct now. And in your specific case, x-default might indeed handle the rest since Europe is your default, and that's where the unspecified combinations are most likely to be for you.
I wouldn't be too concerned about site speed. These are just links; they don't load any resources or execute any scripts. For most intents and purposes, they're similar to plain text. The main difference is that they are links that may be followed by the bots. But even though you'll have many lines, you only really have two actual URLs among them. So, I wouldn't be too concerned about this part.
Good luck.
-
RE: Traffic drop after hreflang tags added
moon-boots. It looks like you decided to target by language, rather than by country-language combinations. And that is acceptable. It has a few issues; for example, if you target by FR you are going to send French speakers in both France and Canada to your Europe site (and I don't think you want to do this). On the other hand, if you were instead thinking that you were specifying the country code: no, the code you pasted here does not do that. Per the hreflang spec, you can specify a language code without a country code, but you cannot specify a country code without a language code. All of the hreflang values you used will be interpreted as languages, not countries. So, for example, CA will be interpreted as Catalan, not Canada.
Again, I know it's a giant pain to handle all the EU countries. All of us wish Google made it feasible to target Europe as an entity, or at least to target by country. But that's just not the case yet. So, the way we do this is generally with application code. Ideally, in your case, I would suggest having that code generate, for each country, one entry for English in that country (like "en-DE"), and another entry for the most common language in the country (like "de-DE"). That will generate many entries, but it's the only way I know of to effectively target Europe with an English-language site.
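As a sketch (the domains are placeholders), the generated block pairs English with the main local language for each EU country, and the full set goes on every page of both sites:

```html
<link rel="alternate" hreflang="en-DE" href="https://eu.example.com/some-page/">
<link rel="alternate" hreflang="de-DE" href="https://eu.example.com/some-page/">
<link rel="alternate" hreflang="en-FR" href="https://eu.example.com/some-page/">
<link rel="alternate" hreflang="fr-FR" href="https://eu.example.com/some-page/">
<!-- ...one en-XX plus primary-language entry per remaining EU country... -->
<link rel="alternate" hreflang="en-US" href="https://www.example.com/some-page/">
<link rel="alternate" hreflang="x-default" href="https://eu.example.com/some-page/">
```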
-
RE: Traffic drop after hreflang tags added
moon-boots. Pretty close now. You should add the x-default to each site too, and they should be identical (whichever one of your sites you want to present for any locales you've omitted).
But also, realize that "en-it" is a pretty fringe locale. Google would only propose this to a search visitor from Italy who happens to have their browser language preference set to English. While there are plenty of people in Italy who do speak English, there are far fewer who set their browser to "en".
I have the same issue in Europe. Germany is one of our largest markets. I initially targeted, like you've done, just English in each country. We previously (a year ago) had a German-language site, and that one we targeted to "de-de". When we stopped maintaining the German-language site, we changed our hreflang tag to "en-de". We quickly found that all of our rankings dropped off a cliff in Germany. I would recommend, at least for your largest addressable markets, also including hreflang tags for the primary languages. This is another thing which Google hasn't yet made easy: they allow you to target by language without a country, but not by country without a language, at least in hreflang (which was really developed for language targeting). GSC (the legacy version) did offer country-level targeting.
Lastly, you included URLs for your home page here, but I'm assuming you realize you need to make the tags page-specific, on every page. If you put these tags as-is on every page, you would be sending a signal to Google equivalent to pointing every one of your site's pages to a canonical of your home page (and effectively de-indexing the rest of your site's pages). I'm assuming you're just using the home page as an example in your posts. But if not, then yes, you will need page-specific tags on each page (and the self-referencing ones need to match that page's canonical tag).
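So, for one specific page, a sketch might look like this (the path and domains are placeholders): the URLs are page-level, the same set appears on both sites, and the self-referencing entry matches that page's canonical.

```html
<!-- On https://eu.example.com/winter-boots/ -->
<link rel="canonical" href="https://eu.example.com/winter-boots/">
<link rel="alternate" hreflang="en-IT" href="https://eu.example.com/winter-boots/">
<link rel="alternate" hreflang="it-IT" href="https://eu.example.com/winter-boots/">
<link rel="alternate" hreflang="en-US" href="https://www.example.com/winter-boots/">
<link rel="alternate" hreflang="x-default" href="https://eu.example.com/winter-boots/">
<!-- The US page carries the identical hreflang set, plus its own self-referencing canonical -->
```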
-
RE: Traffic drop after hreflang tags added
So, that's exactly why I wrote that you should include all the EU countries as specified locales, pointing to the EU site. Only everything "unspecified" goes to x-default. Alternatively, you could point AU, CA, and NZ to the US site, and make x-default point to your EU site. I don't think that is as good of an approach, though. Like I said, everyone who has an EU site has this issue. It's a pain that EU isn't a valid "locale" for hreflang. Maybe something will eventually be in place to handle this better. In the interim, we can add hreflang for all the EU countries (or just prioritize the markets you really serve).