Do non-critical AMP errors prevent you from being featured in the Top Stories carousel?
-
Consider site A, a news publishing site that has valid AMP pages with non-critical AMP errors (as reported in Search Console). Site A also republishes news articles from site B (its partner site); those articles have AMP pages too, but most of them are not valid AMP pages and carry critical AMP errors.
For brand terms like "Economic Times", Google does show a Top Stories carousel with articles published by Economic Times; however, it doesn't do the same for site A, in spite of site A having valid AMP pages.
Image link: http://tinypic.com/r/219bh9j/9
Now that site A hosts both valid AMP pages (its own) and invalid AMP pages (from site B), there have been instances where a news article from site A features in the Top Stories carousel on desktop for a certain query but doesn't appear in the mobile SERPs, even though the page is a valid AMP page. For example, as shown in the screenshot below, Business Today ranks in the Top Stories carousel for a term like "jio news" on desktop, but on mobile, although the page is a valid AMP page, it doesn't show as an AMP page within the Top Stories carousel.
Image Link: http://tinypic.com/r/11sc8j6/9
There have also been cases where, although an article is featured in the Top Stories carousel on desktop, the same article doesn't show up in the carousel on mobile for the same query.
What could be the reason behind this? Also, would it be necessary to fix both critical and non-critical errors on site A (including the articles republished from site B)?
-
Thanks for this!
2 things:
-
I'd suggest that if Site A republishes duplicate (syndicated) content from Site B and references Site B as the original source, you might want to consider simply blocking that content from search engines (on Site A). This ensures that Google doesn't penalize Site A for duplicate content, and it also prevents Google from seeing the critical errors on the Site B AMP pages.
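As a minimal sketch of what "blocking that content" could look like, a robots meta tag in the head of each syndicated article keeps it out of the index (the tag itself is standard; where exactly you'd add it depends on your templates):

```html
<!-- Added to the <head> of each article republished from Site B on Site A.
     This keeps the page (and its AMP errors) out of Google's index. -->
<meta name="robots" content="noindex, nofollow">
```

Note that a meta noindex only works if Google can still crawl the page, so you wouldn't combine it with a robots.txt block on the same URLs.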
-
Overall, I've tested your example page and couldn't find anything seriously wrong, but one thing I did notice is an error in your structured data markup (NewsArticle):
On the page: http://m.businesstoday.in/lite/story/reliance-jio-is-preparing-new-tariffs-and-exciting-offers-for-you/1/249662.html
You list "mainEntityOfPage" as "http://m.businesstoday.in/"
However, the Google guidelines state that "mainEntityOfPage" should be the canonical URL of the article page: https://developers.google.com/search/docs/data-types/articles#type_definitions (in this case: http://www.businesstoday.in/sectors/telecom/reliance-jio-is-preparing-new-tariffs-and-exciting-offers-for-you/story/249662.html)
Although the markup passes the Structured Data Testing Tool's validation, it's possible that this is breaking the structured data, and valid NewsArticle markup is something Google states you must have implemented to feature in the news carousel.
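As an illustrative sketch (not the page's actual markup; the headline is a placeholder, and other required NewsArticle properties such as image, datePublished, author, and publisher are omitted for brevity), the fix would point mainEntityOfPage at the canonical article URL rather than the homepage:

```json
{
  "@context": "http://schema.org",
  "@type": "NewsArticle",
  "mainEntityOfPage": {
    "@type": "WebPage",
    "@id": "http://www.businesstoday.in/sectors/telecom/reliance-jio-is-preparing-new-tariffs-and-exciting-offers-for-you/story/249662.html"
  },
  "headline": "Reliance Jio is preparing new tariffs and exciting offers for you"
}
```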
If fixing this doesn't help, I'd suggest cleaning up the non-critical errors next to see if that fixes the issue.
-
1. Yes, Site A republishes content from Site B on a daily basis, as Site A has an exclusive republication partnership with Site B (with Site B being the parent publisher). To address content duplication, we use the original-source tag to point to the articles on Site B.
2. Yes, it performed well until a couple of months ago, when the non-critical errors started building up. One of them is "Use of deprecated tags or attributes". Also, as mentioned previously, site A's AMP pages have non-critical errors, while the AMP pages from site B have both critical and non-critical errors.
3. Yes.
4. Yes. Site A has about 15,000+ indexed AMP pages, 4,000+ critical AMP errors, and 22,000+ non-critical AMP errors.
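For reference, the original-source tag mentioned in point 1 refers (I'm assuming) to the Google News meta tag, which goes in the head of the republished page; the URL here is a hypothetical placeholder:

```html
<!-- On the republished article on Site A, pointing at Site B's original
     (URL below is a made-up example, not an actual article). -->
<meta name="original-source" content="http://site-b.example.com/path/to/original-article">
```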
-
Hey! First of all, I have a few questions, just to clarify the situation.
-
Has Site A always published the articles from Site B? How do you currently handle the duplicate content aspect?
-
Has Site A's valid AMP content previously performed as expected, and this is a new issue? Or have you always had this issue?
-
Are you verified in Google News?
-
Are you seeing errors in Google Search Console for any of these AMP pages?
-