How to solve this merchant error?
-
Hello All,
In my Google Merchant Center account, lots of warnings suddenly appeared, i.e. 1) Automatic item updates: Missing schema.org microdata price information, and 2) Missing microdata for condition.
Can you please tell me how to solve these errors?
Thanks!
John
-
Unspecified Type: http://abcd.com
Node is empty. Double check that this is desired and consider removing.
How do I solve this error?
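This message usually comes from Google's structured data testing, where an element carries itemscope but has no itemtype and no properties inside it. A minimal sketch, with hypothetical markup rather than anything taken from the real page:

    <!-- An itemscope with no itemtype and no itemprop children is typically
         reported as "Unspecified Type" with a "Node is empty" warning: -->
    <div itemscope>
    </div>

    <!-- Fix: either remove the empty node, or give it a type and at least
         one property, for example: -->
    <div itemscope itemtype="http://schema.org/Organization">
      <span itemprop="name">Example Company</span>
      <link itemprop="url" href="http://abcd.com" />
    </div>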
-
Field ratingValue may not be empty.
One of ratingCount or reviewCount must be provided.
For any new product there will not be any ratings yet, so how do I solve this error?
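For context, those two messages refer to the schema.org AggregateRating block. A common approach for brand-new products is simply not to output that block until the product actually has reviews, rather than printing empty values. A rough sketch of a populated block, with placeholder numbers:

    <!-- Only render this once the product has reviews; an empty ratingValue,
         or a missing ratingCount/reviewCount, is what triggers the warnings
         quoted above. -->
    <div itemprop="aggregateRating" itemscope itemtype="http://schema.org/AggregateRating">
      Rated <span itemprop="ratingValue">4.5</span>/<span itemprop="bestRating">5</span>
      based on <span itemprop="reviewCount">27</span> reviews
    </div>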
-
Dear John, have a look at the following post:
http://www.datafeedwatch.com/blog/10-common-errors-google-merchant-center/
-
Hi John,
To fix "Missing microdata for condition", please check https://support.google.com/merchants/answer/6183460?hl=en-GB,
and for "Automatic item updates: Missing schema.org microdata price information", please check https://support.google.com/merchants/answer/3246284.
Thanks
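As a rough illustration of what those help pages describe, price and condition usually go inside the schema.org Offer on each product page. A minimal sketch with placeholder values, not John's actual data:

    <div itemscope itemtype="http://schema.org/Product">
      <span itemprop="name">Example Product</span>
      <div itemprop="offers" itemscope itemtype="http://schema.org/Offer">
        <!-- Price microdata that automatic item updates reads; the visible
             price text can also be marked up directly instead of meta tags -->
        <meta itemprop="price" content="19.99" />
        <meta itemprop="priceCurrency" content="USD" />
        $19.99
        <!-- Condition and availability microdata -->
        <link itemprop="itemCondition" href="http://schema.org/NewCondition" />
        <link itemprop="availability" href="http://schema.org/InStock" />
      </div>
    </div>

The markup on the live product pages should match the price and condition submitted in the feed, since automatic item updates compare the two.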
Related Questions
-
How to explain "No Return Tags" errors from a non-existent page?
In the Search Console of our Google Webmaster account we see 3 "no return tags" errors. The attached screenshot shows the detail of one of these errors. I know that annotations must be confirmed from the pages they are pointing to: if page A links to page B, page B must link back to page A, otherwise the annotations may not be interpreted correctly. However, the originating URL (/#!/public/tutorial/website/joomla) doesn't exist anymore. How could these errors still show up?
Technical SEO | Maximuxxx
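For reference, the "return tag" rule described in that question means the hreflang annotations have to be reciprocal. A minimal sketch with made-up URLs:

    <!-- On page A (http://example.com/en/page): -->
    <link rel="alternate" hreflang="en" href="http://example.com/en/page" />
    <link rel="alternate" hreflang="de" href="http://example.com/de/page" />

    <!-- On page B (http://example.com/de/page), which must point back to A,
         otherwise Search Console reports a "no return tags" error: -->
    <link rel="alternate" hreflang="de" href="http://example.com/de/page" />
    <link rel="alternate" hreflang="en" href="http://example.com/en/page" />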
Yoast and Standard theme: Fatal error
Hi all, A client has tried installing Yoast on her site and received a fatal error (below). She's been able to restore her site and get it functioning again, but I'm wondering if there's a workaround so we can use the plugin. It's a WordPress site using the Standard Theme. I've searched the forums (and here!) and haven't found anything helpful yet. Do you have any suggestions? Thanks! "Fatal error: Cannot redeclare yoast_breadcrumb() (previously declared in /vservers/nwconstructi/htdocs/NWCL/wp-content/plugins/wordpress-seo/inc/wpseo-functions.php:108) in /vservers/nwconstructi/htdocs/NWCL/wp-content/themes/StandardTheme_272/lib/standard_yoast_breadcrumbs.php on line 280"
Technical SEO | DonnaDuncan
Responsive web design has a crawl error redirecting to HTTP instead of HTTPS. Is this because of the new Google update that gives more weight to HTTPS?
We at yamsafer.me are using a responsive web design. A crawl error occurred which redirects the homepage to an HTTP version instead of HTTPS. Any ideas on why this happened?
Technical SEO | Yamsafer.com
Are W3C Validators too strict? Do errors create SEO problems?
I ran an HTML markup validation tool (http://validator.w3.org) on a website. There were 140+ errors and 40+ warnings. IT says "W3C Validators are overly strict and would deny many modern constructs that browsers and search engines understand." What a browser can understand and display to visitors is one thing, but what search engines can read has everything to do with the code. I ask this: if the search engine crawler is reading through the code and comes upon an error like this:
…ext/javascript" src="javaScript/mainNavMenuTime-ios.js"> </script>');}
The element named above was found in a context where it is not allowed. This could mean that you have incorrectly nested elements -- such as a "style" element in the "body" section instead of inside "head" -- or two elements that overlap (which is not allowed). One common cause for this error is the use of XHTML syntax in HTML documents. Due to HTML's rules of implicitly closed elements, this error can create cascading effects. For instance, using XHTML's "self-closing" tags for "meta" and "link" in the "head" section of a HTML document may cause the parser to infer the end of the "head" section and the beginning of the "body" section (where "link" and "meta" are not allowed; hence the reported error).
and this...
…t("?");document.write('>');}
(followed by the same validator explanation as above)
Does this mean that the crawlers don't know where the code ends and the body text begins, and what it should be focusing on and not?
Technical SEO | INCart
Duplicate title/content errors for blog archives
Hi All, Would love some help; I'm fairly new to SEO and to using SEOmoz, and I've looked through the forums and have just managed to confuse myself. I have a customer with a lot of duplicate page title/content errors in SEOmoz. It's an Umbraco CMS and a lot of the errors appear to be blog archives and pagination, i.e.:
http://example.com/blog
http://example.com/blog/
http://example.com/blog/?page=1
http://example.com/blog/?page=2
and then also:
http://example.com/blog/2011/08
http://example.com/blog/2011/08?page=1
http://example.com/blog/2011/08?page=2
http://example.com/blog/2011/08?page=3 (empty page)
http://example.com/blog/2011/08?page=4 (empty page)
This continues for different years, months and blog entries, and creates hundreds of errors. What's the best way to handle this for the SEOmoz report and the search engines? Should I rel=canonical the /blog page? I think this would probably affect the SEO of all the blog entries. Use robots.txt? Sitemaps? URL parameters in the search engines? Appreciate any assistance/recommendations. Thanks in advance, Ian
Technical SEO | iragless
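As an illustration of the options Ian lists, the /blog, /blog/ and /blog/?page=1 variants are often consolidated with a canonical tag, while empty paginated archive pages can be kept out of the index with a robots meta tag. A hedged sketch using the example.com URLs from the question:

    <!-- On http://example.com/blog and http://example.com/blog/?page=1
         (duplicates of the main blog page), point the canonical at the
         preferred URL: -->
    <link rel="canonical" href="http://example.com/blog/" />

    <!-- On empty archive pages such as http://example.com/blog/2011/08?page=4,
         one option is to keep them crawlable but out of the index: -->
    <meta name="robots" content="noindex, follow">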
Help with Webmaster Tools "Not Followed" Errors
I have been doing a bunch of 301 redirects on my site to address 404 pages, and in each case I check the redirect to make sure it works. I have also been using tools like Xenu to make sure that I'm not linking to 404 or 301 content from my site. However, on Friday I started getting "Not Followed" errors in GWT. When I check the URL that they tell me provided the error, it seems to redirect correctly. One example is this... http://www.mybinding.com/.sc/ms/dd/ee/48738/Astrobrights-Pulsar-Pink-10-x-13-65lb-Cover-50pk I tried a redirect tracer and it reports the redirect correctly. Fetch as Googlebot returns the correct page. Fetch as Bingbot in the new Bing Webmaster Tools shows that it redirects to the correct page, but there is a small note that says "Status: Redirection limit reached". I see this on all of the redirects that I check in the Bing webmaster portal. Do I have something misconfigured? Can anyone give me a hint on how to troubleshoot this type of issue? Thanks, Jeff
Technical SEO | mybinding1
WordPress SEO Errors - Any advice?
Hi all! My site is on the WP platform and I'm having a crawl error. Wondering if you guys could possibly help me figure out what's going on? I have a good number of 404 errors where the links seem to be appended, and I can't figure out why. I've scoured my individual posts and cannot seem to find the broken link. The crawl error looks a bit like this: http://preciousthingsphotography.com/2007/12/10/chicago-family-photographer-welcome/http:%2F%2Fpreciousthingsphotography.com%2F2007%2F12%2F10%2Fchicago-family-photographer-welcome%2F You can see that my original link is somehow being doubled, with the slashes being replaced. This is happening on all of my posts. Any ideas as to what could be going on? Thanks so much!
Technical SEO | ptpgen
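One common cause of appended, URL-encoded 404s like that one (an assumption here, since the actual theme and plugins aren't shown) is a link somewhere in the template whose href contains the URL-encoded permalink instead of the plain URL, which crawlers then resolve relative to the post. A hypothetical illustration:

    <!-- Hypothetical: an href built from an already URL-encoded permalink... -->
    <a href="http:%2F%2Fexample.com%2F2007%2F12%2F10%2Fsome-post%2F">Share this post</a>

    <!-- ...can be resolved by a crawler against the current post URL, producing
         a 404 such as:
         http://example.com/2007/12/10/some-post/http:%2F%2Fexample.com%2F2007%2F12%2F10%2Fsome-post%2F -->

    <!-- The plain, unencoded href avoids the appended URL: -->
    <a href="http://example.com/2007/12/10/some-post/">Share this post</a>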
Issue with 'Crawl Errors' in Webmaster Tools
Have an issue with a large number of 'Not Found' webpages being listed in Webmaster Tools. In the 'Detected' column, the dates are recent (May 1st - 15th). However, clicking into the 'Linked From' column, all of the link sources are old, many from 2009-10. Furthermore, I have checked a large number of the source pages to double-check that the links don't still exist, and they don't, as I expected. Firstly, I am concerned that Google thinks there is a vast number of broken links on this site when in fact there is not. Secondly, if the errors do not actually exist (and never actually have), why do they remain listed in Webmaster Tools, which claims they were found again this month?! Thirdly, what's the best and quickest way of getting rid of these errors? Google advises that using the 'URL Removal Tool' will only remove the pages from the Google index, NOT from the crawl errors. The info is that if they keep getting 404 returns, they will automatically get removed. Well, I don't know how many times they need to get that 404 in order to get rid of a URL and link that haven't existed for 18-24 months! Thanks.
Technical SEO | RiceMedia