Why won't the Moz plug-in's "Analyze Page" tool read data on a BigCommerce site?
-
We love our new BigCommerce site; just curious as to what the hang-up is.
-
I know several developers, but the main concern is the platform, BigCommerce. I am not offering feedback on the platform itself, but the first decision you need to make is whether you are committed to sticking with BigCommerce.
If you wish to keep the site built on BigCommerce, my recommendation would be to seek out a developer who has specific experience with that platform. There are tons of developers and companies who are all too willing to accept any web development work. You want a specialist who can say, "I have built dozens of BigCommerce sites; that's mainly what I do."
-
Thanks Ryan. As I'm not a developer, I wouldn't have known how to troubleshoot this. I had suspicions that things were not all good, as I noticed some very slow page load speeds.
So basically, my client's developer hacked up the code very nicely.
Know any developers interested in getting involved with this project? It seems I'll need to advise my client to fire yet another developer.
Best, Stephen
-
The "Analyze Page" function works fine on BigCommerce sites. I checked a couple of other sites and it worked perfectly. For example, http://tricejewelers.com/ is a BigCommerce site.
The difference I see on the particular site you shared is that it has the largest number of coding errors I have ever seen on a web page: http://validator.w3.org/check?uri=http%3A%2F%2Fwww.asseenontvfrenzies.com%2Fyonanas%2F&charset=%28detect+automatically%29&doctype=Inline&group=0&user-agent=W3C_Validator%2F1.2
When I try to use "Analyze Page" in Firefox, it hangs. When I use Chrome, I see results, but they are for the social plugins, not the page itself. I suspect the root issue is the coding errors. For a more definitive answer, you can open a ticket with the help desk at help@seomoz.org.
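The validator link above is the classic W3C service; if you want a scriptable count of those coding errors, here is a minimal sketch against the W3C's newer Nu HTML Checker, which exposes a JSON interface. The requests library is assumed, and the User-Agent string is just a made-up courtesy identifier.

    import requests

    def count_markup_errors(page_url):
        # Ask the Nu HTML Checker to validate the page and return JSON results.
        resp = requests.get(
            "https://validator.w3.org/nu/",
            params={"doc": page_url, "out": "json"},
            headers={"User-Agent": "markup-error-check/0.1"},  # hypothetical identifier
            timeout=30,
        )
        resp.raise_for_status()
        # Each message carries a "type"; the "error" entries are the coding errors.
        return sum(1 for m in resp.json().get("messages", []) if m.get("type") == "error")

    print(count_markup_errors("http://tricejewelers.com/"))

A page that validates cleanly should return a count near zero; a page like the one in question would return a very large number.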
Good luck.
-
Can you share the link to the page?
"Analyze Page" does not work if a page is not fully loaded. I have experienced issues in that regard, but refreshing the page usually makes it work fine.
-
Related Questions
-
Is My Site Structure Suppressing Product Pages?
Hey guys, I've built some ecommerce sites using WooCommerce, and I've been auditing some of the sites to see why I'm not getting more traffic to my product pages. I have several informational blog posts and resources that are getting a lot of traffic, but my product pages aren't ranking very well. There are two things that I think could be causing the issue, but I could use some extra eyes on this.
First, products are listed several sub-categories down in the structure of the site. For example, this product is listed under a fifth-level sub-category: /product-category/ -> FIRE SAFETY » FIRE EXTINGUISHERS » PORTABLE FIRE EXTINGUISHERS » FIRE EXTINGUISHER ACCESSORIES » FIRE EXTINGUISHER BRACKETS
Second, I checked what Google has indexed under the /product/ directory, which is the default format for WooCommerce products. It looks like all of my products are given lower authority than other top-level directories, including /product-tag/ and /product-category/.
It seems like an adjustment to how my products are structured in the site might go a long way. If you have any experience with this and could weigh in on it, I'd appreciate it.
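Not part of the original question, but one quick way to make the depth problem concrete is to pull the sitemap and bucket every URL by how many path segments it has. This is a rough sketch using only the standard library; the sitemap URL is a placeholder, and it assumes a flat sitemap.xml rather than a sitemap index.

    import collections
    import urllib.request
    import xml.etree.ElementTree as ET
    from urllib.parse import urlparse

    def depth_histogram(sitemap_url):
        # Standard sitemap namespace; WooCommerce/SEO-plugin sitemaps use it too.
        ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
        with urllib.request.urlopen(sitemap_url) as resp:
            tree = ET.parse(resp)
        counts = collections.Counter()
        for loc in tree.findall(".//sm:loc", ns):
            # Depth = number of path segments: /product-category/a/b/c/ has depth 4.
            segments = [s for s in urlparse(loc.text.strip()).path.split("/") if s]
            counts[len(segments)] += 1
        return dict(sorted(counts.items()))

    print(depth_histogram("https://example.com/sitemap.xml"))

If most product URLs land five or six segments deep while the blog posts sit at depth one or two, that alone is a plausible explanation for the authority gap.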
Technical SEO | robbinsinternational
-
Can using the "//" shorthand (instead of "http:") inside a canonical tag cause harm?
Hi, I am planning to launch a new site and, shortly after, to move to HTTPS. To avoid having to change over 5,000 canonical tags, the webmaster suggested we use a protocol-relative "//" URL inside the rel=canonical instead of the absolute path. Would that do any damage or be a problem? For example: <link rel="canonical" href="//…/oranges-south-dakota" />
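Not from the thread, but a quick way to spot-check which canonical a page actually serves after such a change is a sketch like this; requests and BeautifulSoup are assumed, and the URL is a placeholder built from the path fragment above.

    import requests
    from bs4 import BeautifulSoup

    def get_canonical(page_url):
        html = requests.get(page_url, timeout=30).text
        link = BeautifulSoup(html, "html.parser").find("link", rel="canonical")
        return link["href"] if link else None

    # A protocol-relative href like "//example.com/oranges-south-dakota" resolves
    # to http or https depending on the scheme the page itself was served over.
    print(get_canonical("https://example.com/oranges-south-dakota"))

That resolution behavior is exactly why the shorthand is attractive before an HTTPS migration, and also why it is worth verifying that crawlers record the canonical you intend.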
Technical SEO | Kung_fu_Panda
-
Why Can't Googlebot Fetch Its Own Map on Our Site?
I created a custom map using the Google Maps creator and embedded it on our site. However, when I ran fetch and render through Search Console, it said the map was blocked by our robots.txt file. I read in the Search Console help section: "For resources blocked by robots.txt files that you don't own, reach out to the resource site owners and ask them to unblock those resources to Googlebot." I did not set up our robots.txt file. However, I can't imagine it would be set up to block Google from crawling a map. I will look into that, but before I go messing with it (since I'm not familiar with it): does Google automatically block their maps from their own Googlebot? Has anyone encountered this before? Here is what the robots.txt file says in Search Console:
User-agent: *
Allow: /maps/api/js?
Allow: /maps/api/js/DirectionsService.Route
Allow: /maps/api/js/DistanceMatrixService.GetDistanceMatrix
Allow: /maps/api/js/ElevationService.GetElevationForLine
Allow: /maps/api/js/GeocodeService.Search
Allow: /maps/api/js/KmlOverlayService.GetFeature
Allow: /maps/api/js/KmlOverlayService.GetOverlays
Allow: /maps/api/js/LayersService.GetFeature
Disallow: /
Any assistance would be greatly appreciated. Thanks, Ruben
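As a quick sanity check, Python's standard-library robots.txt parser can be fed an abbreviated version of the rules quoted above to see what they permit. This is only a sketch; the stdlib parser approximates rather than exactly replicates Googlebot's longest-match behavior.

    from urllib.robotparser import RobotFileParser

    # Abbreviated version of the rules quoted in the question.
    rules = [
        "User-agent: *",
        "Allow: /maps/api/js?",
        "Disallow: /",
    ]

    rp = RobotFileParser()
    rp.parse(rules)
    print(rp.can_fetch("Googlebot", "/maps/api/js?v=3"))     # True: the API script is allowed
    print(rp.can_fetch("Googlebot", "/maps/d/embed?mid=x"))  # False: everything else is blocked

So the rules look deliberate: the Maps JavaScript API resources are crawlable, but the rest of the /maps space is disallowed, which appears to be exactly what fetch and render is reporting for an embedded custom map.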
Technical SEO | KempRugeLawGroup
-
Merging sites, ensuring traffic doesn't die
Wondering if I could get a second opinion on this, please. I have just taken on a new client; they own about six different niche car experience websites (hire an Aston Martin for the day, that type of thing). All six sites seem to perform reasonably well for the brand of car they deal with, and the average DA of the sites is about 24. The client wishes to move all of these different manufacturers into one site with a section for each; they can then also target more generic experience-day keywords.
The obvious way of dealing with this move would be to 301 the old sites to the relevant places on the new site and wait for that to rank. However, looking at the backlink profiles of the niche sites, they seem to have very few backlinks, and I feel the reason they rank so well for the individual manufacturers is that they all feature the name in the domain. Not exact match, but the name is there.
If I am thinking right, with the 301 we want to tell Google page x is now page y, index this one instead. Because the new site has a more generic name, I don't think it will enjoy any of the domain keyword benefits which are helping the sub-sites, and as a result I expect the rankings and traffic to drop (at least in the short term). Am I reading this correctly? Would people use a 301 in this case?
The easiest thing to do would be to leave the six sub-sites up and running on their own domains and launch the new site alongside them, but the client doesn't want this. Thanks, Carl
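If the consolidation does go ahead, a short script along these lines can verify that each old URL 301s straight to the intended section of the new site. The domains and mapping here are made up for illustration; requests is assumed.

    import requests

    # Hypothetical mapping of old niche-site URLs to new-site sections.
    mapping = {
        "http://aston-hire.example/": "https://experience-days.example/aston-martin/",
        "http://ferrari-hire.example/": "https://experience-days.example/ferrari/",
    }

    for old, expected in mapping.items():
        resp = requests.get(old, allow_redirects=True, timeout=30)
        hops = [r.status_code for r in resp.history]  # e.g. [301]
        ok = resp.url == expected and hops[:1] == [301]
        print(old, "->", resp.url, hops, "OK" if ok else "CHECK")

Chained redirects or accidental 302s show up immediately in the hops list, which is worth catching before Google recrawls.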
Technical SEO | GrumpyCarl
-
Moving articles to a new site; can't 301 redirect because of Panda
I have a site that is high quality but was hit by Penguin and perhaps Panda. I want to remove some of the articles from my old site and put them on my new site. I know I can't 301 redirect them because I would be passing on the bad Google vibes. So instead, I was thinking of redirecting the old articles to a page on the old site which explains that the article has moved to the new site. I assume that's okay? I'm wondering how long I should wait between taking them down from the old site and reposting them on the new site. Do I need to wait for Google to de-index them in order not to be considered duplicate content/syndication? We'll probably reword them a bit, too; we really want to avoid Panda. Thanks!
Phil
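One way to sanity-check the waiting period is sketched below with placeholder URLs (requests and BeautifulSoup assumed): before reposting, confirm each old article URL now serves the notice page with a noindex signal, then watch for the URLs to drop out of the index. Adding a noindex to the old URLs is a step beyond what is described above, but it makes the de-indexing explicit rather than waiting it out.

    import requests
    from bs4 import BeautifulSoup

    old_urls = ["http://old-site.example/article-1/"]  # hypothetical list of old articles

    for url in old_urls:
        resp = requests.get(url, timeout=30)
        soup = BeautifulSoup(resp.text, "html.parser")
        meta = soup.find("meta", attrs={"name": "robots"})
        header = resp.headers.get("X-Robots-Tag", "")
        noindex = "noindex" in header or bool(meta and "noindex" in meta.get("content", ""))
        print(url, resp.status_code, "noindex" if noindex else "still indexable")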
Technical SEO | philray
-
Does a CMS inhibit a site's crawlability?
I smell baloney, but I could use a little backup from the community! My client was recently told by an SEO that search engines have a hard time getting to their site because using a CMS (like WordPress) doesn't allow "direct access to the HTML". Here is what they emailed my client:
"Word Press (like your site is built with) and other similar "do it yourself" web builder programs and websites are not good for search engine optimization since they do not allow direct access to the HTML. Direct HTML access is needed to input important items to enhance your websites search engine visibility, performance and creditability in order to gain higher search engine rankings."
Bots are blind to the CMS behind a page, and HTML is HTML, correct? What do you think about the information given by the other SEO?
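To back up the "HTML is HTML" point: a crawler fetching a WordPress page receives ordinary markup, exactly as it would from a hand-coded page. A trivial sketch (the URL is a placeholder; requests and BeautifulSoup assumed):

    import requests
    from bs4 import BeautifulSoup

    resp = requests.get(
        "https://example-wordpress-site.example/",
        headers={"User-Agent": "Mozilla/5.0 (compatible; demo-crawler)"},
        timeout=30,
    )
    print(resp.headers.get("Content-Type"))  # typically "text/html; charset=UTF-8"
    print(BeautifulSoup(resp.text, "html.parser").title)  # an ordinary <title> tag

The CMS assembles the HTML server-side before it is sent, so there is nothing a bot is being denied "direct access" to.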
Technical SEO | Adpearance
-
Google Has Indexed Most of My Site; Why Won't Bing?
We've got 600K+ pages indexed by Google and have submitted the same sitemap.xml files to Bing, but have only seen 100-200 pages get indexed there. Is this fairly typical? Is there anything further we can do to increase indexation on Bing?
Technical SEO | jamesti
-
Does it matter that our cached pages aren't displaying styles?
We've got pages that, when I search for them in Google and click on Cache, show NO styles, nothing from the CSS. Is there any way that could affect rankings? I don't think so, but it does fall into the category of showing one thing to the bots and another to the user, which is bad. Also, could blocking /scripts in robots.txt be preventing bots from accessing the CSS? Thanks
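On the last question: a sketch like this (the page URL is a placeholder; requests, BeautifulSoup, and the stdlib robots parser assumed) lists the stylesheets a page references and tests each against the live robots.txt, which would reveal whether a /scripts disallow is catching the CSS.

    import requests
    from bs4 import BeautifulSoup
    from urllib.parse import urljoin
    from urllib.robotparser import RobotFileParser

    page = "https://example.com/"

    rp = RobotFileParser(urljoin(page, "/robots.txt"))
    rp.read()  # fetches and parses the site's robots.txt

    soup = BeautifulSoup(requests.get(page, timeout=30).text, "html.parser")
    for link in soup.find_all("link", rel="stylesheet"):
        css_url = urljoin(page, link.get("href", ""))
        status = "fetchable" if rp.can_fetch("Googlebot", css_url) else "BLOCKED"
        print(status, css_url)

If any stylesheet comes back BLOCKED, Googlebot can't render the styled page, which would explain the bare cached copy.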
Technical SEO | poolguy