Why won't the Moz plug in "Analyze Page" tool read data on a Big Commerce site?
-
We love our new Big Commerce site, just curious what the hang-up is.
-
I know several developers but the main concern is the platform, Big Commerce. I am not offering feedback regarding the platform, but the first decision you need to make is whether you are committed to sticking with Big Commerce.
If you wish to keep the site built on Big Commerce, my recommendation would be to seek out a developer who specifically has experience working with that platform. There are tons of developers and companies who are all too willing to accept any web development work. You want a specialist who can say "I have built dozens of Big Commerce sites; that's mainly what I do."
-
Thanks Ryan. As I'm not a developer, I wouldn't have known how to troubleshoot this. I had suspicions that things were not all good, as I noticed some very slow page load speeds.
So basically, my client's developer hacked up the code very nicely.
Know any developers interested in getting involved with this project? Seems like I'll need to advise my client to fire yet another developer.
Best, Stephen
-
The Analyze Page function works fine on Big Commerce sites. I checked a couple of other sites and it worked perfectly. For example, http://tricejewelers.com/ is a Big Commerce site.
The difference I see on the particular site you shared is it has the largest number of coding errors I have ever seen on a web page. http://validator.w3.org/check?uri=http%3A%2F%2Fwww.asseenontvfrenzies.com%2Fyonanas%2F&charset=%28detect+automatically%29&doctype=Inline&group=0&user-agent=W3C_Validator%2F1.2
When I try to use Analyze Page in Firefox, it hangs. When I use Chrome, I see results, but they are for the social plugins, not the page itself. I suspect the root issue is the coding errors. For a more definitive answer, you can open a ticket with the help desk at help@seomoz.org.
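As a side note, you can get a rough local sense of how broken a page's markup is before running it through the W3C validator. The sketch below (my own illustration, not a Moz tool) uses Python's standard-library `html.parser` to flag mismatched or unclosed tags, which are exactly the kinds of errors that can trip up browser plugins; the W3C validator remains the authoritative check.

```python
from html.parser import HTMLParser

# Illustrative sketch: a rough check for unbalanced tags in HTML markup.
# It is far less thorough than https://validator.w3.org/, but it runs
# locally and catches the most common hand-hacked-template mistakes.

# Void elements never take a closing tag, so they are ignored here.
VOID_TAGS = {"br", "img", "hr", "meta", "link", "input", "area", "base",
             "col", "embed", "source", "track", "wbr"}

class TagBalanceChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.stack = []       # tags currently open, in nesting order
        self.problems = []    # human-readable descriptions of issues found

    def handle_starttag(self, tag, attrs):
        if tag not in VOID_TAGS:
            self.stack.append(tag)

    def handle_endtag(self, tag):
        if tag in VOID_TAGS:
            return
        if tag in self.stack:
            # Anything opened after this tag was never closed explicitly.
            while self.stack[-1] != tag:
                self.problems.append(f"unclosed <{self.stack.pop()}>")
            self.stack.pop()
        else:
            self.problems.append(f"stray </{tag}>")

    def close(self):
        super().close()
        # Whatever is still on the stack at end-of-document is unclosed.
        for tag in self.stack:
            self.problems.append(f"unclosed <{tag}>")
        self.stack = []

def check_markup(html):
    """Return a list of tag-balance problems found in the given HTML."""
    checker = TagBalanceChecker()
    checker.feed(html)
    checker.close()
    return checker.problems

print(check_markup("<div><p>hello<span></div>"))
```

Running `check_markup` on the fragment above reports the `<span>` and `<p>` as unclosed; a clean page returns an empty list.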
Good luck.
-
Can you share the link to the page?
Analyze Page does not work if a page is not fully loaded. I have run into issues in that regard before, but refreshing the page and rerunning the tool fixes it.