Googlebot crawl error: JavaScript method is not defined
-
Hi All,
I have a problem that has been a pain in the ****. I get tons of crawl errors in my logs from Googlebot saying a specific JavaScript method is not defined. I then go to the affected page, test it in a web browser, and the page works without any JavaScript errors.
Can someone help with resolving this issue?
Thanks in advance.
-
Can you post a log file?
If you don't want the domain shown, search & replace it with example.com.
Or show us a screenshot of the problem?
Or use Screaming Frog to run a JavaScript test?
-
I agree with Effectdigital; we would need to see a copy of the page to be able to help with that.
It's more common than you think.
Maybe share the page from Google Search Console?
Hope this helps,
Tom
-
I think, with this being such a niche query, we'd really need to see an example of a page that's triggering the error to even attempt to help!
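-
If it helps while you pull an example together: Googlebot's web rendering service has historically run an older Chrome build, so a method that works fine in your up-to-date browser can genuinely be undefined when Googlebot renders the page (newer String/Array APIs and browser-specific globals are common culprits). Below is a defensive sketch of the kind of guard that usually silences these errors; the method name `initWidget` is hypothetical, not taken from your site.

```typescript
// Test for the method before calling it, so an older rendering engine
// (like Googlebot's) degrades gracefully instead of throwing
// "method is not defined". `initWidget` is a made-up example name.
const maybeInit = (window as { initWidget?: () => void }).initWidget;

if (typeof maybeInit === "function") {
  maybeInit();
} else {
  // Googlebot lands here silently; a current real browser never should.
  console.warn("initWidget is not defined in this environment; skipping.");
}
```

Testing the affected URL with the Fetch as Google tool would confirm whether the error reproduces in Google's renderer even though it doesn't in your browser.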
-
Related Questions
-
Spike in server errors
Hi, we've recently changed shopping cart platforms. In doing so, a lot of our URLs changed, but I 301'ed all of the significant landing pages (as determined by Google Analytics) prior to the switch. However, WMT is now warning me about a spike in server errors for all the pages that no longer exist. Google is only crawling them because they used to exist, or are linked from pages that used to exist. Is this something I should worry about, or should I let it run its course?
Technical SEO | | absoauto | 0
-
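A note on the question above: in WMT, "server errors" are 5xx responses, which is different from the expected 404s for pages that genuinely no longer exist. If the new cart platform is throwing 5xx on unknown URLs, that is worth fixing: let unmapped legacy URLs return a clean 404 (or 410), which Google drops after recrawling, and 301 only the pages that have a real new home. A minimal sketch of that split, assuming an Express-style server; the paths in the map are made up:

```typescript
import express from "express";

const app = express();

// Hypothetical map of significant old cart URLs to their new homes.
const redirects: Record<string, string> = {
  "/old-cart/widgets.html": "/products/widgets",
  "/old-cart/gadgets.html": "/products/gadgets",
};

app.use((req, res, next) => {
  const target = redirects[req.path];
  if (target) {
    // Permanent redirect for pages that moved.
    res.redirect(301, target);
    return;
  }
  // Everything else falls through to normal routing; unknown legacy
  // URLs should end in a clean 404, never a 5xx "server error".
  next();
});

app.listen(3000);
```
-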
Sitemap issue - Tons of 404 errors
We've recreated a client site in a subdirectory (mysite.com/newsite) of his domain, and when it was ready to go live, we added code to the .htaccess file in order to display the revamped website on the main URL. These are the directions that were followed: http://codex.wordpress.org/Giving_WordPress_Its_Own_Directory and http://codex.wordpress.org/Moving_WordPress#When_Your_Domain_Name_or_URLs_Change. This has worked perfectly, except that we are now receiving a lot of 404 errors, and I'm wondering if this isn't the root of our evil. This is a self-hosted WordPress website, and we are actively using the WordPress SEO plugin, which creates multiple sitemap files with only 50 links in each. The sitemap_index.xml file tests well in Google Analytics but is pulling a number of links from the subdirectory folder. I'm wondering if it really is the manner in which we made the site live that is our issue, or if there is another problem that I cannot see yet. What is the best way to attack this issue? Any clues? The site in question is www.atozqualityfencing.com. https://wordpress.org/plugins/wordpress-seo/
Technical SEO | | JanetJ | 0
-
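For the sitemap question above: one way to confirm whether the subdirectory is leaking into the sitemap is to pull every URL out of the sitemap index and flag anything still pointing at the /newsite/ path. A quick diagnostic sketch (Node 18+ with global fetch; the /newsite/ segment is the placeholder from the post):

```typescript
// Walk a WordPress sitemap index and flag URLs that still point at the
// staging subdirectory. Diagnostic sketch only; adjust paths as needed.
const INDEX = "https://www.atozqualityfencing.com/sitemap_index.xml";

const extractLocs = (xml: string): string[] =>
  [...xml.matchAll(/<loc>(.*?)<\/loc>/g)].map((m) => m[1]);

const index = await (await fetch(INDEX)).text();

for (const sitemapUrl of extractLocs(index)) {
  const sitemap = await (await fetch(sitemapUrl)).text();
  for (const url of extractLocs(sitemap)) {
    if (url.includes("/newsite/")) {
      // Any hit means WordPress is still generating subdirectory URLs,
      // which would explain the 404s being reported.
      console.log("Leaked subdirectory URL:", url);
    }
  }
}
```
-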
Strange 404 Error (Answered)
Hi everyone! I recently took over a new account, and while running an initial crawl on the site, a weird 404 error popped up on these URLs:
http://www.directcolors.com/products/liquid-colored-antique/top
http://www.directcolors.com/applications/concrete-antiquing/top
http://www.directcolors.com/applications/concrete-countertops/top
I understand that **top** could be referring to an actual link that brings users to the top of a page, but on these pages there is no such link. Am I missing something?
Technical SEO | | rblake | 1
-
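On the /top URLs above: this pattern is usually a relative-link artifact. A "back to top" link written as href="top" (instead of href="#top") gets resolved against the current directory, inventing a URL that never existed; the offending link may live in a template or script rather than anything visible on the page. The standard URL API shows the difference:

```typescript
// href="#top" is a fragment: it stays on the same page.
// href="top" is a relative path: it resolves to a brand-new URL.
const page = "http://www.directcolors.com/products/liquid-colored-antique/";

console.log(new URL("#top", page).href);
// http://www.directcolors.com/products/liquid-colored-antique/#top

console.log(new URL("top", page).href);
// http://www.directcolors.com/products/liquid-colored-antique/top  <- the 404
```
-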
Website not crawled
I added the website www.nsale.in in Add Campaign, and it shows only 1 page crawled. It's working fine for other sites; any idea why it failed?
Technical SEO | | Dhinesh | 0
-
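For the single-page crawl above, the first things to rule out are a blocking robots.txt, a meta robots noindex/nofollow, or navigation links that only exist after JavaScript runs. A quick check sketch (Node 18+, using the domain from the post):

```typescript
// Fetch robots.txt and the homepage, then look for obvious crawl blockers.
const site = "https://www.nsale.in";

const robots = await fetch(`${site}/robots.txt`);
console.log("robots.txt status:", robots.status);
console.log(await robots.text());

const home = await fetch(site);
const html = await home.text();

// A noindex/nofollow meta tag or X-Robots-Tag header would explain a
// crawl that stops after the first page; so would a page whose links
// are only injected by JavaScript (few <a> tags in the raw HTML).
console.log("X-Robots-Tag:", home.headers.get("x-robots-tag"));
const meta = html.match(/<meta[^>]+name=["']robots["'][^>]*>/i);
console.log("meta robots:", meta ? meta[0] : "none found");
console.log("anchor tags in raw HTML:", (html.match(/<a\s/gi) ?? []).length);
```
-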
Webmaster Tools crawl stats
Hi. I have a client site that was having approximately 30-50 pages crawled regularly from launch up until the end of January. On the 21st of January, the crawled pages dropped significantly from that average to about 11-20 pages per day. This coincided with a massive rankings drop on the 22nd, which I initially thought was Panda-related, although it later turned out the hosts had changed the DNS; exactly a week after fixing it, the rankings returned, so I think that was the cause, not Panda. However, the crawl rate still hasn't returned to the previous average and is still running at 10-20 pages per day rather than 30-50. Does anyone have any ideas why? I have since added a sitemap, but that hasn't increased the crawl rate. Some further info, if it helps: the Index Status section says 48 pages ever crawled, with 37 pages indexed. There are 48 pages on the site. The sitemap section says 37 submitted, with 35 indexed; I would have thought a dynamic sitemap would submit all URLs. Any clarity on the above would be much appreciated. Cheers, Dan
Technical SEO | | Dan-Lawrence | 0
-
Help with strange 404 Errors.
For the most part I have never had trouble tracking down 404s. Usually it's simply a broken link, but lately I have been getting these strange errors: http://gridironexperts.com/http%3A/www.nfl.com/gamecenter?game_id=29528&season=2008&displayPage=tab_gamecenter/ What does %C2%94 represent? The error always points to NFL.com, but we don't link to them... like, ever. Can I just 404 http://gridironexperts.com// to fix the problem, as all 404s start with this weird %C2%94 error? Is this error even on my site? Is it in the backend... a virus? Thanks, -Mike
Technical SEO | | MikePatch | 0
-
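On the %C2%94 question above: that sequence is the UTF-8 percent-encoding of U+0094, an invisible C1 control character. Byte 0x94 is the right curly quote (”) in Windows-1252, so these URLs are almost certainly links on some external page where a smart quote got mangled into the href; the http%3A/ prefix likewise suggests a malformed absolute link being resolved as a relative path against your domain. Easy to verify:

```typescript
// %C2%94 decodes (as UTF-8) to the single control character U+0094 --
// typically a Windows-1252 curly quote that was mis-encoded in a link.
const decoded = decodeURIComponent("%C2%94");

console.log(decoded.length); // 1
console.log("U+" + decoded.codePointAt(0)!.toString(16).padStart(4, "0")); // U+0094
```
-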
Location Based Content / Googlebot
Our website has local content specialized to specific cities and states. The URL structure of this content is as follows: www.root.com/seattle and www.root.com/washington. When a user comes to a page, we auto-detect their IP and send them directly to the relevant location-based page, much the way that Yelp does. Unfortunately, what appears to be occurring is that Google comes in to our site from one of its data centers, such as San Jose, and is being routed to the San Jose page. When a user does a search for relevant keywords, in the SERPs they are being sent to the location pages matching where the bots came in from. If we turn off the auto-geo, we think that Google might crawl our site better, but users would then be shown less relevant content on landing. What's the win/win situation here? Also, we appear to have some odd location/destination pages ranking high in the SERPs; in other words, locations that don't appear to be one of Google's data centers. No idea why this might be happening. Suggestions?
Technical SEO | | Allstar0 -
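-
On the win/win above: the usual resolution is to stop hard-redirecting on IP and serve whatever URL was requested to everyone, bots included, demoting geo-detection to a suggestion the visitor can accept. That way Googlebot (which often geolocates to San Jose) can reach every city page, and users still get a one-click path to their local content. A rough sketch of the idea, assuming an Express-style app; the lookupRegion helper is hypothetical:

```typescript
import express from "express";

const app = express();

// Hypothetical IP-to-region lookup; swap in a real geo service.
function lookupRegion(ip: string | undefined): string | null {
  return null; // placeholder
}

app.get("/:location", (req, res) => {
  const detected = lookupRegion(req.ip);

  // Always serve the URL that was requested -- never bounce the request
  // elsewhere -- so every location page stays crawlable from any IP.
  res.render("location", {
    location: req.params.location,
    // Offer the detected region as a banner/link suggestion instead of
    // a forced redirect that bots (and users) cannot refuse.
    suggestion: detected && detected !== req.params.location ? detected : null,
  });
});

app.listen(3000);
```
-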
Video Link Bait - Protections, Formats, Methods
I wanted to find out if anyone could recommend the best technique for protecting a video on a website. We have been able to secure a private interview with one of the largest manufacturers in this customer's industry. Unfortunately, the interview was done impromptu with an iPhone, but the recording is not too bad. The volume is low, but we will also type out the content of the interview on our blog, right beneath where the video is posted for viewing. We were hoping to only post this video on our blog, so that people within our industry will link to that page, increasing our link power. But we wanted to make sure that people couldn't steal the video and post it in other places, which would cost us our links. What is the best method for accomplishing this? What video format would be ideal to accommodate Windows, Mac, IE, Firefox, Safari, mobile phones, iPads, etc.? Are we on the right track in how to most effectively use this video for our own benefit?
Technical SEO | | JerDoggMckoy | 0
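-
On the protection question above: nothing stops a determined copier outright, but a common soft protection is serving the file through short-lived signed URLs, so a hot-linked copy of the link dies quickly and viewers have to come to the blog post to watch. For the format question, MP4 (H.264) in an HTML5 video element covers the browsers and devices listed. A minimal sketch of HMAC-signed, expiring URLs using Node's crypto module; the secret and path are made up:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

const SECRET = "replace-with-a-real-secret"; // hypothetical

// Issue a video link that stops working after `ttlSeconds`.
function signVideoUrl(path: string, ttlSeconds: number): string {
  const expires = Math.floor(Date.now() / 1000) + ttlSeconds;
  const sig = createHmac("sha256", SECRET)
    .update(`${path}:${expires}`)
    .digest("hex");
  return `${path}?expires=${expires}&sig=${sig}`;
}

// Verify before streaming; reject expired or tampered links.
function verifyVideoUrl(path: string, expires: number, sig: string): boolean {
  if (expires < Math.floor(Date.now() / 1000)) return false;
  const expected = createHmac("sha256", SECRET)
    .update(`${path}:${expires}`)
    .digest("hex");
  return (
    sig.length === expected.length &&
    timingSafeEqual(Buffer.from(sig), Buffer.from(expected))
  );
}

console.log(signVideoUrl("/videos/interview.mp4", 3600));
```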