404 Crawl Diagnostics with void(0) appended to URL
-
Hello
I am getting loads of 404s reported in my Crawl report, all with void(0) appended at the end. For example:
http://lfs.org.uk/films-and-filmmakers/watch-our-films/1289/void(0)
The site is running on Drupal 7. Has anyone come across this before?
Kind Regards
Moshe
-
If you are using WordPress, the void(0) problem may come from your theme, or the javascript void(0) code may not be set up correctly in your template file.
See the javascript void(0) link examples on this page of wikihat => Kickass Torrents
See the "click to open" button there.
-
Hi Moshe! Did this ever work out for you?
-
Hi Kane
Many thanks for the links. The Google forum link seems to point in the right direction. I am not the developer of the site, but I will forward the link to them in the hope that they can help (it's been 3 years since the site went live).
Many thanks
Moshe
-
Hi Dimitri
I am pretty sure it is simply that something is producing links with void(0) appended. The link I used in my original post should actually be:
http://lfs.org.uk/films-and-filmmakers/watch-our-films/1289/tongues
The Moz crawl report says that the above page is the referrer for
http://lfs.org.uk/films-and-filmmakers/watch-our-films/1289/void(0)
This repeats itself on many pages across the site.
Many thanks
Moshe
-
Hi Moshe,
My guess is that somewhere on the site, someone created a pop-up window or another load effect and used void(0) to create the link. The better practice is to create a normal link and control what happens when it's clicked with JavaScript (a rough sketch is at the end of this post). You could also add rel="nofollow" to those links, but that's less ideal than the first option.
These explain the issue as well for additional reference:
https://productforums.google.com/forum/#!topic/webmasters/3ShUdX7_GqQ
This answer on Stack Overflow: http://stackoverflow.com/posts/134957/revisions
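For what it's worth, a rough sketch of the pattern Kane describes might look like the following. Only the /tongues URL comes from this thread; the element id and the openTrailerPopup() helper are made-up names for illustration.

```html
<!-- Anti-pattern: some crawlers treat void(0) as a relative path and
     report URLs such as /films-and-filmmakers/watch-our-films/1289/void(0) -->
<a href="javascript:void(0)" onclick="openTrailerPopup()">Watch the film</a>

<!-- Better: a real, crawlable href; JavaScript cancels the default navigation -->
<a id="watch-film" href="/films-and-filmmakers/watch-our-films/1289/tongues">Watch the film</a>

<script>
  // Hypothetical pop-up logic; in practice this would be whatever the theme
  // currently does in its onclick handler.
  function openTrailerPopup() {
    window.open('/films-and-filmmakers/watch-our-films/1289/tongues', 'film', 'width=640,height=360');
  }

  document.getElementById('watch-film').addEventListener('click', function (event) {
    event.preventDefault(); // stop the normal navigation so the pop-up can take over
    openTrailerPopup();
  });
</script>
```

With a real href in place, crawlers (and users without JavaScript) still reach a valid page, so the phantom .../void(0) URLs should stop showing up in the crawl report.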
-
Hi there.
It seems that there is something wrong with the JavaScript, because void(0) looks like a piece of JS code. However, even if I remove the void part, the page still doesn't exist. Are you sure it's just a "void(0)" problem?
Related Questions
-
What is the best way to treat URLs ending in /?s=
Hi community, I'm going through the list of crawl errors visible in my Moz dashboard and there are a few URLs ending in /?s=. How should I treat these URLs? Redirects? Thanks for any help.
Moz Pro | Easigrass
-
Pages with URL Too Long
Hello Mozzers! Moz keeps kindly telling me that my URLs are too long. However, this is largely due to the structure of an e-commerce site, which has to include 'brand', 'range', and 'product' keywords. For example:
https://www.choicefurnituresuperstore.co.uk/Devonshire-Rustic-Oak-Bedside-Cabinet-1-Drawer-p40668.html
Moz recommends no more than 75 characters, which leaves 25-30 characters for both the brand name and the product name. Questions:
If it is an issue, how do I fix it on my site?
If it's not an issue, how can we turn off this alert from Moz?
Anyone know how big an issue URL length is as a ranking factor? I thought it was pretty low.
Moz Pro | tigersohelll
-
Woocommerce filter urls showing in crawl results, but not indexed?
I'm getting hundreds of Duplicate Content warnings for a WooCommerce store I have. The URLs are … etc. These don't seem to be indexed in Google, and the canonical points to the shop base URL. They seem to be simply URLs generated by the WooCommerce filters. Is this just a false alarm from the Moz crawl?
Moz Pro | JustinMurray
-
Block Moz (or any other robot) from crawling pages with specific URLs
Hello! Moz reports that my site has around 380 duplicate content pages. Most of them come from dynamically generated URLs that have some specific parameters. I have sorted this out for Google in Webmaster Tools (the new Google Search Console) by blocking the pages with these parameters. However, Moz is still reporting the same number of duplicate content pages and, to stop it, I know I must use robots.txt. The trick is that I don't want to block every page, just the pages with specific parameters. I want to do this because among these 380 pages there are some other pages with no parameters (or different parameters) that I need to take care of. Basically, I need to clean this list to be able to use the feature properly in the future. I have read through the Moz forums and found a few topics related to this, but there is no clear answer on how to block only pages with specific URLs. Therefore, I have done my research and come up with these lines for robots.txt:
User-agent: dotbot
Disallow: /*numberOfStars=0
User-agent: rogerbot
Disallow: /*numberOfStars=0
My questions: 1. Are the above lines correct, and would they block Moz (dotbot and rogerbot) from crawling only pages that have the numberOfStars=0 parameter in their URLs, leaving other pages intact? 2. Do I need an empty line between the two groups (between "Disallow: /*numberOfStars=0" and "User-agent: rogerbot"), or does it even matter? I think this would help many people, as there is no clear answer on how to block crawling only pages with specific URLs. Moreover, this should be valid for any robot out there. Thank you for your help!
Moz Pro | Blacktie
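For illustration only, here is how those directives are conventionally laid out, assuming numberOfStars=0 is the only pattern you want to exclude and that the crawlers in question honour * wildcard matching:

```
# Keep Moz's crawlers away from URLs containing the numberOfStars=0 parameter,
# while leaving every other URL crawlable.
User-agent: dotbot
Disallow: /*numberOfStars=0

User-agent: rogerbot
Disallow: /*numberOfStars=0
```

The blank line between the two groups is the record separator from the original robots.txt convention; most modern parsers are lenient about it, but keeping it makes each User-agent group unambiguous.
-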
What to do with a site of >50,000 pages vs. crawl limit?
What happens if you have a site in your Moz Pro campaign that has more than 50,000 pages? Would it be better to choose a sub-folder of the site to get a thorough look at that sub-folder?
I have a few different large government websites that I'm tracking to see how they are faring in rankings and SEO. They are not my own websites. I want to see how these agencies are doing compared to what the public searches for on technical topics and social issues that the agencies manage. I'm an academic looking at science communication. I am in the process of re-setting up my campaigns to get better data than I have been getting -- I am a newbie to SEO and the campaigns I slapped together a few months ago need to be set up better, such as all on the same day, making sure I've set them to include www or not for what ranks, refining my keywords, etc. I am stumped on what to do about the agency websites being really huge, and what the options are for getting good data in light of the 50,000-page crawl limit.
Here is an example of what I mean: to see how the EPA is doing in searches related to air quality, ideally I'd track all of the EPA's web presence. www.epa.gov has 560,000 pages -- if I put in www.epa.gov for a campaign, what happens with the site having so many more pages than the 50,000-page crawl limit? What do I miss out on? Can I "trust" what I get? www.epa.gov/air has only 1,450 pages, so if I choose this for what I track in a campaign, the crawl will cover that sub-folder completely and I'll get a complete picture of this air-focused sub-folder ... but (1) I'll miss out on air-related pages in other sub-folders of www.epa.gov, and (2) it seems like I have a lot of the 50,000-page crawl limit that I'm not using and could be using. (However, maybe that's not quite true - I'd also be tracking other sites as competitors - e.g. non-profits that advocate for air quality, industry air quality sites - and maybe those competitors count towards the 50,000-page crawl limit and would get me up to it? How do the competitors you choose figure into the crawl limit?)
Any opinions on what I should do in general in this kind of situation? The small sub-folder vs. the full humongous site vs. some other way to go here that I'm not thinking of?
Moz Pro | scienceisrad
-
How to track data from old site and new site with the same URL?
We are launching a new site within the next 48 hours. We have already purchased the 30-day trial and we will continue to use this tool once the new site is launched. Just looking for some tips and/or best practices so we can compare the old data vs. the new data moving forward. Thank you in advance for your response(s). PB3
Moz Pro | Issuer_Direct
-
Problem crawling a website with age verification page.
Hi everyone, I need your help urgently. I need to crawl a website that first shows a page where you have to enter your age for verification, and only after that are you redirected to the website itself. My problem is that SEOmoz crawls only that first page, not the whole website. How can I crawl the whole website? Do you need me to upload a link to the website? Thank you very much. Catalin
Moz Pro | catalinmoraru
-
Increase of 404 error after change of encoding
Hello, we have just launched a new version of our website with a new UTF-8 encoding. The thing is, we use commas as separators, and since the new website went live I have seen a massive increase in 404 errors for comma-encoded URLs. Here is an example:
http://web.bons-de-reduction.com/annuaire%2C321-sticker%2Csite%2Cpromotions%2C5941.html
instead of:
http://web.bons-de-reduction.com/annuaire,321-sticker,site,promotions,5941.html
I checked with Screaming Frog SEO Spider and Xenu, and I can't find any encoded URLs. Does anyone have a clue how to fix this? Thanks
Moz Pro | RetailMeNotFr