"/blogroll" causing 404 error
-
I'm running a campaign, and the crawl report for my site returned a lot of 4xx errors. When I look at the URLs, they all end in "/blogroll", like:
mysite.com/post-number-1/blogroll
mysite.com/post-number-2/blogroll
And so on, for pretty much all the pages. The thing is, I removed the blogroll widget completely, so I have no idea what could possibly be pointing to links like that.
Is there anything I need to fix on the site?
Thanks
-
Hi Andrea
Are you all set with this? The transfer may have had something to do with it, but the main thing now is to follow Adam's good advice: find the source of the 404 links and fix them on your site. If the URLs are indexed or have backlinks from elsewhere on the web, you should also 301 them to an existing page.
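If it helps with building that redirect list, here's a rough sketch of one way to do it (just an illustration in Python with the requests package; the URLs are placeholders for whatever your crawl report shows). It strips the stray trailing segment off each 404 URL and checks that the parent URL is a live page you could 301 to:
```python
# A sketch, not a finished tool: turn the crawl report's 404 URLs into a
# redirect map by stripping the stray trailing segment and confirming the
# parent URL returns 200. Assumes the requests package; URLs are placeholders.
import requests

broken_urls = [
    "https://mysite.com/post-number-1/blogroll",
    "https://mysite.com/post-number-2/blogroll",
    "https://mysite.com/post-number-1/privacy-policy",
]

redirect_map = {}
for url in broken_urls:
    target = url.rsplit("/", 1)[0] + "/"   # drop the trailing /blogroll etc.
    try:
        ok = requests.head(target, allow_redirects=True, timeout=10).status_code == 200
    except requests.RequestException:
        ok = False
    if ok:
        redirect_map[url] = target         # safe candidate to 301 to
    else:
        print(f"No obvious target for {url} - pick one by hand")

for old, new in redirect_map.items():
    print(f"301: {old} -> {new}")
```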
Let us know if you still need help!
-Dan
-
OK, so I crawled my site with Screaming Frog and found the same errors. I also found that the "privacy policy" page is causing the same kind of 404, with the same type of URL: "mysite.com/post-number-1/privacy-policy" (the SEOmoz crawler had detected those as well; I just hadn't noticed).
The privacy policy page itself is published and working, and I can't remove it, as I wouldn't be compliant with the Google AdSense policy.
A couple more things, though:
- I checked a couple of those 404 URLs in Google with the "site:" command, and they're not indexed. I think those pages simply don't exist.
- The blogroll was in the sidebar, and the privacy policy link is in the footer, which means both of them are site-wide.
- I had a site before, then deleted it and started my current one from scratch, importing all the content from WordPress to WordPress. Maybe this transfer has something to do with the issue?
-
Sorry Ben, but I have to disagree with you here. That is very bad practice and very poor advice; you shouldn't just ignore 404 pages from a site crawl.
Really, the only time you should let pages just 404 is when Google has indexed them, there is no relevant page on your site to redirect them to, there are no high-value links pointing to them, and they are not being linked to from within your site.
In this case, however, the 404 pages are being linked to from within the site, which means link value is being passed to them that could otherwise be passed to live pages.
Best practice in this situation is to fix the internal links that point to the 404 pages and 301 redirect the 404 URLs to relevant pages on the site.
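Once the redirects are in place, it only takes a minute to confirm they behave as intended. Something like this rough sketch (Python with the requests package; the URL-to-target mapping is a placeholder you would fill in from your own crawl report) will flag anything that isn't returning a 301 to the right page:
```python
# Sketch: confirm each old URL now returns a 301 to its intended target.
# Assumes the requests package; the mapping below is purely a placeholder.
from urllib.parse import urljoin

import requests

expected = {
    "https://mysite.com/post-number-1/blogroll": "https://mysite.com/post-number-1/",
    "https://mysite.com/post-number-1/privacy-policy": "https://mysite.com/privacy-policy/",
}

for old, target in expected.items():
    try:
        resp = requests.get(old, allow_redirects=False, timeout=10)
    except requests.RequestException as exc:
        print(f"ERROR {old}: {exc}")
        continue
    raw = resp.headers.get("Location", "")
    location = urljoin(old, raw) if raw else ""   # resolve relative Location headers
    if resp.status_code == 301 and location.rstrip("/") == target.rstrip("/"):
        print(f"OK   {old} -> {location}")
    else:
        print(f"FIX  {old}: got {resp.status_code} {location or '(no Location header)'}")
```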
P.S. Running a quick site crawl and fixing the 404s should only take minutes, not hours!
-
Check GA (Google Analytics):
- Are the 404'd pages receiving search traffic?
- Are the 404'd pages hurting your user experience? (Are they reachable via links on your site?)
If the answer to both is no, is this really worth a couple of hours of your time?
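If you do want a quick way to answer the first question, export a landing-page report from GA as a CSV and cross-check it against the 404'd paths. Here's a rough sketch of that check (Python standard library only; the file name and the "Landing Page" column header are assumptions about your export, so adjust them to match):
```python
# Sketch: see whether any of the 404'd paths show up in a GA landing-page
# export. The CSV file name and "Landing Page" column are assumptions about
# the export format - adjust them to whatever GA actually gives you.
import csv

broken_paths = {
    "/post-number-1/blogroll",
    "/post-number-2/blogroll",
    "/post-number-1/privacy-policy",
}
normalized = {p.rstrip("/") for p in broken_paths}

with open("ga_landing_pages.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        path = row.get("Landing Page", "").strip().rstrip("/")
        if path in normalized:
            print(f"{path} received traffic: {row}")
```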
-
Hi Andrea,
If the crawl is returning 404 errors, it means that, although you have removed the widget, those URLs are still being linked to from somewhere on your site.
My advice would be to use the Screaming Frog crawler, or another crawler if you have access to one. Once you have crawled the site, you should be able to see which pages are still linking to the 404 URLs, and that will give you a much better idea of how to fix the issue.
Remember, a crawler works through your entire site by following links, so if it finds 404s, they are being linked to internally.
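If you'd like to double-check outside of a desktop crawler, here's a rough sketch of the same idea in Python (assuming the requests and beautifulsoup4 packages are installed, and using mysite.com as a placeholder for your domain). It walks your internal links and reports which pages link to URLs that return a 404:
```python
# A minimal internal-link checker - a sketch of the idea, not a full crawler.
# Assumes: pip install requests beautifulsoup4, and that https://mysite.com/
# is replaced with the real domain.
from collections import deque
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

START = "https://mysite.com/"        # placeholder start page
DOMAIN = urlparse(START).netloc

status_cache = {}                    # URL -> HTTP status code
sources_of_404 = {}                  # broken URL -> pages that link to it
queue, crawled = deque([START]), set()

def status_of(url):
    """Fetch (and cache) the status code of an internal URL."""
    if url not in status_cache:
        try:
            status_cache[url] = requests.head(
                url, allow_redirects=True, timeout=10).status_code
        except requests.RequestException:
            status_cache[url] = None
    return status_cache[url]

while queue:
    page = queue.popleft()
    if page in crawled:
        continue
    crawled.add(page)
    try:
        resp = requests.get(page, timeout=10)
    except requests.RequestException:
        continue
    if "text/html" not in resp.headers.get("Content-Type", ""):
        continue                     # only parse HTML pages
    for a in BeautifulSoup(resp.text, "html.parser").find_all("a", href=True):
        link = urljoin(page, a["href"]).split("#")[0]
        if urlparse(link).netloc != DOMAIN:
            continue                 # internal links only
        if status_of(link) == 404:
            sources_of_404.setdefault(link, set()).add(page)
        elif link not in crawled:
            queue.append(link)

for url, pages in sorted(sources_of_404.items()):
    print(f"404: {url}  <- linked from: {', '.join(sorted(pages))}")
```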
Hope that helps,
Adam.
-
Hi Don,
Thanks for the quick help.
Yes, I'm running WordPress, with the Catalyst framework.
I was using the blogroll widget in the sidebar, but when I started seeing the crawl errors I removed it just in case. The crawl is now complete, and even more errors of the same type have shown up.
-
Hi Andrea
I'm not sure about the issue, but it may help others if you mention what type of software you're running.
I would assume WordPress since you said "widget", but it could also be Joomla or another CMS.
Good Luck,
Don