We have not detected anything wrong with our site from a user perspective, which is what is so frustrating. Thanks for your time and response!
Posts made by Eric_R
-
RE: Mysterious average page load time spikes since major redesign
-
RE: Mysterious average page load time spikes since major redesign
Thank you, Vijay. I am having our developer take a look at all of our scripts.
-
Mysterious average page load time spikes since major redesign
Hello Moz Community,
About six months ago, we completely redesigned our heavily trafficked website. We updated the navigation, made the site responsive, and refreshed all of the site's content. We were hoping to get a rankings boost from all the hard work we put in, but sadly our traffic began to decline steadily. We also noticed that although overall page load speeds were comparable before and after the redesign, when we compared them on an hourly basis we saw random hourly spikes in average page load speed post-redesign.
Here is a screenshot of our analytics comparing our hourly average page load speeds pre- vs. post-redesign: https://screencast.com/t/8WQeyhquHN (after is in blue, before in orange)
We have spent around three months trying to figure out the underlying cause of the new load time spikes. My question is: has anyone seen anything like this before? Does anyone have any suggestions as to what might be causing the spikes? As far as we can tell, the spikes are indeed random and are not correlated with any particular time of day, our traffic levels, or anything else we are doing. Any help would be greatly appreciated!
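In case it helps anyone who wants to dig into the raw numbers with us, here is a rough sketch of how per-page load times could be captured in the browser and logged for hourly analysis. The /perf-log endpoint is just a placeholder, not something we actually have:

```ts
// Rough sketch: capture the full page load time in the browser and beacon it,
// with a timestamp, to a logging endpoint so spikes can be bucketed by hour.
// "/perf-log" is a placeholder endpoint name.
window.addEventListener("load", () => {
  // Wait a tick so loadEventEnd has been populated
  setTimeout(() => {
    const [nav] = performance.getEntriesByType("navigation") as PerformanceNavigationTiming[];
    if (!nav) return;

    const payload = {
      url: location.pathname,
      loadMs: Math.round(nav.loadEventEnd - nav.startTime), // total time to the load event
      ts: new Date().toISOString(),
      ua: navigator.userAgent,
    };

    // sendBeacon is fire-and-forget and does not block the page
    navigator.sendBeacon("/perf-log", JSON.stringify(payload));
  }, 0);
});
```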
Thanks,
Eric
-
Soft 404 error for a big, longstanding 301-redirected page
Hi everyone,
Years ago, we acquired a website that had essentially two prominent homepages: one was like example.com and the other like example.com/htm... They served basically the same purpose, were both very powerful (PR7), and often had double listings for important search phrases in Google. Both pages had amassed a considerable number of powerful links.
About 4 years ago, we decided to 301 redirect the example.com/htm page to our homepage to clean up the user experience on our site and also, we hoped, to make one even stronger page in serps, rather than two less strong pages.
Suddenly, in the past couple of weeks, this 301-redirected example.com/htm page started appearing in our Google Search Console as a soft 404 error. We've never had a soft 404 error before now. I tried marking it as resolved to see whether the error would return or whether it was just some kind of temporary blip. The error did return.
So my questions are:
1. Why would this be happening after all this time?
2. Is this soft 404 error a signal from Google that we are no longer getting any benefit from link juice funneled to our existing homepage through the example.com/htm 301 redirect?
The example.com/htm page still has considerable (albeit old) links pointing to it across the web. We're trying to make sense of this soft 404 observation and any insight would be greatly appreciated.
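For anyone who wants to reproduce the check, here is a rough sketch (Node 18+, using the built-in fetch; https://example.com/htm stands in for the real URL) of how to confirm what the old URL actually returns to a crawler:

```ts
// Rough sketch: see exactly what the old URL answers with, without following the redirect.
// "https://example.com/htm" stands in for the real URL from the post.
async function checkRedirect(url: string): Promise<void> {
  const res = await fetch(url, { redirect: "manual" }); // don't follow the redirect
  console.log("status:", res.status);                    // expect 301 if the redirect is intact
  console.log("location:", res.headers.get("location")); // expect the homepage URL
}

checkRedirect("https://example.com/htm").catch(console.error);
```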
Thanks!
Eric
-
RE: Facebook Likes count and site migration to HTTPS
Thanks for your help Peter!
-
Facebook Likes count and site migration to HTTPS
Hi everyone,
We are currently migrating our website to HTTPS and, in the process, have read lots of different things about what happens to the social share counts afterward. My primary question is about the Facebook "Likes" count that is displayed on a website's Facebook page - does anything happen to that data when a website switches to HTTPS?
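In case it makes the question more concrete, here is a rough sketch of how one could compare the counts reported for the http and https versions of a URL via the Graph API. This assumes the Graph API URL node still exposes an engagement field, and the access token and URLs below are placeholders:

```ts
// Hedged sketch (Node 18+): compare Graph API engagement numbers for the
// http and https versions of a URL. Assumes the URL node still exposes an
// "engagement" field; ACCESS_TOKEN and the URLs are placeholders.
const ACCESS_TOKEN = "YOUR_ACCESS_TOKEN";

async function engagementFor(url: string): Promise<void> {
  const api =
    `https://graph.facebook.com/?id=${encodeURIComponent(url)}` +
    `&fields=engagement&access_token=${ACCESS_TOKEN}`;
  const res = await fetch(api);
  const data = await res.json();
  console.log(url, data.engagement); // reaction/comment/share counts, if returned
}

(async () => {
  await engagementFor("http://example.com/");
  await engagementFor("https://example.com/");
})();
```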
My hope is that this data would not suffer at all, but I wanted to pose the question because I can't seem to find information about it. Thank you in advance for your help!
Eric
-
Tips on finding the right Senior Designer / Design Director
Hello Everyone,
I manage a fairly large educational website that we are looking to completely redesign to improve the overall user experience and usability. In the past, I, a non-designer business person, would just roughly sketch how I thought the site should look, and our developer (who is good, but not a web designer) would try his best to make everything look professional. Over the years, it has become painfully obvious that we need to invest in more design expertise and move toward a modern, smartly designed website.
So my question: Where are the best places to find good freelance designers?
I have, of course, conducted web searches, browsed Elance, and asked my network for referrals. However, I am finding that most of the really good ones, the ones who have the ability to take charge and lead us through this entire process and who have at least a basic understanding of SEO principles, work for larger integrated development shops that also expect their people to develop the new site. We already have a developer and are primarily looking for design expertise. Does anyone in the Moz community have any suggestions or even referrals?
Thanks!
Eric
-
Interlinking vs. 'orphaning' mobile page versions in a dynamic serving scenario
Hi there,
I'd love to get the Moz community's take on this.
We are working on setting up dynamic serving for the mobile versions of our pages. While planning the mobile version of a page, we identified a type of navigational link that, while useful enough for desktop visitors, we feel would not be as useful to mobile visitors. We would like to remove these links from the mobile version of the page as part of offering a more streamlined mobile experience, so we feel we're making a sound decision with user experience in mind. On any single page, the number of links removed in the mobile version would be relatively small.
The question is: is there any danger in “orphaning” the mobile versions of certain pages because no links point to them from our other mobile pages? Is this a legitimate concern, or is it enough that none of the desktop versions of the pages are orphaned? We were not sure whether it's even possible, in Googlebot's eyes, to orphan a mobile version of a page if we use dynamic serving and there are no orphaned desktop versions of our pages. (We also plan to link to the "full site" in the footer.)
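To make sure I'm describing the setup clearly, here is a rough sketch of the kind of dynamic serving we have in mind. Express, the route, and the render functions are just stand-ins for our real stack:

```ts
// Rough sketch of dynamic serving: the same URL returns different HTML based on
// the User-Agent, and the Vary header tells crawlers and caches about it.
import express from "express";

const app = express();

function renderDesktop(): string {
  return "<html><!-- full navigation, including the extra links --></html>";
}

function renderMobile(): string {
  return "<html><!-- streamlined navigation with some links removed --></html>";
}

app.get("/example-page", (req, res) => {
  res.set("Vary", "User-Agent"); // signal UA-dependent content to Googlebot and caches
  const ua = req.get("User-Agent") ?? "";
  const isMobile = /Mobi|Android/i.test(ua); // simplistic UA check, for illustration only
  res.send(isMobile ? renderMobile() : renderDesktop());
});

app.listen(3000);
```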
Thank you in advance for your help,
Eric
-
RE: Removing Content 301 vs 410 question
Hey there mememax - thank you for the reply! Reading your post and thinking back on our methodology, yes, in hindsight I think we were a bit too afraid of generating errors when we removed content; we should have considered the underlying meaning of the different statuses more carefully. I appreciate your advice.
Eric
-
RE: Removing Content 301 vs 410 question
Hello Dr. Pete – thank you for the great info and advice!
I do have one follow-up question, if that's OK. As we move forward cutting undesirable content and generating 4xx statuses for those pages, is there a difference in impact or effectiveness between a 403 and a 404? We use a CMS, and unpublishing a page creates a 403 “Access denied” message, while deleting a page generates a 404. I would love to hear your opinion about any practical differences from a Googlebot standpoint… does a 404 carry more weight when it comes to content removal, or are they the same to Googlebot? If there's a difference and the 404 is better, we'll go the 404 route moving forward.
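To make the question concrete, here is a rough sketch of what forcing a 404 for unpublished pages might look like. Express is only a stand-in for whatever sits in front of the CMS, and isUnpublished is a hypothetical lookup:

```ts
// Rough sketch: return 404 instead of the CMS's default 403 "Access denied"
// for pages that have been unpublished.
import express from "express";

const app = express();

const unpublished = new Set(["/old-page", "/retired-article"]); // placeholder paths

// Hypothetical check against the CMS for unpublished content
function isUnpublished(path: string): boolean {
  return unpublished.has(path);
}

app.use((req, res, next) => {
  if (isUnpublished(req.path)) {
    res.status(404).send("Not Found"); // explicit 404 rather than the default 403
    return;
  }
  next();
});

app.listen(3000);
```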
Thanks again for all your help,
Eric
-
RE: Panda Recovery - What is the best way to shrink your index and make Google aware?
Hi Dr. Pete,
I know this is a late entry into this thread, but what if we did all our content cutting the wrong way over the past year? Is there something we could or should do now to correct for it? Our site was hit by Panda back in March 2012, and since then we've cut content several times. But we didn't use the good process you advocate; here's what we did when we cut pages:
1. We set up permanent 301 redirects for all of them immediately
2. Simultaneously, we always removed all links pointing to the cut pages (we wanted to make sure users didn't get redirected all the time).
This is a far cry from what you recommend and from what Kerry22 did to recover successfully. If you have any advice on the following questions, I'd definitely appreciate it:
- Is it possible Google still thinks we have this content on our site or intend to bring it back, and as a result we continue to suffer?
- If that is a possibility, then what can we do now (if anything) to correct the damage we did?
We're thinking about removing all of those 301s now, letting all the cut content return 404s, and making a separate sitemap of the cut content to submit to Google. Do you think it's too late, or otherwise inadvisable, for us to do this kind of thing?
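For reference, here is a rough sketch of how we might generate that one-off sitemap of removed URLs. The URLs and file name are placeholders:

```ts
// Rough sketch: write a one-off XML sitemap of the removed URLs so Google
// recrawls them and sees the 404s directly.
import { writeFileSync } from "node:fs";

const removedUrls = [
  "https://example.com/cut-page-1",
  "https://example.com/cut-page-2",
];

const xml =
  `<?xml version="1.0" encoding="UTF-8"?>\n` +
  `<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n` +
  removedUrls.map((u) => `  <url><loc>${u}</loc></url>\n`).join("") +
  `</urlset>\n`;

writeFileSync("removed-content-sitemap.xml", xml);
```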
Thanks in advance,
Eric
-
Removing Content 301 vs 410 question
Hello,
I was hoping to get the SEOmoz community’s advice on how to remove content most effectively from a large website.
I just read a very thought-provoking thread in which Dr. Pete and Kerry22 answered a question about how to cut content in order to recover from Panda. (http://www.seomoz.org/q/panda-recovery-what-is-the-best-way-to-shrink-your-index-and-make-google-aware).
Kerry22 mentioned a process in which 410s would be fully visible to Googlebot so that it would easily recognize the removal of content. The conversation implied that it is not just important to remove the content, but also to give Google the ability to recrawl those URLs and confirm the content was removed (as opposed to just recrawling the site and not finding the content anywhere).
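If I'm understanding that process correctly, in code terms it would look something like this rough sketch (Express and the placeholder paths are just stand-ins for whatever a real stack would use):

```ts
// Rough sketch of the idea: removed URLs stay reachable to Googlebot but answer
// 410 Gone, so the removal is confirmed explicitly rather than inferred.
import express from "express";

const app = express();

const removedPaths = new Set(["/thin-page-1", "/thin-page-2"]); // placeholders

app.use((req, res, next) => {
  if (removedPaths.has(req.path)) {
    res.status(410).send("Gone"); // "removed on purpose", not just missing
    return;
  }
  next();
});

app.listen(3000);
```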
That process really made a lot of sense to me and also struck a personal chord… Our website was hit by a later Panda refresh back in March 2012, and ever since then we have been aggressive about cutting content and doing what we can to improve the user experience.
When we cut pages, though, we used a different approach, doing all of the below steps:
1. We cut the pages
2. We set up permanent 301 redirects for all of them immediately.
3. And at the same time, we would always remove from our site all links pointing to these pages (to make sure users didn't stumble upon the removed pages).
When we cut the content pages, we would either delete them or unpublish them, causing them to return a 404 or 403, but this is probably a moot point since we gave them 301 redirects every time anyway. We thought we could signal to Google that we removed the content while avoiding generating lots of errors that way…
I see that this is basically the exact opposite of Dr. Pete's advice and the opposite of what Kerry22 did to recover, and meanwhile here we are, still trying to help our site recover. We've been feeling that our site should no longer be under the shadow of Panda.
So here is what I'm wondering, and I'd be very appreciative of advice or answers for the following questions:
1. Is it possible that Google still thinks we have this content on our site, and we continue to suffer from Panda because of this?
Could there be a residual taint caused by the way we removed it, or is it all water under the bridge at this point because Google would have figured out we removed it (albeit not in a preferred way)?
2. If there’s a possibility our former cutting process has caused lasting issues and affected how Google sees us, what can we do now (if anything) to correct the damage we did?
Thank you in advance for your help,
Eric
-
What is the best way to hide duplicate, image embedded links from search engines?
Hello!
Hoping to get the community’s advice on a technical SEO challenge we are currently facing. [My apologies in advance for the long-ish post. I tried my best to condense the issue, but it is complicated and I wanted to make sure I also provided enough detail.]
Context: I manage a human anatomy educational website that helps students learn about the various parts of the human body. We have been around for a while now, and we recently launched a completely new version of our site using 3D CAD images. While we tried our best to design the new site with SEO best practices in mind, our daily visitors dropped by roughly 15% soon after we flipped the switch, despite drastic improvements in our user interaction metrics.
SEOmoz's Website Crawler helped us uncover that we may now have too many links on our pages and that this could be at least part of the reason behind the lower traffic; in other words, we are not making optimal use of links and are potentially ‘leaking’ link juice.
Since students learn about human anatomy in different ways, most of our anatomy pages contain two sets of links:
- Clickable links embedded via JavaScript in our images. These allow users to explore parts of the body by clicking on whatever object interests them. For example, if you are viewing a page on the muscles of the arm and hand and want to zoom in on the biceps, you can click on the biceps and go to our detailed biceps page.
- Anatomy Terms lists (to the left of the image) that list all the different parts of the body shown in the image. These are for users who might not know where on the arm the biceps actually are; such a user could simply click on the term “Biceps” and get to our biceps page that way.
Since many sections of the body have hundreds of smaller parts, this means many of our pages have 150 links or more each. And to make matters worse, in most cases, the links in the images and in the terms lists go to the exact same page.
My question: Is there any way we could hide one set of links (preferably the anchor-text-less, image-based links) from search engines, so that only one set of links is visible to them? I have read conflicting accounts of different methods, from using JavaScript to embedding links in HTML5 tags. And we definitely do not want to do anything that could be considered black hat.
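One approach I have seen discussed is wiring the image hotspots up as plain JavaScript click handlers instead of anchor tags, so only the terms list exposes crawlable links. Here is a rough sketch; the .hotspot selector and data-target attribute are placeholder names, and Google may still execute the JavaScript, so this is not a guaranteed way to hide the links:

```ts
// Rough sketch: image hotspots navigate via a click handler instead of an
// <a href>, so only the Anatomy Terms list exposes crawlable anchor links.
document.querySelectorAll<HTMLElement>(".hotspot").forEach((el) => {
  el.addEventListener("click", () => {
    const target = el.dataset.target; // e.g. "/biceps"
    if (target) {
      window.location.href = target;
    }
  });
});
```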
Thanks in advance for your thoughts!
Eric
-