Questions created by ufmedia
Log files vs. GWT: major discrepancy in number of pages crawled
Following up on this post, I did a pretty deep dive on our log files using Web Log Explorer. Several things have come to light, but one of the issues I've spotted is the vast difference between the number of pages crawled by Googlebot according to our log files versus the number of pages indexed in GWT. Consider:

Number of pages crawled per the log files: 2,993
Crawl frequency (i.e. number of times those pages were crawled): 61,438
Number of pages indexed by GWT: 17,182,818 (yes, that's right - more than 17 million pages)

We have a bunch of XML sitemaps (around 350) linked from the main sitemap.xml page; those sitemaps have been crawled fairly frequently, and I think this is where a lot of links have been indexed. Even so, would that explain why we have relatively few pages crawled according to the logs but so many more indexed by Google?
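For anyone wanting to reproduce those log-file numbers outside Web Log Explorer, here is a minimal Python sketch that tallies unique URLs and total hits claimed by Googlebot in a combined-format access log. The log path is a placeholder, and matching on the user-agent string alone is a simplification; genuine verification would add a reverse-DNS check.

```python
import re
from collections import Counter

# Assumed path to a combined-format access log; adjust for your setup.
LOG_PATH = "access.log"

# Combined log format: the request is the first quoted field, the user agent is the last.
LINE_RE = re.compile(r'"(?P<method>[A-Z]+) (?P<url>\S+) HTTP/[^"]*".*"(?P<agent>[^"]*)"$')

crawled = Counter()

with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
    for line in log:
        match = LINE_RE.search(line)
        if not match:
            continue
        # Naive user-agent match; real verification should reverse-DNS the client IP.
        if "Googlebot" in match.group("agent"):
            crawled[match.group("url")] += 1

print(f"Unique URLs crawled: {len(crawled)}")            # compare with the 2,993 figure above
print(f"Total Googlebot requests: {sum(crawled.values())}")  # compare with the 61,438 figure
```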
Technical SEO | ufmedia
Recommended log file analysis software for OS X?
Due to some questions over direct traffic and Googlebot behavior, I want to do some log file analysis. The catch is that this is a Mac shop, so all our systems are on OS X. I have Windows 8 running in an emulator, but for the sake of simplicity I'd rather run all my software in OS X. This post by Tim Resnik recommended Web Log Explorer, but it's Windows-only. I did discover Sawmill, which claims to run on any platform. Any other suggestions? Bear in mind that our site is load-balanced across three servers, so whatever we use needs to handle logs from multiple machines.
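Absent a polished OS X tool, one hedged workaround is a short Python script run directly against the exported logs; the sketch below merges the access logs from all three load-balanced servers and counts only hits that verify as Googlebot via reverse DNS. The glob pattern and one-log-per-server layout are assumptions about how the files are pulled down.

```python
import glob
import socket

# Assumed naming convention for logs copied down from the three load-balanced servers.
LOG_GLOB = "logs/server*/access.log"

def is_real_googlebot(ip):
    """Verify a claimed Googlebot hit via reverse DNS, then forward-confirm the hostname."""
    try:
        host = socket.gethostbyaddr(ip)[0]
        if not (host.endswith(".googlebot.com") or host.endswith(".google.com")):
            return False
        return ip in socket.gethostbyname_ex(host)[2]
    except (socket.herror, socket.gaierror):
        return False

hits = 0
checked = {}  # cache results per IP so we don't hammer DNS

for path in glob.glob(LOG_GLOB):
    with open(path, encoding="utf-8", errors="replace") as log:
        for line in log:
            if "Googlebot" not in line:
                continue
            ip = line.split(" ", 1)[0]  # combined log format starts with the client IP
            if ip not in checked:
                checked[ip] = is_real_googlebot(ip)
            if checked[ip]:
                hits += 1

print(f"Verified Googlebot requests across all servers: {hits}")
```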
Technical SEO | ufmedia
2.3 million 404s in GWT - learn to live with 'em?
So I’m working on optimizing a directory site. Total size: 12.5 million pages in the XML sitemap. This is orders of magnitude larger than any site I’ve ever worked on – heck, every other site I’ve ever worked on combined would be a rounding error compared to this.

Before I was hired, the company brought in an outside consultant to iron out some of the technical issues on the site. To his credit, he was worth the money: indexation and organic Google traffic have steadily increased over the last six months. However, some issues remain.

The company has access to a quality (i.e. paid) source of data for directory listing pages, but the last time the data was refreshed some months back, it threw 1.8 million 404s in GWT. That number has since grown progressively higher; now we have 2.3 million 404s in GWT. Based on what I’ve been able to determine, links on this site tied to the data feed generally break for one of two reasons: the page just doesn’t exist anymore (i.e. it wasn’t found in the data refresh, so it was simply deleted), or the URL had to change due to some technical issue (the page still exists, just under a different link).

With other sites I’ve worked on, 404s aren’t that big a deal: set up a 301 redirect in htaccess and problem solved. In this instance, setting up that many 301 redirects, even if it could somehow be automated, just isn’t an option due to the potential bloat in the htaccess file. Based on what I’ve read here and here, 404s in and of themselves don’t really hurt the site’s indexation or ranking. And the more I consider it, the really big sites – the Amazons and eBays of the world – have to contend with broken links all the time due to product pages coming and going.

Bottom line, it looks like if we really want to refresh the data on the site on a regular basis – and I believe that is priority one if we want the bot to come back more frequently – we’ll just have to put up with broken links on the site on a more regular basis. So here’s where my thought process is leading:

Go ahead and refresh the data. Make sure the XML sitemaps are refreshed as well – hopefully this will help the site stay current in the index.
Keep an eye on broken links in GWT. Implement 301s for really important pages (i.e. content-rich stuff that is really mission-critical). Otherwise, just learn to live with a certain number of 404s being reported in GWT on more or less an ongoing basis.
Watch the overall trend of 404s in GWT. At least make sure they don’t increase. Hopefully, if we can make sure that the sitemap is updated when we refresh the data, the 404s reported will decrease over time.

We do have an issue with the site creating some weird pages with content that lives within tabs on specific pages. Once we can clamp down on those and a few other technical issues, I think keeping the data refreshed should help with our indexation and crawl rates.

Thoughts? If you think I’m off base, please set me straight. 🙂
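On the htaccess-bloat point above, one middle path between millions of redirect lines and doing nothing is to keep the old-to-new URL pairs in a single lookup file that the server (for example via Apache's RewriteMap) or the application can consult. The sketch below is a rough Python pass at building such a map from the data refresh; the CSV column names and file paths are assumptions.

```python
import csv

# Assumed export from the data refresh: old_url,new_url - only rows where the
# page still exists under a different URL need a redirect at all.
INPUT_CSV = "url_changes.csv"
MAP_FILE = "redirects.map"   # flat "old-path new-path" pairs, one per line

pairs = 0
with open(INPUT_CSV, newline="", encoding="utf-8") as src, \
        open(MAP_FILE, "w", encoding="utf-8") as dst:
    for row in csv.DictReader(src):
        old, new = row["old_url"].strip(), row["new_url"].strip()
        if not old or not new or old == new:
            continue  # deleted pages can stay 404 (or 410) rather than redirect
        dst.write(f"{old} {new}\n")
        pairs += 1

print(f"Wrote {pairs} redirect pairs to {MAP_FILE}")
```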
Intermediate & Advanced SEO | ufmedia
Behavior Flow vs. All Pages report in Google Analytics
In the interest of determining why our ecommerce site isn't converting, I've been spending some quality time with GA. I've suspected that our front page is part of the problem, especially where our organic traffic is concerned (we get a good deal of referral traffic from a link on an OEM's site). According to the Behavior Flow report under the Behavior section of GA, organic traffic to our home page is hemorrhaging (roughly 60% bounce rate). But when I went to the All Pages report (Behavior > Site Content > All Pages) and looked at organic traffic to our home page, then looked at the Medium as a secondary dimension, I'm getting a bounce rate of 35%. Why the massive discrepancy? Can somebody assist?
Reporting & Analytics | ufmedia
Authorship and Publisher on WordPress
I successfully enabled rel=publisher on our WordPress blog, and as a test I also enabled rel=authorship for a set of blog posts. (Tested both in Google's Rich Snippets Tester.) However, on the individual blog posts the publisher credit disappears. Is there a way to enable both to appear on blog posts?
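While digging through the theme templates, a throwaway check like the Python sketch below can at least confirm whether a given post is emitting both link relations. The post URL is a placeholder, and the simple tag patterns are assumptions about how the theme writes out rel=author and rel=publisher.

```python
import re
import urllib.request

# Placeholder URL - swap in a real blog post.
POST_URL = "http://blog.example.com/sample-post/"

with urllib.request.urlopen(POST_URL) as response:
    html = response.read().decode("utf-8", errors="replace")

# Look for <link> or <a> tags carrying rel="author" or rel="publisher".
for rel in ("author", "publisher"):
    pattern = rf'<(?:link|a)\b[^>]*rel=["\']{rel}["\'][^>]*>'
    tags = re.findall(pattern, html, flags=re.IGNORECASE)
    status = "present" if tags else "MISSING"
    print(f'rel="{rel}": {status} ({len(tags)} tag(s) found)')
```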
Technical SEO | ufmedia
Old location in Google Places/Google Local
So I started at my company around six months ago. I claimed the company's Google+ page, and have been actively updating it. I also took the step of claiming the company's location in Google Places as well (yes, I know Google is shutting down Google Places in favor of Google+, but I figured it was a logical step to take anyway). Here's my dilemma: the company moved locations 2-3 years ago. That old location is still appearing in Google Places, and it also has its own Google+ page. Is there a recommended way to redirect that old location to our new office in Google Places and Google+?
Image & Video Optimization | ufmedia
How much will changing IP addresses impact SEO?
So my company is upgrading its Internet bandwidth. However, apparently the vendor has said that part of the upgrade will involve changing our IP address. I've found two links that indicate some care needs to be taken to make sure our SEO isn't harmed: http://followmattcutts.com/2011/07/21/protect-your-seo-when-changing-ip-address-and-server/ http://www.v7n.com/forums/google-forum/275513-changing-ip-affect-seo.html Assuming we don't use an IP address that has been blacklisted by Google for spamming or other black hat tactics, how problematic is it? (Note: The site hasn't really been aggressively optimized yet - I started with the company less than two weeks ago, and just barely got FTP and CMS access yesterday - so honestly I'm not too worried about really messing up the site's optimization, since there isn't a lot to really break.)
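On the practical side of the cutover, a small Python loop like the sketch below can confirm when the domain has actually started resolving to the new address from your vantage point. The hostname and the expected IP are placeholders, and the local resolver's cache means results can lag behind authoritative DNS.

```python
import socket
import time

# Placeholders - substitute the real hostname and the new IP the vendor assigns.
HOSTNAME = "www.example.com"
EXPECTED_NEW_IP = "203.0.113.10"

while True:
    try:
        resolved = socket.gethostbyname(HOSTNAME)
    except socket.gaierror as err:
        print(f"Lookup failed: {err}")
    else:
        print(f"{HOSTNAME} currently resolves to {resolved}")
        if resolved == EXPECTED_NEW_IP:
            print("New IP is live from this resolver - safe to start verifying the site.")
            break
    time.sleep(300)  # check every five minutes while DNS propagates
```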
Technical SEO | ufmedia
Schema.org microformatting - itemprop within href tag?
I'm trying to implement microformatting on the site, specifically for the cities where we are active. I'm hoping this will help us rank in local search. This is what I have been doing:

<span itemprop="addressLocality">City Name</span>

In Google's Rich Snippets Testing Tool, that yields this:

addresslocality = City Name

However, I've also done this:

<a href="http://www.domain.com/webpage" itemprop="addressLocality">City Name</a>

In Google's tool, that gave me this:

addresslocality text = City Name
href = http://www.domain.com/webpage

So which is better?
Technical SEO | ufmedia
How to rewrite WordPress permalinks for reverse proxy?
Our main site, www.domain.com, is on an IIS 6 server. When we started our blog, we wanted to put it in a subdirectory (domain.com/blog), but we couldn't because our IT people refused to support it. Instead, we built it on a third-party Apache server and configured it to open under blog.domain.com. However, I came across this SEOmoz post about the glories of reverse proxies, so I've persuaded our IT people to take a swing at it. We got it to work on a staging server, but the permalinks won't change (they still appear as blog.domain.com/slug). The IT guys say it's due to a configuration problem with WordPress. Can somebody point me in the right direction on sorting out the URL issues?
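The permalink fix itself will live in the WordPress and proxy configuration, but while the IT guys dig into that, a rough Python sketch like the one below can fetch a page through the staging proxy and list any links still carrying the old blog.domain.com host. The staging URL and hostnames are placeholders.

```python
import re
import urllib.request

# Placeholders for the proxied staging URL and the hostname that should disappear.
STAGING_URL = "http://staging.domain.com/blog/"
OLD_HOST = "blog.domain.com"

with urllib.request.urlopen(STAGING_URL) as response:
    html = response.read().decode("utf-8", errors="replace")

# Pull every href and flag the ones still carrying the old blog hostname.
stale = [href for href in re.findall(r'href=["\']([^"\']+)["\']', html)
         if OLD_HOST in href]

if stale:
    print(f"{len(stale)} link(s) still point at {OLD_HOST}:")
    for href in sorted(set(stale)):
        print(f"  {href}")
else:
    print(f"No links to {OLD_HOST} found on {STAGING_URL}")
```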
Technical SEO | ufmedia
Multiple URLs in CMS - duplicate content issue?
So about a month ago, we finally ported our site over to a content management system called Umbraco. Overall, it's okay, and certainly better than what we had before (i.e. nothing - just static pages). However, I did discover a problem with the URL management within the system. We had a number of pages that existed as follows:

sparkenergy.com/state/name

However, they now exist within certain folders, like so:

sparkenergy.com/about-us/service-map/name

So we set up an aliasing system whereby you could call the URL basically whatever you want, which allowed us to retain the old URL structure. However, we have found that the alias does not override the new URL but simply adds another way of reaching a page. That means the same pages can open under at least two different URLs, such as http://www.sparkenergy.com/state/texas and http://www.sparkenergy.com/about-us/service-map/texas. I've tried pointing to the aliased URL from other parts of the site with the rel canonical tag, without success. How much of a problem is this with respect to duplicate content? Should we bite the bullet, remove the aliased URLs and do 301s to the new folder structure?
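Before deciding, it may help to see, state by state, what each of the two URL variants actually returns and which canonical it declares. The Python sketch below is a rough way to spot-check that; the two example state slugs and the canonical-tag pattern it looks for are assumptions.

```python
import re
import urllib.request

# A couple of example state slugs - the real list would come from the CMS.
STATES = ["texas", "new-york"]

def fetch(url):
    """Return (HTTP status, declared canonical URL or None) for a page."""
    with urllib.request.urlopen(url) as response:
        html = response.read().decode("utf-8", errors="replace")
        match = re.search(r'<link[^>]*rel=["\']canonical["\'][^>]*href=["\']([^"\']+)["\']',
                          html, flags=re.IGNORECASE)
        return response.status, match.group(1) if match else None

for state in STATES:
    for url in (f"http://www.sparkenergy.com/state/{state}",
                f"http://www.sparkenergy.com/about-us/service-map/{state}"):
        status, canonical = fetch(url)
        print(f"{url} -> HTTP {status}, canonical: {canonical or 'none declared'}")
```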
Technical SEO | ufmedia
Front page dropped to PR1 - thoughts?
The front page of our site dropped in late March from PR4 to PR1. Yes, I know toolbar PR isn't terribly reliable, isn't much of an indicator of overall SEO, etc. - however, upper management will want to know what happened and what is being done to fix it. Of course, the answer is obvious: go build links. But what might the cause be? As I mentioned in a past Q&A, the site is entirely encrypted and as a result may be causing us to leak some juice (http backlinks of course make up the vast majority of our links). We're planning to fix this once the site is ported over to a CMS, but that's still months off. Other than that, what might be the problem? Any ideas?
Technical SEO | ufmedia
Will using https across our entire site hurt our external backlinks?
Our site is secured throughout, so it loads sitewide as https. It is canonicalized properly - any attempt to load an existing page as http will force to https. My concern is with backlinks. We've put a lot of effort into social media, so we're getting some nice blog linkage. The problem is that the links are generally to http rather than https (understandable, since that's the default for most web users). The site still loads with no problem, but my concern is that since a redirect doesn't transfer all the link juice across, we're leaking some perfectly good link credit. From the standpoint of backlinkage, are we harming ourselves by making the whole site secure by default? The site presently isn't very big, but I'm looking at adding hundreds of new pages to the site, so if we're going to make the change, now is the time to do so. Let me know what you think!
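One thing worth spot-checking before hundreds of new pages go live is exactly how the http versions answer - ideally a single 301 straight to the https equivalent rather than a 302 or a redirect chain. Here is a minimal Python sketch for that check; the hostname and sample paths are placeholders.

```python
import http.client

# Placeholders - swap in the real hostname and a few paths from the backlink reports.
HOST = "www.example.com"
PATHS = ["/", "/blog/some-popular-post/"]

for path in PATHS:
    conn = http.client.HTTPConnection(HOST, timeout=10)
    conn.request("GET", path)  # plain http on purpose, and don't follow the redirect
    response = conn.getresponse()
    location = response.getheader("Location", "(no Location header)")
    verdict = "single permanent hop" if response.status == 301 else "worth a closer look"
    print(f"http://{HOST}{path} -> HTTP {response.status}, Location: {location} ({verdict})")
    conn.close()
```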
Technical SEO | ufmedia