Questions created by BradsDeals
Conflicting average position data from Google Search Console?
I'm looking at Google Search Console data in Google Analytics, specifically the Average Position metric as given in the Landing Pages report, and the same metric broken out by mobile and desktop in the Devices report. In the Landing Pages report, I see an aggregated average position that's much higher (i.e., worse) than an actual average of what is reported for mobile, desktop, and tablet traffic under the Devices report. For example:

- Mobile: 5
- Desktop: 5
- Tablet: 5

So the average should still be roughly 5, right? Why would the Landing Pages report then show an aggregate Average Position of 8? I wouldn't expect to see precisely the same average, given that the device types have different traffic proportions that could shift the result when the buckets are combined, but this is a huge swing. In fact, the aggregate Average Position given in the top-level Devices report is closer to 5 than to the 8 shown in the Landing Pages report. (These aren't actual numbers, by the way, but they're illustrative of what I'm seeing.)

Unless I'm missing some vital difference in the way Average Position is calculated for the Landing Pages report versus the Devices report, it doesn't seem like this should be possible. What am I missing?
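One possible mechanism worth noting: Google documents Average Position as being averaged over impressions, so any aggregate is impression-weighted rather than a simple mean of segment averages, and a rollup can include rows (say, a pool of low-impression, long-tail queries) that a quick per-device comparison never surfaces. Here's a minimal Python sketch with made-up numbers showing how that could produce exactly this kind of swing, if that's what's going on:

```python
# Hypothetical slices of one landing page's search data: (avg position, impressions).
# The first three match the per-device numbers from the question; "long_tail"
# stands for rows the page-level rollup counts but a per-device eyeball
# comparison might not surface. All numbers are illustrative, not real data.
slices = {
    "mobile":    (5.0, 10_000),
    "desktop":   (5.0,  9_500),
    "tablet":    (5.0,     500),
    "long_tail": (30.0,  2_500),
}

# Naive reading: just average the three device averages.
device_mean = sum(slices[d][0] for d in ("mobile", "desktop", "tablet")) / 3

# What an impression-weighted rollup over all rows actually produces.
total_impressions = sum(imps for _, imps in slices.values())
weighted_mean = sum(pos * imps for pos, imps in slices.values()) / total_impressions

print(f"simple mean of device averages: {device_mean:.1f}")   # 5.0
print(f"impression-weighted aggregate:  {weighted_mean:.1f}")  # ~7.8
```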
Reporting & Analytics | | BradsDeals0 -
Looking for a way to crawl and test validity of affiliate links at scale. Ideas?
Hey all, I'm on the hunt for a service that will crawl our affiliate links and let us know when they return an error. I need to know that the last URL in the redirect chain is returning a 200, across thousands of pages and links, on a continual basis. The hitch is that most crawlers, like Screaming Frog, will report all of our links as working because they only test the first step, and this really requires a cloud solution anyway. Anyone happen to know of something?

Edit for clarity's sake: I need something that checks entire redirect chains in bulk, isn't a Wordpress plugin, isn't a website where you plug in a URL and it cuts you off after the first 100 results, and can crawl the site and provide reporting on a continual basis.
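In case it helps anyone suggest (or build) something, here's the kind of check I mean as a minimal Python sketch using the `requests` library; the URLs are placeholders, and a real version would need scheduling, concurrency, and reporting on top:

```python
# Follow each affiliate link through its full redirect chain and report the
# status of the final destination -- the step single-hop crawlers miss.
import requests

# Placeholder tracking URLs; in practice these would come from a site crawl.
affiliate_links = [
    "https://example.com/go/partner-a",
    "https://example.com/go/partner-b",
]

for url in affiliate_links:
    try:
        # allow_redirects=True follows the whole chain; resp.history holds
        # each intermediate hop and resp.url is the final destination.
        resp = requests.get(url, allow_redirects=True, timeout=10)
        chain = " -> ".join(r.url for r in resp.history) or "no redirects"
        if resp.status_code == 200:
            print(f"OK     {url} -> {resp.url}")
        else:
            print(f"BROKEN {url}: final status {resp.status_code} (chain: {chain})")
    except requests.RequestException as exc:
        print(f"FAILED {url}: {exc}")
```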
Affiliate Marketing
Philosophical: Does Google know when a photo isn't what your metadata says it is? And could you be downgraded for that?
Not something I've ever heard discussed before, and probably still a bit too esoteric for the present day, but I've always been one to be guided by where I see Google headed rather than trying to game the system as it exists now. So think about it:

Most stock and public domain photos are used repeatedly throughout the internet. Google's reverse image search proves that Google can recognize when the same photo is used across dozens of sites. Many of those photos will have alt and/or title text that Google has also crawled; if not, it has the content of the page on which the photo appears to consider for context.

So if Google has a TON of clues about what a photo is likely to be about, and can in theory aggregate those clues about a single photo from the dozens of sites using it, how might Google treat a site that mislabels it, old-school "one of these things is not like the others" style? Would a single site hosting that photo be bolstered by the additional context that the known, repeated photo brings in, essentially from other sites? And if 10 sites about widgets are using the same widget photo, but an 11th uses an entirely new, never-before-published photo, would the 11th site then be rated better for bringing something new to the table? (I think this would almost certainly be true, which drives home the importance of creating your own graphics content.)

Anyway, like I said, it's all theoretical and philosophical, and probably not currently in play, especially since an image can be used in so many different contexts. But it's New Year's and things are slow and my brain is running, so I'm curious what other folks think about this as the future of image optimization.
Image & Video Optimization | | BradsDeals1