Posts made by sweetfancymoses
-
RE: Local SEO Practice: Creating a Fictitious Business?
Duplicate post, sorry!
-
RE: Local SEO Practice: Creating a Fictitious Business?
Thanks for your input, Keri! I think we're going to go the pro bono route.
Starwood's a curious case, and I also wonder to what extent those pages exist for SEO reasons. Some of their test properties look innocuous, like sandboxes for translation plug-ins or new templates, but the Wikipedia citation on the 'List of New York Hotels' is really interesting to me. When I was working in-house for a New York hotel chain, I also picked up on weird affinities between that page and rankings in local search. Back in May I noticed that the Magic 7 for "NYC Hotels" exclusively listed hotels with Wikipedia pages, and decided to create one for one of the properties. Bingo! Within a few weeks, we were usually among the top local results for "nyc luxury hotels", "nyc 4-star hotels", and "nyc 5-star hotels" (yeesh, Google, it's either one or the other...)
At the same time, hotels with Wikipedia pages started receiving snippets and thumbnails from their articles on the far right of the screen, adjacent to the map. This seemed to be exclusive to the hospitality vertical, and I didn't notice the same SERP formatting in other cities. You would also see thumbnails of related hotels (which as a rule, also had Wikipedia pages), and a list of features that it seemed to be cross-referencing from multiple sources, like:
Hotel Class: 4 Stars
Architect: Steven Jacobs
Style: Modernist
My hunch is that Starwood was testing two hypotheses with the Test Galaxy Sheraton:
- Does inclusion on Wikipedia's list of hotels in New York City influence rankings, either local or web?
- Does Google source a hotel's 'features' from lists and structured mark-ups on a hotel's own website?
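The second hypothesis, at least, is easy to set up on a hotel's own site: mark up the feature list with schema.org microdata and watch whether Google surfaces it. A minimal, purely hypothetical sketch (the hotel name and rating are invented; `starRating` is a real property of schema.org's Hotel type):

```html
<!-- Hypothetical example: a hotel marking up its own "features" list -->
<div itemscope itemtype="http://schema.org/Hotel">
  <h1 itemprop="name">Test Galaxy Sheraton</h1>
  <p itemprop="starRating" itemscope itemtype="http://schema.org/Rating">
    Hotel Class: <span itemprop="ratingValue">4</span> Stars
  </p>
</div>
```

If the features panel only shows up for hotels carrying markup like this, that would be decent evidence for the second hypothesis.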
In any case, Google did away with the Wikipedia-heavy SERPs a few months ago, but I suspect Wikipedia continues to inform local search. It makes sense from a business perspective that Google would want to use open-source data whenever possible: they could crunch its geocoordinates with their own algorithms without paying license fees, I would think, no?
-
RE: Local SEO Practice: Creating a Fictitious Business?
Think I needed this, Daniel. Thank you!
My conscience says that you're right. I've gone back and forth on this today, and whatever educational value there might be seems outweighed by the potential to confuse people and affect livelihoods.
There are things that I'd like to test that go beyond the standard on-page/off-page process of citations, schemata, and KML—things like EXIF data, geocaching, and Wikipedia mentions—but it doesn't seem fair to test any of them without having some skin in the game.
Ah, if only there were a sandbox for real company stuff...
-
Local SEO Practice: Creating a Fictitious Business?
Has anyone tried fabricating a fake brick-and-mortar store as an SEO experiment? Sort of along the lines of what Starwood is doing (check out this Wikipedia experiment with the Test Galaxy Sheraton), but with a legitimate physical address and all?
Was it useful? And are there any potential legal troubles that could arise from borrowing a vacant address?
I'm thinking this could be helpful for my intern to gain practical experience in local SEO without the politics of working for a client. But I wouldn't want to blight that address for future occupants if the experiment went horribly awry.
We could instead offer pro bono services to a small business with a limited web presence – that would be useful to her, and constructive. But I'd like to have a better understanding of what signals Google looks for when deciding whether to index a website in local search, and see whether it's possible to dupe those algos.
What are your thoughts, Mozzers?
-
RE: Link Building with PRweb press releases
For one client whose work is generating some interesting potential headlines (business technology for the financial sector), I'm starting to experiment with PressKing.com. It's a distribution wire that's roughly like DM'ing an unusually large, influential Google+ circle. It's too early to call it a success, but here's why I think it's important to use in tandem with any PR:
- Their database includes journalists and bloggers who've consented to receiving fresh news on an overwhelming list of topics. Client wants to publicize a new patent for mobile checking? Want to get your PR in front of 2000+ US-based columnists who've expressed interest in "ATMs", "Business Technology", or "Digital Imaging"? (It seems like a good idea, no?) You can do that with PressKing, and the contacts span from high-DA sites like NYTimes, Bloomberg, and WSJ to niche news sites and industry quarterlies.
- If you're already using OSE to identify where related sites receive links from, and you know your client's low-hanging fruit (i.e. sites that link to multiple competitors but not to your client's site), it could be an excellent idea to add contact email addresses from those sites to the distribution list. Would it be more effective to reach out directly to individual journalists with a personalized note? Probably – but if you don't have the time, I think there's a way to do this respectfully, and successfully.
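If it helps, that "low-hanging fruit" query is just a set operation once you've exported the link data. A rough sketch in Python (all domains invented):

```python
from collections import Counter

# Hypothetical link-intersect sketch: find domains that link to two or
# more competitors but not to the client -- the "low-hanging fruit."
client_links = {"blogA.com", "newsB.com"}
competitor_links = {
    "comp1": {"blogA.com", "tradeC.com", "paperD.com"},
    "comp2": {"tradeC.com", "paperD.com", "dirE.com"},
    "comp3": {"paperD.com", "newsB.com"},
}

# Count how many competitors each linking domain touches
counts = Counter(d for links in competitor_links.values() for d in links)

# Domains linking to 2+ competitors, minus anything already linking to us
prospects = {d for d, c in counts.items() if c >= 2} - client_links
print(sorted(prospects))  # domains worth adding to the outreach list
```

Swap in your OSE exports for the hypothetical sets and you have a defensible outreach list in a dozen lines.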
The technical considerations of which keywords to use, with what anchor text, pointing to which pages are, I think, old-world concerns that obscure the real issue: how to get links to your latest and greatest content while it's still fresh. "Fresh" is powerful bait for many journalists seeking word counts to fill an otherwise slow news day. I hope this helps!
-
RE: Duplicate content with same URL?
Are you specifying the URL rewrite rule at the page level, or in your .htaccess? I had a similar issue once on a WordPress Multisite install that was rewriting
example.com/site2 -> site2.com
And:
example.com/site3 -> site3.com
The issue wasn't "real" in that users' browsers were moving to the preferred URLs specified in the HTTP headers, but our crawl tests were a nightmare of non-existent files, much like yours. Rel="canonical" will help in that case to avoid penalties, but won't do any favors for PageRank or indexation. I believe our developers created some additional page-level rewrites to deal with the phantom pages created in the crawl, but alas, I'm not sure what the details were.
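For anyone hitting the same thing, the canonical declaration itself is a one-liner in the `<head>` of each duplicate page; hypothetically, for the multisite case above it would look like:

```html
<!-- On the example.com/site2 pages (hypothetical URLs),
     point crawlers at the preferred host -->
<link rel="canonical" href="http://site2.com/" />
```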
You might post in a new thread or reach out to Chris Abernethy directly; he's far savvier with PHP than I am.
-
RE: Duplicate content with same URL?
Modern search engines won't penalize you for this, but you may lose link juice if your content has multiple URLs and each is receiving links. Best practice is to set up a few simple mod_rewrite rules in your .htaccess for basic URL display issues (enforce a trailing slash, redirect to or away from www, etc.), as well as to declare your preferred URL in the HTML of each page using the handy rel="canonical" link element.
Here's a great tutorial on how to force lower-case URLs, written by a fellow Mozzer (props, Chris! It's how I learned...), and here are 10 other useful mod_rewrites to add to your repertoire.
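For reference, the first two rules look roughly like this in Apache's mod_rewrite (domain hypothetical; forcing lower-case needs a RewriteMap, which as I understand it has to live in the server config rather than .htaccess, so see the tutorial for that one):

```apache
# Hypothetical .htaccess sketch -- test on staging before deploying
RewriteEngine On

# Redirect non-www requests to www
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]

# Append a trailing slash to URLs that aren't real files
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_URI} !(.*)/$
RewriteRule ^(.*)$ /$1/ [R=301,L]
```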
-
What's a really good example of a linkbait-y Category / Subcategory hierarchy?
Rand makes a really great point in this 2009 post about the shape of crawl paths:
"#4. Craft navigation / category pages that are worthy of links. If you can make these pages worthy of links and attention, you drive PageRank and crawl priority further down your site's architecture into the content (and signal the engine that ALL your pages are important)."
Which makes sense, intuitively, because you'd like link juice to flow directly and undiluted to your money pages. "Here's all my Green Widgets, Roger: they're all right here. While you're at it, here's a related blog post—'5 Ridiculously Awesome Things Every Green Widget Buyer Should Know'—and, oh look! Would you like to see my Blue Widgets as well?"
In practice, though, the Home » Widgets » Green Widgets path doesn't sound all that alluring. Useful for UX, absolutely, but not for getting links. Anyone have some favorite examples of Category / Subcategory hierarchies that do well as link-bait?
Client is a marketing agency dealing in the technical arcana of databases and ad serving, so their money pages won't be as specific as a Green Widget or a Miami Hotel. Their site isn't huge, and the pages will be extensively interlinked, so the emphasis has more to do with link juice / page authority than indexation. But I'm wondering if it could be smart to replace a generic "Services" category with a KW-rich drop-down menu of "Marketing Solutions" (i.e. 'Increase Customer Retention') and link each landing page to a relevant charcuterie of services, white papers, webinars, case studies, etc., rather than keeping these pages in their respective silos—even as they link horizontally to related services?
-
RE: My Google Places get mixed with another similar business
I've had some testy exchanges with our Google AdWords rep over a similar situation involving two hotels with similar names bleeding into each other's mixed local/organic SERPs. Turns out there was a directory listing of one hotel at the other's address that was then syndicated and picked up by many other directory sites: the issue resolved itself within a few weeks of contacting the syndicate.
That said, I'm not sure your issue is totally technical. Do you own the rights to the name "Miami Printing" at that address? Do you have a physical storefront there? You've phrased your question in terms of keywords, but Google Places verifies and ranks listings mostly on consistency of business info (phone number, address, & website) across multiple sources including—but definitely not limited to—YP.com, Yelp, Bing, Facebook, Yahoo, etc. Also keep in mind that your "rank" relative to others' in local search is highly volatile and depends on a lot of IP-based factors.
It also sounds like you're talking about receiving reviews and photos intended for the other location. You're likely to continue to run into that problem so long as another business in your area shares your name: Google has no way to verify that its users are reviewing or posting photos of the correct location.
-
RE: Statistics, R, and You: Advice for a New Analyst?
Love a good wacky, "out there" response to keep my intuitions in check!
I get what you're saying. That's happening, but in-house SEO/SEM is a slow and measured process, subject to approvals that go far up the chain of command. I figure if I can at least present my recommendations in a more salient way, they'll be able to defend themselves whether or not I'm in the room.
We also, as with most hotels, deal with affiliate sites. The cost/benefit of involvement there has considerable room for interpretation, and while understanding it isn't exactly my responsibility, I'd like to. Basically looking to grow here.
-
Statistics, R, and You: Advice for a New Analyst?
Hey SEOMozers!
Two prongs to this question; I'll keep it succinct.
I've been working as an in-house SEO/SEM Analyst for about 5 months now. While I'm generally savvy at telling the story behind the traffic/conversion data and making forensic recommendations (I worked in SEO prior to this while in college), I'd like my reports to read less like these piddly Excel charts and percent-change statistics, and more like Nate Silver's FiveThirtyEight blog for the New York Times, or OkCupid's periodic dispatches on OkTrends: visual, statistically informed, and predictive, the kind of report that under other circumstances might plausibly generate backlinks.
Data analysts swear by R for statistical modeling, but is it useful for our Google Analytics data sets, holes and uncertainty and all? Is the steep learning curve worth the effort? Tutorials I've seen online assume a proficiency in programming or statistics that's beyond me, or they're written to support a textbook exercise. Any recommendations for a book, online course, or general resource with more of a niche focus?
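To make the "is it worth it?" question concrete for myself, I've noticed most of the early wins are simple models you could write in any language. Here's a toy sketch (in Python rather than R, with invented visit numbers) of the kind of trend line that turns a percent-change table into a forecast:

```python
def linear_trend(values):
    """Return (slope, intercept) of an ordinary least-squares line
    fit to values indexed 0..n-1."""
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical week of daily organic visits:
visits = [120, 132, 128, 140, 151, 149, 160]
slope, intercept = linear_trend(visits)
print(f"trend: {slope:+.1f} visits/day")
```

The statistics get deeper fast (confidence intervals, seasonality), which is where R's libraries earn their keep, but the underlying ideas start this small.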
And a general question about stats too, since it's related: what level would you prescribe if I really wanted to kick this up a notch? I studied the humanities in college, and while that helps with the numerical storytelling, I wonder if the practical arcana of Bayesian methods and abstract probability theorems have a place in web analytics. Do they? Are there options for us bushy-tailed young analysts to pick this up without resorting to B School?
Thanks in advance!