Posts made by MikeRoberts
-
RE: Schema - Has it worked for you?
I was thinking management might be more open to letting me experiment with Schema if I chose a product line on one of our sites that has decent but not exemplary sales & traffic to test the Product schema on. The problem is that since we haven't seen competitors do it yet, I don't have an adequate way to show them it would be a good move for overtaking anyone edging us out, and they also don't want me giving our competitors any ideas that would inadvertently help them out by copying us.
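In case it helps to picture it, the kind of markup I want to test is roughly this (product name, image, and price are all made up for illustration):

<div itemscope itemtype="http://schema.org/Product">
  <span itemprop="name">Acme Blue Widget</span>
  <img itemprop="image" src="/images/blue-widget.jpg" alt="Acme Blue Widget"/>
  <div itemprop="offers" itemscope itemtype="http://schema.org/Offer">
    <meta itemprop="priceCurrency" content="USD"/>
    $<span itemprop="price">49.99</span>
    <link itemprop="availability" href="http://schema.org/InStock"/> In stock
  </div>
</div>

Nothing exotic, just enough for Google to pull name, price, and availability into a rich snippet if it chooses to.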
-
RE: Schema - Has it worked for you?
We had considered Rel=Author for our blogs, but our regular blogger also does freelance work for some sites that we might not want tied to ours (nothing bad, just not things that mesh well with the image we want to portray). So that idea was scrapped.
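For anyone finding this later, the markup we had considered was just the standard author link in the head pointing at the writer's Google+ profile (the profile URL here is a placeholder):

<link rel="author" href="https://plus.google.com/112345678901234567890/"/>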
-
Schema - Has it worked for you?
Hi all! For a while now I've been trying to convince the higher-ups that Schema (or related semantic markup languages) would be a plus for our sites. We're predominantly in ecommerce, and so far I still haven't noticed any of our competitors really using Schema, so I was asked to hold off on it until other projects had been handled. But now the coder I had been working closely with has moved to another job, so I basically need to re-explain all of this to someone else. That's not really the point of this discussion post, though.
I recently read Barry Schwartz's post on the Google Structured Data Markup Helper Tool over on Search Engine Land [link] and thought it'd be a good time to bring up Schema to management again. But first, I wanted to ask how Schema has worked for the rest of you. Have you seen any increases in clickthroughs? Have rich snippets in Google made you more visible? Have you seen a decent ROI for the time spent on implementing it? Are you seeing an increase in people implementing it across your verticals? Have you found certain types of semantic data are less useful than others? Any tips, tricks, pointers concerning their use and implementation for those who are considering it? Any good case studies you can share?
Thanks all!
-
RE: Will really old links have any benefit being 301'd
Personally, I wouldn't worry about redirecting a handful of pages that have been 404ing for 6-7 years. Odds are they don't rank for anything, Google has removed them from the index, and they have little to no traffic going to them. I don't think redirecting them to a relevant live page would hurt you... I just don't think you'd gain much from doing so.
-
RE: Rel Canonical
When looking at your campaigns, it's important to remember the differences between the three sections in the Crawl Diagnostics. "Errors" are the things you want to prioritize fixing on your site, "Warnings" are things you should consider tweaking and/or fixing when you have the time but are not necessarily huge concerns, and "Notices" are just interesting facts about your site, such as how many Canonicals or 301 Redirects Mozbot found.
-
RE: Duplicate content or titles
As to #1, set your preferred domain in Webmaster Tools to either the WWW or the non-WWW version. Then set 301 redirects from the other to your preferred version. This way you won't run into the bots seeing them as two different pages with duplicate content when they're really the same page.
-
RE: IS there such a thing as a Link Juice Viewer?
Link Juice isn't a real thing exactly... it's an expression of a concept. (Personally I prefer the term Link Equity... sounds more professional to my ears.) The closest thing to a "link juice viewer" that I know of is Google Analytics.
The best way to see how it could be flowing through your site is to figure out which of your pages are the biggest landing pages and which have the largest number of backlinks. You may want to weight different incoming links based on the relevancy of the linking site, but since we don't know the numbers Google uses, you may as well use arbitrary numbers for an approximation. Count how many links are on your heavily linked-to pages and divide. Every page linked from that page gains that percentage of the total. Keep going further down the path and eventually you'll see which pages are too many steps away from pages with good, relevant backlinks. Those pages aren't getting much love and could use some good, natural links to them or to the nearby pages that link to them.
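To put toy numbers on it: say a page has 100 units of equity and 20 links on it; each linked page gets roughly 100/20 = 5 units. If one of those pages has 10 links of its own, the pages it links to get roughly 5/10 = 0.5 units each. The real calculation is far more involved than that, but dividing at each step is enough to spot which pages are starved.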
Mostly I just approximate in my head instead of doing any "real" fake math.
-
RE: Should I noindex the site search page? It is generating 4% of my organic traffic.
Since numerous search results pages are already in the index, then yes, you want to use the NoIndex tag instead of a disallow. A robots.txt disallow would stop Google from recrawling those pages and ever seeing the tag, while the NoIndex tag will slowly lead to the pages being removed from the SERPs and the cache.
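To be specific, that's the standard robots meta in the head of each search results page, e.g.:

<meta name="robots" content="noindex"/>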
-
RE: Sudden Unexplained Drop Should I Disavow
If you haven't received an Unnatural Links warning, then I wouldn't start disavowing random links you think might be harming you... as that could just hurt you more. It's possible, given all the recent flux from Penguin, that this is just some more shuffling. It could also be caused by the regular shuffling within the SERPs that happens as your competitors make updates and tweaks to their sites and move up.
I'd suggest a small audit of your site... see if any pages could use tweaking, add fresh content to any thin pages, work on gaining some relevant & natural links to your site, etc. etc.
-
RE: Should I noindex the site search page? It is generating 4% of my organic traffic.
Google Webmaster Guidelines suggests you should "Use robots.txt to prevent crawling of search results pages or other auto-generated pages that don't add much value for users coming from search engines."
-
RE: Duplicate Content
For the most part, tags in a blog only have value in that they group together posts on similar topics. You should stay away from giving any post a one-off tag where it winds up being the only post ever to sit on that tag archive page. Because your site shows full articles instead of snippets, a blog post and any one-off tag archive page it appears on would be duplicates of each other. Other tag archives with varied posts would not be straight duplicates of any specific post. Normally people NoIndex their tag archives to avoid duplicate content issues.
-
RE: Blog Categories
I don't know HubSpot that well because all my blogging experience is with WordPress... I'd assume HubSpot has a way in the backend, similar to WordPress, to change the page layout to snippets instead of the full article (which should cut down on the long scroll and lessen dupe content issues & excessive links) and a way to change the number of articles shown on one page. Hopefully someone with more HubSpot experience chimes in, but while you're waiting I'd double-check any layout options you have the ability to tweak to see if those could help you.
-
RE: Website not being indexed by Google - does seomoz have a index checker?
First off, have you tried searching for your domain name in all the relevant search engines?
If you're not showing that way, have you tried copying a line of text from one of your pages and searching for that phrase to see if your page appears?
Do all of your pages that should be indexed have the correct meta robots? It's possible they were set to NoIndex.
Have you checked to make sure you didn't accidentally disallow the pages in the robots.txt file?
If you do a site operator search in Google are you seeing a relatively correct number of pages returned?
Have you uploaded a sitemap to Google Webmaster Tools yet? If so, is it old and does it need updating?
-
RE: Is this a clear sign that one of our competitors is doing some serious black-hat SEO?
That alone doesn't prove much of anything other than that they had a massive increase in links. It could be due to spammy link building... or it could be a legitimate increase. Personally my first assumption would be black hat, but you need more data than just that. Do they do any social or blogging? Maybe they recently posted something that garnered a lot of attention. They may have created good linkbait for the first time ever. Maybe they were in the news, or a popular website mentioned them in a post that led to shares & links.
If it is all spammy, they'll get hit soon enough.
-
RE: Best way to link 150 websites together
Why do they want all their shops linked together anyway?
Honestly, if you really have to link them together, then only link the sites that are relevant to one another and NoFollow the links, something like the markup below. That should/could lessen any signals telling Google that your family of sites is just a link scheme. It's more a precautionary measure than anything. Don't worry if there isn't a chain of links that eventually ties every single one to the others somehow... you don't want that. If it's not a link scheme, then you want to do whatever you can to make all of those links look as legitimate as possible so there are no questions about whether an algorithmic penalty is in your future.
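Just so we're talking about the same thing, the NoFollow is the rel attribute on each cross-site link (domains here are made up):

<a href="http://shop2.example.com/" rel="nofollow">Our other widget shop</a>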
-
RE: Why is moz saying I have a 404 error?
I've found times when I've cleared errors from Google Webmaster Tools and, despite the link still being broken, the error wouldn't reappear in WMT for weeks.
Has the offending page been 301'd or the originating incorrect links fixed to point to the correct place?
As for Mozbot, did it just start the crawl or just finish it? And when was your 404 fixed? It could be that Mozbot crawled the page before you fixed it, so it's still saying the page is broken. Or Roger could be drunk again and crawled an incorrect link he remembered from last time despite it no longer being there.
-
RE: Best way to link 150 websites together
Either way you cut it, sounds like a poor man's linking scheme to me.
Have they considered, perhaps, one or two sites with a bunch of relevant categories? Somehow I feel like starting and running 100-200 legitimate shops like that would be more of a headache than just taking all the related products and slapping them together.
-
RE: How many keywords is too many?
Whatever sounds natural.
There is no hard and fast number or percentage concerning keywords but if it looks like your page is trying waaaay too hard to be found for "blue widgets" or it sounds really unnatural when read out loud... then you've probably used the term too much.
-
RE: How to Stop Google from Indexing Old Pages
After reading the further responses here I'm wondering something...
You switched to a new site, can't 301 the old pages, and have no control over the old domain... So why are you worried about pages 404ing on an unused site you don't control anymore?
Maybe I'm missing something here or not reading it right. Who does control the old domain then? Is the old domain just completely gone? Because if so, why would it matter that Google is crawling non-existent pages on a dead site and returning 404s and 500s? Why would that necessarily affect the new site?
Or is it the same site but you switched to Java from PHP? If so, wouldn't your CMS have a way of redirecting the old pages that are technically still part of your site to the newer relevant pages on the site?
I feel like I'm missing pertinent info that might make this easier to digest and offer up help.
-
RE: Show wordpress "archive links" on blog?
Much like Matthew, I feel that keeping the Archive links would depend on how else you're interlinking content for users and your personal preference. Odds are that your posts are tagged... so your users can find the older, related content that way. If you want a visual representation of how often you post to your site, there's the Calendar widget and other similar plugins that will link to your older posts. You can have a date archive list of posts (but the longer you're around and posting, the more overwhelming that will get, adding far too many links) or you can have a dropdown menu pointing to your date archives. Then, of course, there's a Search Bar... let users find what they want that way instead of offering up 4000 different ways to get to those archives. If you think your users will have a need for any of those and they add to the user experience, then go right ahead with them. If they just clutter up your page and offer little extra value, then there's no real need for them.
For SEO purposes the archives have little to no value, create duplicate content, and all those links will just dilute the link equity being passed. But it's more important to consider the impact on ease of use for visitors. Ask yourself the following: Will this help visitors? Do we need six ways to get to the same thing? Is there a better way to show them the same information? Does it make my site more easily navigated or just clutter things up?
-
RE: Where are my SEO Moz resources
Which resources are you looking for? Depending on what page of Moz you're on you can either find the Campaigns button in the top nav or in the top right corner and Research Tools will be in the top nav next to the Campaigns link.
-
RE: How to Stop Google from Indexing Old Pages
Have you submitted a new sitemap to Webmaster Tools? Also, you could consider 301 redirecting the pages to relevant new pages to capitalize on any link equity or ranking power they may have had before. Otherwise Google should eventually stop crawling them because they are 404. I've had a touch of success getting them to stop crawling quicker (or at least it seems quicker) by changing some 404s to 410s.
-
RE: Site explorer Issue
It may just be that Mozbot hasn't crawled these links yet. Open Site Explorer is great, but it does have its limitations and should be augmented with other backlink research tools.
-
RE: Traffic Data Report Error: Anyone else?
Last crawl I had the same problem. One of my campaigns dropped from 8000 visits to 0. The other campaigns were fine. According to Google Analytics everything was fine. I'm waiting to see if the current moz crawl happening for me right now fixes that issue.
-
RE: Does Googlebot Read Session IDs?
Safest bet: set up canonicals that point to the page minus the parameter, so even if Google does read the session IDs it will understand that they relate to the canonical link. Honestly, I'm not 100% sure whether Google reads those session IDs either, and I've seen conflicting information. I know it reads other parameters as separate URLs... I had a few issues with the way one of our sites handled products (sometimes it was ?model= and sometimes it was ?prod_id= and some old products also had ?sku=). But adding the canonicals will solve this problem if it exists, and if the problem doesn't exist, it won't hurt to have a self-referential canonical sitting in the code in case someone scrapes your site.
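Concretely, on a URL like /product.html?sessionid=abc123 (the parameter name is just an example) you'd put this in the head so every session variant points back to the clean URL:

<link rel="canonical" href="http://www.example.com/product.html"/>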
-
RE: WordPress Plugin Backlinks?
It sounds like one of two things will happen given your explanation. Either the links will count because they are relevant, or Google will decide not to count them because of their nature as embedded widget links. I don't see this hurting you currently, though, with the way Google handles these things, since it's not a necessarily spammy link and it's not hidden deep in some code somewhere trying to trick people.
-
RE: WordPress Plugin Backlinks?
The Matt Cutts video you're thinking of might be this one from Oct 2012, wherein he basically says the algorithm won't count those links because they aren't editorially included and aren't truly organic.
-
RE: Traffic Discrepancy between google and moz
I think Roger got a little drunk. I just noticed a similar issue in one of my campaigns. Everything is perfectly fine in GA but Moz shows a 100% drop in Organic Search Visits from 8,000+ visits one week to 0 at the last crawl. All the other metrics look fine though.
-
RE: Does "Using a dash in keyword name" affect SEO?
Matt Cutts has stated before that a dash (-) in a URL is interpreted similarly to a space, while an underscore (_) is interpreted as a connector. http://youtu.be/AQcSFsQyct8
In your case, if Co-matic and Comatic are synonymous then I would choose whatever the actual Brand name is. Odds are Google will understand 1) that the brand is Co-matic and 2) that "Comatic" is "Co-matic" when looking for machinery.
Also, "Comatic" has the dictionary definition "of, pertaining to, or blurred as a result of a coma" which I don't think would be advantageous to have associated with your site.
-
RE: Duplicate title/content errors for blog archives
Most people NoIndex their blog archives to stop duplicate content errors. So tag archives, month and year archives, etc. should be set to NoIndex,Follow.
As for pagination issues, rel="next" and rel="prev" were created to show pages in a series so those might need implementation if they haven't been already.
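For anyone implementing both, it ends up looking something like this (URLs are placeholders):

<!-- on tag/month/year archive pages -->
<meta name="robots" content="noindex,follow"/>
<!-- on page 2 of a paginated series -->
<link rel="prev" href="http://www.example.com/blog/page/1/"/>
<link rel="next" href="http://www.example.com/blog/page/3/"/>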
-
RE: Auto Generated Title by Google in SERP
This happens far more frequently than you probably assumed. I do however find it amusing that one search will return my page with the correct title and then a similar search will return my page with an altered title. Seems almost silly that a synonym can completely change how my site's information appears in the SERPs... but it happens. On a positive side though, sometimes that may actually help to make you more relevant to a search you didn't think of while not hurting your positions for the terms you're trying to rank for.
-
RE: Are Tags in Blogs good?
The only potential value in blog tags is usability. The SEO value is negligible at best, and most people wind up noindexing tag archives anyway to save themselves from duplicate content issues. The value of relevant tags on a blog is that they tie together related articles on a subject, making them more easily accessible under something akin to an overarching theme. Use them for the sake of giving human users another way to find related content that may keep them on site longer.
-
RE: Where is my Hug from Roger?
I've never had the opportunity to see Roger in person... but when I do I expect a hug from him.
-
RE: HTML Encoding Error
Could be, I suppose. But it's been happening on and off for months now. I mostly stop caring after a bit, clear out the errors, and get annoyed when I see it pop up again. It's one of those things that doesn't actually cause a problem, but I can't help feeling irked by its existence. All in all, I'm perfectly fine with the solution being "Google is wrong, leave it alone"... that's basically what I've been doing anyway.
-
RE: HTML Encoding Error
Sorry, I wasn't getting email notifications that people had answered. I checked with our remaining coder, who said that was there on purpose (much like Highland stated); he's going to take a deeper look into it once he has the chance but doesn't know why it's showing up like that.
-
RE: Seomoz crawl: 4XX (Client Error) How to find were the error are?
Try plugging the URL into Open Site Explorer. There's a good chance that if Mozbot crawled it and found the 404, it will also list the inbound links for it in OSE.
-
RE: Capitalization matters? Are Keywords treated as Case Sensitive?
In some cases, the capitalized and lowercase versions of a search won't return the EXACT same results in the SERPs (sometimes a result will shuffle a bit, though not by much from what I've seen), but Google does understand that "Example" and "example" are the same word, much the same way it knows the singular and plural of a word are related, i.e. "Shoes" will show up for a search of "Shoe" and vice versa.
If you also take into account the regular shuffling of the SERPs every time Google so much as blinks, it's possible that when the rankings data was pulled for your terms, it produced two different numbers that were both correct at the times they were pulled. Plus, in the past week (week and a half?) there was a decent bit of volatility in the SERPs that led people to believe an algorithm change was happening, so for all you know that may have put an odd hiccup in the numbers as well.
Worst comes to worst, you could always do a bit of testing to see if maybe you stumbled on a previously unknown traffic/rankings difference. There shouldn't be one, but with 500+ algorithm changes a year, you never know.
-
RE: CSS Display None / Hidden? Will I get in Trouble?
We use CSS-hidden content behind a clickable "Read More" on pretty much all of our sites, in places where we want certain things viewable above the fold (mainly for aesthetic reasons). We've never seen any issues with Google not giving proper weight to the hidden content or flagging it as cloaking.
In your case, though, I'd be concerned that having the transcripts for 12+ videos hidden on one page will make it less likely that each video will be found as easily as it should be. E.g., a video that would be perfectly relevant to a search may not appear in the SERPs for relevant terms because it is overshadowed by one or more of the other transcripts. IMO, it might be more search-friendly and user-friendly to instead have a hub page linking out to individual pages, each hosting one video and its related content.
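For reference, our "Read More" setup is essentially this pattern (IDs and class names are purely illustrative):

<style>.read-more-body { display: none; }</style>
<div>
  <p>First paragraph of the content, visible above the fold...</p>
  <a href="#" onclick="document.getElementById('more-1').style.display='block'; return false;">Read More</a>
</div>
<div id="more-1" class="read-more-body">
  <p>The rest of the content, hidden until the visitor clicks.</p>
</div>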
-
RE: Bullet points good or bad for seo?
I've found that people like bullet points... they give quick information without needing to read the rest of your content. Of course, the problem of duplicate content arises when you use the same bullets across a large number of products, or all you do is change a word or two between every page. If the bullets are all the same, then what's their value to the user? Bullet points (like all content) should be relevant and fresh. They should highlight the specific advantages of that product, i.e. what makes it special/different/unique/important/worth buying?
-
RE: "INDEX,FOLLOW" then later in the code "NOINDEX,NOFOLLOW" which does google follow?
I've never actually had any errors listed for non-indexable content in the HTML Improvements section of WMT, so I'm not 100% sure what would set off that notification, though the sites I work on do have a number of pages that are NoIndex and/or NoFollow. So I guess the issue would be caused not by purposefully blocking the page but by some other means that makes your page unable to be crawled properly.
-
RE: "INDEX,FOLLOW" then later in the code "NOINDEX,NOFOLLOW" which does google follow?
If you copy a string of text from the page and paste it into Google search, does your page show up in the results? If so, then it's being indexed despite the second robots tag. If it doesn't show up, then it's not being indexed. So the importance depends on whether you want that page indexed and whether or not it actually is. Either way, you should look into cleaning that up at some point.
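For reference, the conflict in question looks like this in the source; my understanding (worth verifying for your case) is that Google honors the most restrictive value it finds:

<head>
  <meta name="robots" content="index,follow"/>
  <!-- ...further down the head... -->
  <meta name="robots" content="noindex,nofollow"/>
</head>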
-
RE: Is there a report that shows how many times your keywords are searched?
Not that I know of... but even if Moz offered one, it would only be an approximation of search volumes based on trends, whereas going through something like AdWords gives you the data straight from the source (more or less). There are also other tools out there to help with that data, such as Wordtracker (like BeardoCo mentioned) and SEMrush, which offers some organic keyword volumes along with the rest of its data.
-
RE: Is there a report that shows how many times your keywords are searched?
You can get some relative search numbers using the Google Adwords keyword tool.
-
RE: HTML Encoding Error
The page is being linked to only from internal pages on the site, not from any outside websites or scrapers. Some of the pages WMT says the incorrect page is being crawled from are listed above.
-
HTML Encoding Error
Okay, so this is driving me nuts because I should know how to find and fix this but for the life of me cannot. One of the sites I work for has a long-standing crawl error in Google WMT for the URL /a%3E that appears on nearly every page of the site. I know that a%3E is an improperly encoded > but I can't seem to find where exactly in the code it's coming from. So I keep putting it off and coming back to it every week or two, only to wrack my brain and give up after about an hour (since it's not a priority and it's not really hurting anything). The site in question is https://www.deckanddockboxes.com/ and some of the pages it can be found on are /small-trash-can.html, /Dock-Step-Storage-Bin.html, and /Standard-Dock-Box-Maxi.html (among others). I figured it was about time to ask for another set of eyes to look at this for me. Any help would be greatly appreciated. Thanks!
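In case it helps anyone hunting the same ghost: my best guess is a malformed anchor somewhere in a shared template, where a stray > ends up inside the href quotes and the crawler treats "a>" as a relative link. Purely hypothetically, something like:

<a href="/a>">Broken link</a>

I haven't found the actual culprit yet, so treat that as a guess, not a diagnosis.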
-
RE: Redirecting a redirect - thoughts?
It is usually better to redirect one-to-one instead of redirecting to a redirect. Where possible, chains should be avoided because if you stack too many redirects on top of each other, Google will eventually stop following them. There should be no negative impact from changing an old redirect to point at a more relevant page. You may want to double-check that the level of traffic and inbound links pointing to the page is worth the trouble of the 301 instead of letting it 404. Sometimes it can be better to just let a page die if no one is clicking through to it.
-
RE: Something I'm missing META Description
From what I understand, when combining Open Graph protocol and the meta description it needs to look like:
<meta name="description" property="og:description" content="My meta description copy."/>
-
RE: Mozcast: 5th & 9th May - what's shaking up?
Best I can tell, it's a lot of crazy speculation. I've seen comments and articles where people state it's a Penguin refresh, or their traffic is all over the place, or pages that previously ranked positions 3-10 dropped to 50 while ranks 1 & 2 stayed safe.
Looking at my own analytics data... just another normal week. We'll get more info as the days go on.