Hi! Okay, first, here's the hard data you seek:
http://beingyourbrand.com/2012/10/21/social-media-case-studies-twitter-successes-to-learn-from/
And then this, Facebook custom audiences....I don't understand why more marketers aren't using these:
Hi! I'm sure you are very excited to see your business get off the ground. Don't bother submitting to the Yahoo Directory. Yahoo announced that they are shutting it down: http://techcrunch.com/2014/09/26/yahoo-to-shut-down-qwiki-yahoo-education-and-the-yahoo-directory/
How do you get more links...now that is the million dollar question. I have two recommendations:
If those don't give you at least a year's worth of ideas and things to do, listen and read them again. Cheers! I hope your site is a mad success!
Dana
Hi, you must be excited. I know how much work goes into a redesign. Here are some things I noticed while browsing the site on a PC using Internet Explorer (yes, the nasty, unforgiving IE...the bane of Web Devs everywhere!!):
Screenshots attached. Hope that's helpful!
Robert, I have thought the same thing many, many times. I mean, how can you thumbs-down a post by Phil Nottingham on Video SEO...really? Or better yet, an announcement of a feature improvement, new tool, or conference discount? Crazy people!
P.S. I was tempted to thumbs down this question just to mess with you, lol. Undoubtedly some negative Nelly out there will do it anyway...but it ain't gonna be me!
Moosa is spot on. His advice is absolutely right. Good luck!
Hi Vadim,
Here is what I consider to be one of the best posts on PageRank and how it works: http://www.webworkshop.net/pagerank.html
Here is a quote about the potential of increasing PageRank by increasing the number of pages on a site:
"Example 5: new pages
Adding new pages to a site is an important way of increasing a site's total PageRank because each new page will add an average of 1 to the total. Once the new pages have been added, their new PageRank can be channeled to the important pages. We'll use the calculator to demonstrate these.
Let's add 3 new pages to Example 3. Three new pages but they don't do anything for us yet. The small increase in the Total, and the new pages' 0.15, are unrealistic as we shall see. So let's link them into the site.
Link each of the new pages to the important page, page A. Notice that the Total PageRank has doubled, from 3 (without the new pages) to 6. Notice also that page A's PageRank has almost doubled.
There is one thing wrong with this model. The new pages are orphans. They wouldn't get into Google's index, so they wouldn't add any PageRank to the site and they wouldn't pass any PageRank to page A. They each need to be linked to from at least one other page. If page A is the important page, the best page to put the links on is, surprisingly, page A. You can play around with the links but, from page A's point of view, there isn't a better place for them.
It is not a good idea for one page to link to a large number of pages so, if you are adding many new pages, spread the links around. The chances are that there is more than one important page in a site, so it is usually suitable to spread the links to and from the new pages. You can use the calculator to experiment with mini-models of a site to find the best links that produce the best results for its important pages."
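To make the arithmetic in that quote concrete, here is a rough sketch, assuming hypothetical page names and a made-up link structure (not the article's actual examples), of the simplified PageRank formula the article uses, run before and after two new pages are linked to and from the important page:

```python
# A rough sketch (not from the quoted article) of the simplified PageRank
# formula it uses: PR(p) = (1 - d) + d * sum(PR(q) / outlinks(q)),
# run on a hypothetical mini-site before and after adding two new pages.

def pagerank(links, iterations=50, d=0.85):
    """links maps each page to the list of pages it links out to."""
    pr = {page: 1.0 for page in links}
    for _ in range(iterations):
        pr = {
            page: (1 - d) + d * sum(
                pr[q] / len(links[q]) for q in links if page in links[q]
            )
            for page in links
        }
    return pr

# Hypothetical mini-site where page A is the "important" page.
before = {"A": ["B"], "B": ["C"], "C": ["A"]}
after = {  # two new pages (D, E) linked to from A and linking back to A
    "A": ["B", "D", "E"],
    "B": ["C"],
    "C": ["A"],
    "D": ["A"],
    "E": ["A"],
}

for label, site in (("before", before), ("after", after)):
    pr = pagerank(site)
    print(label, {p: round(v, 2) for p, v in sorted(pr.items())},
          "total:", round(sum(pr.values()), 2))
```

In this toy model, page A's PageRank roughly doubles once the new pages link back to it, which is the same effect the quoted example describes.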
So you see, it could be that those additional pages have potential to really help your site, but perhaps they aren't optimized in terms of the internal linking structure. Before deleting a bunch of content you worked hard to create, I would take a look at how those new pages are being linked to and what pages they are linking to. Study the internal architecture and you will most likely find your answer.
Long answer, I know, but I hope it helps!
Thanks David. Yes, we did this for Travis who responded above. Would you be interested in taking a look as well? If you private message me your username I can have my IT Director set up access for you.
Hi Ruben,
No self-promotion here, but I am going to recommend a friend: try http://www.kwasistudios.com/about/team/ - I met Woj at MozCon this year and he's super smart and an upstanding guy, and he has a great copywriter on his team. Tell him I sent you. He's on Twitter, of course: @WojKwasi. His group is based in Adelaide, Australia.
If he can't do it I'm sure he can recommend someone.
Hope that's helpful!
Dana
Hi,
Well, in general I agree with Arjan. I am strongly of the opinion that some directories are still VERY worthwhile, and BOTW is one of them. When you are launching a brand new site, you just aren't going to have a lot of attention and inbound link opportunities at first. Submitting your site to DMOZ (which is free), Yahoo.dir (Paid), Business.com (Paid), JoeAnt.com (Paid) and yes, BOTW (Paid) are all most definitely worthwhile. IMHO they send a signal to the search engines that you are a legitimate business. That's a very important message to send when you are a startup.
In addition to those I think there are some others that are worth it as well, depending on your particular business, particularly localized directories and directories targeting certain niche markets.
Hope that's helpful!
Dana
Hi Marissa,
Here's an example:
<meta name="description" content="Your awesome meta description content goes inside the quotes, here">
Hope that helps!
It looks to me like Google hasn't indexed the new URL versions of the pages yet; the ones ranking appear to be the old versions of the URLs, and those are 301-redirecting, which can be slow depending on where the searcher is located. It could be a speed issue that's causing the drop in rankings.
I didn't see a canonical tag on the new version of the pages. Adding those could help Google identify which page is the preferred version. It's going to take time for those old versions of your URLs to drop out of Google's index. You could use the URL removal tool in GWT, but if it's a large site this can be cumbersome.
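If it helps to verify what's actually going on, here is a rough diagnostic sketch; the URL is a placeholder (not from this thread) and it assumes the third-party `requests` package is installed. It prints the 301 chain for an old URL and does a crude check for a rel="canonical" tag on the destination page:

```python
# Inspect the redirect chain for an old URL and crudely check whether the
# destination page declares a rel="canonical" tag. Placeholder URL below;
# requires the third-party "requests" package.
import requests

old_url = "http://www.example.com/old-page"  # hypothetical old URL

resp = requests.get(old_url, allow_redirects=True, timeout=10)

# Each hop in the redirect chain, in order.
for hop in resp.history:
    print(hop.status_code, hop.url, "->", hop.headers.get("Location"))
print(resp.status_code, resp.url)

# Very rough canonical check; a real audit would parse the HTML properly.
print('rel="canonical" present:', 'rel="canonical"' in resp.text)
```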
Just some thoughts, not definitive by any means. I am curious to hear what others might have to say.
For those unaware of the news, here is a link to the Wall Street Journal article:
Travis, if you are willing to take a closer look, I can have our IT Director allow you to view the page by unblocking just your IP address. If you send it to me via private message I'll have him do that. Thanks in advance!!
Ah I was afraid of that. Yes, only visible on our internal network. I am going to do a short video capture and also check with IT to see if I can get limited access so the page can be viewed externally. Thanks Travis!
Hi guys. Take a look at the navigation on this page from our DEV site:
http://wwwdev.ccisolutions.com/StoreFront/category/handheld-microphones
While the CSS "trick" implemented by our IT Director does allow a visitor to sort products based on more than one criterion, my gut instinct says this is very bad for SEO. Here are the immediate issues I see:
Aside from these two big problems, are there any other issues you see that arise out of trying to use CSS to create product filters in this way? I am trying to build a case for why I believe it should not be implemented this way. Conversely, if you see this as a possible implementation that could work if tweaked a bit, any advice you are willing to share would be greatly appreciated. Thanks!
Thank you to Travis for pointing out that the link wasn't accessible. For anyone willing to take a closer look, we can unblock the URL based on your IP address. If you'd be kind enough to send me your IP via private message, I can have my IT Director unblock it so you can view the page. Thanks!
Hi Vadim,
My initial response/question would be: If you are willing to consider de-indexing those pages, why not just remove them from the site completely?
Perhaps I am misunderstanding. Maybe this is what you are thinking of doing anyway? It is not uncommon on larger sites for a very small number of pages to be driving almost all of the traffic. Still, there may be people linking to some of those pages, and that may be helping you even if you aren't getting traffic from them. It sounds like keyword cannibalization could be a possibility, but I can't be sure without taking a deeper dive. If you added pages targeting substantially similar keywords, it could be that Google is having difficulty determining which page is more important for a given term. Consequently, neither page does as well as it could if there were only one.
Generally speaking, more pages on a site is a good thing, but only when the content is really unique and fulfills a need or want from your audience.
What did you do to promote this new content? Sometimes it takes some serious effort and coordination to get a piece of content noticed. The days of "If we build it they will come" are long gone. Maybe you just need to promote those new pages?
Just some thoughts. Cheers,
Dana
Yes, it sounds like perhaps there is a technical issue here. I like Keri's suggestion below. Also, have you grepped your server logs to see if Googlebot is having issues?
It can take Google a long, long time to drop search results for old pages that either don't exist anymore or that 301 to a new page. You may have to resort to using the removal tool. I realize that for 2,000 URLs doing these one at a time is inconvenient, but it may just be what you have to do.
I have some old notes on domain migration that I'll try to dig up, but unfortunately I don't think there's much there that's helpful after the fact. But I'll see what I can find.
I agree with both of the previous suggestions and thought I would add a comment and a question too.
Seeing a decline of 50% or even more in traffic after a site migration is not uncommon. Hopefully your clients went into the migration with eyes open, knowing that they could see significantly lower traffic for anywhere from 6 weeks to a year, and maybe never fully recover. This sometimes happens. That's why the planning process is so important (and management of expectations).
That being said, when you installed Google Analytics on the new site, did anything change in your GA tracking code? Sometimes this happens and can lead to old analytics reports and new analytics reports not being an "apples to apples" comparison. It's just a thought. It could be that the traffic isn't actually 50% lower, but has changed much less than that.
Has revenue (or whatever your conversion goal is) dropped, increased or stayed the same?
My answer would be "it depends." If the authority of the linking site is really high, and there's reliable contact information, I'd say "yes, go for it." Ultimately, it's better not to have those 301s. However, if it's a small blog and the owner isn't adept with SEO, they may not even know what a 301 redirect is. If you start speaking in terms they don't understand, it could erode trust. I'd play it by ear. If you think you've got a good contact that can get it done pretty easily go for it, otherwise, leave them be. Hope that helps even though it's not a cut-and-dried answer!
Howdie,
Yes, I believe we got this sorted out. Interestingly, it wasn't any of the suggestions made here causing the 301 status code responses. I posted a thread in Google Webmaster Tools Forum regarding the issue and received a response that I am 99.5% sure is the correct answer.
Here is a link to that thread for future readers' reference: https://productforums.google.com/forum/#!mydiscussions/webmasters/zOCDAVudxNo
I believe the underlying issue has to do with incorrect handling of a redirect for this domain: ccisound.com
I am currently pursuing getting it corrected with our IT Director. Once the remedy is in place, I should know right away if it solves the issue I am seeing in the server logs. I'll post back here once I am 100% certain that was the issue.
Thanks all! This has been an interesting one for me!
They are pretty detailed. I'll send you yesterday's in a zip file so you can take a look. I'm certain they have everything needed. Thanks Eric!
Thanks so much Eric. Yes, I was thinking about the mobile version of our site being related to what I'm seeing too. However, I am unaware that we 301 redirect anything from the main site to the mobile site. In fact, users can actually switch to the mobile site via desktop by clicking "Mobile Site" in the footer and then browse the mobile version of the site via desktop. All of the URLs are identical.
Just out of curiosity I browsed to the mobile version of our site, grabbed a URL and then plugged it into "Fetch as Googlebot" in GWT. For all options, including desktop and the three mobile options a status code of 200 was returned.
Here is the response from my IT Director regarding the possibility that this is being done by our DNS manager:
"I do not believe so. Our DNS does translation of human readable names to IP address. It has nothing to do with the status being returned to a browser, and even if it did it could not write to the log file."
Is this accurate? I understand that the DNS cannot write to the log file, but if the DNS can flag a request to receive a certain status code from the server, then this scenario would still be a possibility.
According to our IT Director we have no spam filters, no mod_security module, absolutely nothing on our server to prevent it from being crawled by bot, human or spider from any IP address, including black-listed IPs.
To me, other than the obvious (no security is probably not a good idea at all), that means that the 301 status codes are being returned because of a problem with the server setup.
I do have server logs that I'd be willing to share privately with anyone who's willing to take a gander. Don't worry, I won't send you a month's worth. 1-2 days should be plenty.
In the meantime I am going to dive in and take a look further. It's entirely possible that IPs from Google are not the only ones receiving nothing but 301 status codes in response to requests.
Thanks William. Good suggestion. I am on it! I'll post back here once I know more.
Excellent thoughts! Yes, they are consistently the same IP addresses every time. There are several producing the same phenomenon, so I looked at this one 66.249.79.174
According to what I can find online this is definitely Google and the data center is located in Mountain View, California. We are a USA company, so it seems unlikely that it is a country issue. It could be that this IP (and the others like it) are inadvertently being blocked by a spam filter.
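As a sanity check, the reverse-and-forward DNS lookup Google recommends for verifying Googlebot can be scripted in a few lines; this is just a sketch using the IP above and requires network access:

```python
# Verify a suspected Googlebot IP with the reverse + forward DNS check
# that Google documents. Requires network access.
import socket

ip = "66.249.79.174"

host, _, _ = socket.gethostbyaddr(ip)     # reverse lookup, e.g. crawl-...googlebot.com
resolved_ip = socket.gethostbyname(host)  # forward lookup of that hostname

is_googlebot = host.endswith((".googlebot.com", ".google.com")) and resolved_ip == ip
print(host, resolved_ip, "verified" if is_googlebot else "NOT verified")
```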
It doesn't matter the day or time, every time Googlebot attempts to crawl from this IP address our server returns 301 status codes for every request, with no exceptions.
I am thinking I need to request a list of IP addresses being blocked by the server's spam filter. I am not a server administrator...would this be something reasonable for me to ask the people who set it up?
Is returning a 301 status code the best scenario for handling a bot attempting to disguise itself as googlebot? I would think setting the server up to respond with a 304 would be better? (Sorry, that's kind of a follow-up "side" question)
Let me know your thoughts and I'm going to go see if I can find out more about the spam filter.
I have begun a daily process of analyzing a site's Web server log files and have noticed something that seems odd. There are several IP addresses from which Googlebot crawls that our server returns a 301 status code for every request, consistently, day after day. In nearly all cases, these are not URLs that should 301. When Googlebot visits from other IP addresses, the exact same pages are returned with a 200 status code.
Is this normal? If so, why? If not, why not?
I am concerned that our server returning an inaccurate status code is interfering with the site being effectively crawled as quickly and as often as it might be if this weren't happening.
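For anyone who wants to reproduce this kind of check, here is a rough sketch; the log file name is a placeholder and it assumes a standard combined log format, so the regex may need adjusting for your server:

```python
# Tally HTTP status codes per client IP for requests whose user-agent
# claims to be Googlebot, from a combined-format access log.
import re
from collections import Counter, defaultdict

LOG_FILE = "access.log"  # placeholder file name
line_re = re.compile(
    r'^(\S+) \S+ \S+ \[[^\]]+\] "[^"]*" (\d{3}) \S+ "[^"]*" "([^"]*)"'
)

status_by_ip = defaultdict(Counter)

with open(LOG_FILE) as f:
    for line in f:
        m = line_re.match(line)
        if not m:
            continue
        ip, status, user_agent = m.groups()
        if "Googlebot" in user_agent:
            status_by_ip[ip][status] += 1

# IPs that only ever receive 301s will stand out here.
for ip, counts in sorted(status_by_ip.items()):
    print(ip, dict(counts))
```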
Thanks guys!
Excellent answer. Thanks so much Doug. I really appreciate it! Adding a "nofollow" attribute to the Checkout button is a good suggestion and should be fairly easy to implement. I realize that internal nofollows are not normally recommended, but in this instance it may not be a bad idea.
This question came to mind as I was pursuing an unrelated issue and reviewing a site's robots.txt file.
Currently this is a line item in the file:
Disallow: https://*
According to a recent post on the Google Webmaster Central Blog, [Understanding web pages better](http://googlewebmastercentral.blogspot.com/2014/05/understanding-web-pages-better.html), Googlebot is getting much closer to being able to properly render JavaScript. Pardon some ignorance on my part because I am not a developer, but wouldn't this require Googlebot to be able to execute JavaScript? If so, I am concerned that disallowing Googlebot from the https:// versions of our pages could interfere with crawling and indexation, because as soon as an end-user clicks the "checkout" button on our view-cart page, everything on the site flips to https://. If that were disallowed, would Googlebot stop crawling at that point and simply leave because all pages were now https://? Or am I just waaayyyy overthinking it? Wouldn't be the first time! Thanks all!
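As a side note on that Disallow line: robots.txt Disallow values are supposed to be URL paths starting with "/", not full protocol-and-wildcard patterns, so a standard parser tends to ignore `Disallow: https://*` entirely. Here is a quick sketch using Python's built-in robotparser with a hypothetical URL; actual crawler behavior can vary by parser, so treat this as a sanity check only:

```python
# Check how a standard robots.txt parser interprets "Disallow: https://*".
# Disallow values are expected to be paths like "/checkout", so this
# full-protocol pattern is effectively ignored and the URL stays crawlable.
from urllib import robotparser

rules = [
    "User-agent: *",
    "Disallow: https://*",
]

rp = robotparser.RobotFileParser()
rp.parse(rules)

# Hypothetical URL; prints True, i.e. the rule does not block it.
print(rp.can_fetch("Googlebot", "https://www.example.com/checkout"))
```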
You might also try this one recently published on the Moz blog by @CyrusShephard : http://moz.com/blog/google-plus-correlations
I think this could be happening because of the way Google interprets the - I tried it with the opening quote, but not the closing quote and it worked. Notice too that the text that's highlighted (or bolded) in the search result is everything up to the
I agree with David. There are really arguments for going either way. I would give one edge to this method:
www.site.com/category-page/product-page
The advantage to using this instead of the super simple URLs is when you have a really large complex site and you need to move it to another platform. From an organizational standpoint, and just knowing from looking at your URLs what "lives" where, it's much easier if your URLs echo the structure of your site. Still, there are probably some ways to cope with that too, so depending on your CMS, this might not really be a problem.
Hi Dan,
Yes, if you accomplish this with CSS and collapsible/expandable elements, it's totally fine. It's understandable why, from a design standpoint, it might be much more attractive to have a page with fewer words visible on it. Justin Taylor (@justingraphitas) actually did a bang-up job in a Mozinar on designing for SEO that discusses this exact topic: http://moz.com/webinars/designing-for-seo
Hope that helps!
Dana
I checked your site in OSE and looked at both versions of your URL: http://brownboxbranding.com and http://www.brownboxbranding.com
Has something recently changed with the way your domain is redirected? I ask because it looks like the lion's share of authority and links are all on http://www.brownboxbranding.com, which redirects to the "non-www" version.
It seems to me like the redirect should be the other way around. i.e. the "non-www" version of your domain should redirect to the "www" version. I would also make sure that Google Webmaster Tools is set to reflect the correct "preferred" domain.
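One quick way to confirm which direction the redirect currently goes is a single-hop request against both hostnames; a rough sketch, assuming the third-party `requests` package:

```python
# Check which hostname variant redirects to which (single hop, no follow).
import requests

for url in ("http://brownboxbranding.com/", "http://www.brownboxbranding.com/"):
    resp = requests.get(url, allow_redirects=False, timeout=10)
    print(url, resp.status_code, "->", resp.headers.get("Location"))
```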
Anyone else see this as the possible problem?
Have you tried doing an audit of your business citations to see if your NAP information is consistent across all the various business listing sites? Since Moz acquired GetListed.org (now called Moz Local), you can check your own site versus your competitors here: https://moz.com/local/search
You might try this and see if you gain any insights. It's a place to start at least. Hope that helps a bit!
I don't think you came across as defensive at all. I totally get the house-keeping issue. I know the "Bounty" section is something quasi-new...what about the possibility of just moving unanswered questions over there after they've gone unanswered for a set period of time, provided the person who posted responds to admin emails and indicates the question is still unanswered?
Perhaps another option would be for the original poster to reverse the "Answered" status?
I don't think Moz's intent at marking questions as "answered" was to effectively shut-down a topic, but, unfortunately, I do think that's what happens.
I agree with EGOL. I am not looking to see if someone marked my answer as a "good answer" or not, although I am always thankful if they do. What I do is go back to questions I've answered to see if the person responded with another question or needs clarification on something, and I try to help them if I can. Because people who are newer to Q & A often mark a question as "answered" when they read a response they "like" (even if it's not a complete answer), I'll often encourage them to continue to solicit answers from more people so they can get more input from the community.
It would be interesting to see data on how many threads completely stop getting new comments once they are marked as "answered." I bet it's more than 90%...which, from a UGC viewpoint, could mean Moz is losing out on content it would be getting by leaving more threads marked as "unanswered." Hmmm.
Amen! - Side note....I originally posted this discussion topic a week ago and it took me this long to come back and respond. I was really excited to see 13 new comments!
I totally agree with EGOL and Donna about the default view being changed to "Active." If this post hadn't been one of mine, I probably wouldn't have ever found it.
Excellent response! You know, I am here a lot...and I had no idea there was an "Active" view, so I am a perfect example of exactly what you described.
I really like your idea. It looks like Jenn has already picked up the ball and started running with it. That's very cool.
I agree with you EGOL that most often things get marked as "answered" when something is liked, but not necessarily answered. I have seen the thumbs down for answers that aren't necessarily what someone wanted to hear too, but less often lately.
I guess the whole reason I brought it up was because a few times I wanted more varieties of opinion on a question I had asked, but because it got marked as "answered" people stopped looking at it. Sounds like Moz might consider making some changes to the Q & A that could make it better. It's already really good, but I'm sure with some good feedback they can make it even better. Thanks again for chiming in!
Hi everyone,
This is not meant to be snarky at all, so I just want to preface my question with that.
So, since the new re-branded Moz rolled out last year, I'm sure many of you have noticed that if you ask a question and it is answered by a Moz associate, your question is marked as "answered."
I'm sorry, but I don't like this. Here's why:
I'm the one who asked the question. I should be the one who determines if the answer was adequate for me, or if it didn't sufficiently answer my question. This is particularly true when my question doesn't have to do with a customer service issue or a Moz tool question.
If I ask a question about SEO, Content, CRO, marketing or any other subject, I feel like it should be me and only me who determines whether or not I feel like my question is answered.
In addition to this, Moz is actually depriving themselves of useful UGC by shutting down questions in this way. How? Because when the rest of us who frequent the Q & A see a question that's already been marked as "answered" we tend not to open it, read it and respond, because we think that person has already gotten what they needed....when in fact, it could be that a Moz associate has jumped in and marked their question as answered when it really wasn't. Consequently, we all miss out.
I propose/move that Moz associates can only mark questions as "answered" when they pertain directly to Q & A about Moz tools, service and support. All other questions must be marked as "answered" only by the asker or closed as "answered" after they have been dormant for 6 months or more.
Can I get a second (motion) ?
Hi Pawan,
You're welcome! Yes, I believe you are correct in saying that the data highlighter really only translates to Google right now. However, it seems Bing and Yahoo! really are doing very little with structured data right now. I think your industry determines your timeline for adding the markup. If you are in the restaurant, food or travel industry, I think you really have to start now just to stay competitive. If you're in a niche, maybe it's not so crucial. One thing's for sure: what's true about structured data now will probably be different in 6 months, so whatever you do now will need to be reviewed over time, just like most anything else related to SEO. There's always something new and always something changing. That's why we love it, right?
Dana
I totally agree with Lesley. You asked why so few sites might be using them. I think it's a question of knowledge and implementation. Unless you are extremely comfortable with HTML and XML, schema.org markup can be very intimidating. It also doesn't help that Google is choosing to display only certain elements of structured data right now, and even then, it's sporadic. In fact, recently, Google went from displaying a lot of authorship information to displaying less. This is all still in experimental stages. That being said, will it go away? i.e. Is it just a search fad?
My answer is: "no," structured data (also referred to as "schema," "microdata," "rich snippets," and "microformats" ) will only become more and more important until search engine bots get better at understanding different elements of a Web page, for example, understanding that there might be a MSRP price, an "our price" and a "regular price" simply by crawling the data. Right now, bots aren't very good at that because if they crawl three prices, all they are understanding is a very basic "$10.00" - "$8.00" - "$7.00" - but they won't have any idea how those three prices relate to each other without schema.org markup. Or, as another example, especially for e-commerce, a product page might have many images on it. How does a bot know which image on the page is the main product image? Bots aren't quite smart enough to know this because they can't "see" a page like a human sees a page...they can only crawl code.
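To make the price example concrete, here is a rough sketch of the kind of schema.org Product markup that spells those relationships out for a bot, generated as JSON-LD from a Python dict; the product name, price, and image URL are all made up:

```python
# Sketch: emit schema.org Product markup as JSON-LD so a crawler can tell
# which price is the actual offer price and which image is the main product
# image. All values below are hypothetical.
import json

product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Handheld Microphone",
    "image": "https://www.example.com/images/mic-main.jpg",  # the main image
    "offers": {
        "@type": "Offer",
        "price": "8.00",          # the price actually charged ("our price")
        "priceCurrency": "USD",
    },
}

print('<script type="application/ld+json">')
print(json.dumps(product, indent=2))
print("</script>")
```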
But, fear not! There is help! Google introduced a data highlighter in Google Webmaster Tools sometime last year. If you have a smaller, simpler site, you can use this tool to mark up your pages with schema without knowing a lick of code. Here's how to do it: http://www.danatanseo.com/2013/08/google-finally-demystifies-structured.html
Hope this is helpful!
I liked #2 because it was the only one that really communicated to me that the site was about design services.
Very interesting Travis. I hadn't even thought to take a look at some competitor's pdfs to see what they are looking like in some of the same tools. Yes, this is something we need to keep testing to see if we can figure out if going through the trouble of inserting links back to our domain is a worthwhile project.
Hi All,
I found one other discussion about the subject of PDFs and passing of PageRank here: http://moz.com/community/q/will-a-pdf-pass-pagerank But that thread didn't answer my question, so I am posting it here.
This PDF: http://www.ccisolutions.com/jsp/pdf/YAM-EMX_SERIES.PDF is reported by GWT to have 38 links coming from 8 unique domains. I checked the domains and some of them are high-quality relevant sites. Here's the list:
Domains and Number of Links
prodiscjockeyequipment.com 9
decaturilmetalbuildings.com 9
timberlinesteelbuildings.com 6
jaymixer.com 4
panelsteelbuilding.com 4
steelbuildingsguide.net 3
freedocumentsearch.com 2
freedocument.net 1
However, when I plug the URL for this PDF into OSE, it reports no links and a Page Authority of only "1". This is not a new page. This is a really old page.
In addition to that, when I check the PageRank of this URL, the PageRank is "nil" - not even "0" - I'm currently working on adding links back to our main site from within our PDFs, but I'm not sure how worthwhile this is if the PDFs aren't being allocated any authority from the pages already linking to them. Thoughts? Comments? Suggestions? Thanks all!
You might try the good folks at http://www.goinflow.com/ - Everett Sizemore in particular. He gave a very good mozinar on eCommerce SEO here: http://moz.com/webinars/ecommerce-seo-fix-and-avoid-common-issues This is how I became familiar with him.
There are also two guys at http://www.melen.net, Matthew Prepis and Oleg Korneitchouk (Oleg is active here in the Moz forum) who performed a high level audit for us that was top notch. They were striving to win our business and are still in the running. They are excellent.
Either of these two might come in quite a bit lower than a company like RKG, depending on the scope of the project of course.
I totally agree with Lesley. And here's the thing, the fee for what you need (around $3,000 a month) is pretty realistic...however, the plan of action is not. I totally disagree with the approach. I don't think that you are being communicated with effectively. I am hoping that you haven't already signed a contract and paid these folks. There are better SEOs who can and will provide you better advice and action than what you are being given. If you are stuck because you already have paid, pass along the info you're getting here and continue to post as they make new recommendations. Don't let them do anything to your business that makes you think "Hmmm, I don't know if that's the right thing." - Ask a ton of questions. You are the client. They have an obligation to serve YOU.
I completely agree with this. Well researched and reputable directories are still worthwhile. As long as they are part of a mix that includes some link diversity, I think they can be a very valuable way to jump start a brand new site.
I think perhaps the intention was that they didn't want these pages to be indexed. This makes sense for certain things/links from a homepage, like "My Shopping Cart." But honestly it looks like a lame attempt at PageRank sculpting, which Google has been wise to for many years. The two "nofollow" links that concern me the most are the "Site Map" link and the link to their blog. Why in God's good name wouldn't you want a bot to follow links leading to your sitemap and blog? That's nonsensical.
Regarding the other "nofollow" attributes, those aren't necessary either. Get rid of them all. Matt Cutts has said on several occasions that he sees no practical reason why any Web site would want to "nofollow" any internal page. Here's a video where he says that: http://youtu.be/86GHCVRReJs
So, bottom line, "If it's a link within your site to another page within your site, I would leave the 'nofollow' off."
There you have it. I hope that helps!
Dana