Questions created by danatanseo
-
Question regarding Site and URL structure + Faceted Navigation (Endeca)
We are currently implementing the SEO module for Endeca faceted navigation. Our development team has proposed structuring URLs this way: Main category example: https://www.pens.com/c/pens-and-writing/ As soon as a facet is selected, for example "blue ink," the URL path would change to https://www.pens.com/m/pens-and-writing/blue-ink/_/Nvalue (the "N" value is a unique identifier generated by Endeca that determines which products from the catalog are served as a match for the selected facet; it is the same every time that facet is selected, not unique per user). My gut instinct says that this change from "/c/" to "/m/" might be very problematic in terms of search engines understanding that /m/pens-and-writing/blue-ink/ is part of the /c/pens-and-writing/ category. Wouldn't this also potentially pose a problem for the flow of internal link equity? Has anyone ever seen a successful implementation using this methodology?
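For context, here's a rough sketch (Python with requests and BeautifulSoup; the facet URL and N value below are hypothetical placeholders) of the kind of check I'd want to run: does each /m/ facet URL declare a rel=canonical pointing back under its /c/ parent?

```python
# Hedged sketch: fetch each Endeca facet URL and report its rel=canonical.
# The facet URL and "N" value below are made-up placeholders.
import requests
from bs4 import BeautifulSoup

facet_urls = [
    "https://www.pens.com/m/pens-and-writing/blue-ink/_/N-12345",  # hypothetical N value
]

for url in facet_urls:
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    tag = soup.find("link", rel="canonical")
    canonical = tag["href"] if tag else None
    status = "OK" if canonical and "/c/" in canonical else "CHECK"
    print(f"{url}\n  canonical: {canonical} [{status}]")
```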
Intermediate & Advanced SEO | | danatanseo0 -
Are you looking for an SEO job? National Pen (Pens.com) is hiring!
Hi all, We have an opening for a Senior SEO Associate. Would love to hire someone in the Moz Community. Here are the details: Sr SEO Associate https://g.co/kgs/Ucwzp7 Cheers, Dana
Jobs and Opportunities | | danatanseo0 -
Is this blog running on the Genesis Framework?
Hi all, WordPress isn't my area of specialty in terms of themes and identifying what a blog might currently be using. Here's the link: http://www.pens.com/blog I've had one developer tell me this blog is on the Genesis framework and another one told me it is not. Can someone weigh in here and also give me some tips on how to know one way or the other? If it is not on the Genesis Framework, can you provide any helpful links/tutorials on how to get this blog onto the Genesis Framework? We want to be able to use Yoast SEO and apparently our current theme will not allow us to do so. Thanks in advance! Dana
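For anyone else trying to answer this, here's a quick sketch of how one might check for common Genesis fingerprints in the page source (Python with requests; these markers are typical of Genesis sites but not guaranteed, so a miss is inconclusive):

```python
# Quick-and-dirty sketch for spotting Genesis Framework fingerprints in a
# page's raw HTML. These markers are common but can be hidden by a theme,
# so treat a miss as inconclusive rather than a definite "no".
import requests

url = "http://www.pens.com/blog"
html = requests.get(url, timeout=10).text.lower()

markers = [
    "/wp-content/themes/genesis/",  # parent framework asset path
    'content="genesis',             # <meta name="generator" content="Genesis ...">
    "studiopress",                  # Genesis vendor credit, often in the footer
]

for marker in markers:
    print(f"{'FOUND ' if marker in html else 'absent'}  {marker}")
```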
On-Page Optimization | | danatanseo0 -
Why does OSE show different link equity for homepage with and without trailing slash?
Hi all, I am working with a developer who believes OSE (Open Site Explorer) is flawed. His reasoning is that OSE shows completely different link equity and domain authority for these two URLs [company name is purposely withheld]: http://www.mydomain.com/us/ and http://www.mydomain.com/us NOTE: http://www.mydomain.com/us has been properly 301-redirecting to http://www.mydomain.com/us/ for years. The developer's argument is that the URL that is being 301-redirected shouldn't have any link equity at all; therefore, OSE is flawed. Here is the response I would like to give him. Please feel free to poke holes, provide feedback, or flat out correct me if I am wrong: Dear inquisitive developer, Any URL that is being linked to by another site that has link equity will get link equity passed to it. This is true even if the URL is a page that doesn't exist. What it can't do (if it doesn't exist) is then pass that link equity on to another page. Moz's OSE is showing separate link equity for the two versions of this URL because other sites have linked to both versions. The fact that one 301-redirects to the other is good and ensures that link equity from one is passing to the preferred URL. The 301-redirect does not, however, remove link equity from the first URL, but rather passes the existing link equity from one to the other. Consequently, Moz's OSE is not flawed, but rather is displaying the link equity of one URL that is benefiting the other via a 301-redirect. Sincerely, Dana
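Here's a minimal sketch (Python's requests library; the domain is a placeholder, as in the question) for confirming that the 301 behaves as described:

```python
# Minimal sketch: confirm the slashless URL 301s to the trailing-slash URL.
# The domain is a placeholder, matching the withheld name in the question.
import requests

for url in ("http://www.mydomain.com/us", "http://www.mydomain.com/us/"):
    r = requests.head(url, allow_redirects=False, timeout=10)
    print(url, "->", r.status_code, r.headers.get("Location", ""))

# Expected: the slashless URL returns 301 with a Location header pointing
# at the trailing-slash version, which itself returns 200.
```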
Link Explorer | | danatanseo3 -
Bingbot appears to be crawling a large site extremely frequently?
Hi All! What constitutes a normal crawl rate for daily bingbot server requests for large sites? Are any of you noticing spikes in Bingbot crawl activity? I did find a "mildly" useful thread at Black Hat World containing this quote: "The reason BingBot seems to be terrorizing your site is because of your site's architecture; it has to be misaligned. If you are like most people, you paid no attention to setting up your website to avoid this glitch. In the article referenced by Oxonbeef, the author's issue was that he was engaging in dynamic linking, which pretty much put the BingBot in a constant loop. You may have the same type or similar issue particularly if you set up a WP blog without setting the parameters for noindex from the get go." However, my gut instinct says this isn't it and that it's more likely that someone or something is spoofing bingbot. I'd love to hear what you guys think! Dana
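One way to test the spoofing theory: Bing recommends verifying bingbot via reverse DNS. A quick sketch (Python standard library; the IP is a placeholder you'd pull from your own logs):

```python
# Sketch of the reverse-DNS verification Bing recommends: a genuine bingbot
# IP reverse-resolves to a *.search.msn.com hostname, and that hostname
# forward-resolves back to the same IP. Spoofed "bingbot" traffic fails this.
import socket

def is_real_bingbot(ip):
    try:
        host = socket.gethostbyaddr(ip)[0]  # reverse DNS lookup
    except socket.herror:
        return False
    if not host.endswith(".search.msn.com"):
        return False
    try:
        return ip in socket.gethostbyname_ex(host)[2]  # forward-confirm
    except socket.gaierror:
        return False

print(is_real_bingbot("157.55.39.1"))  # placeholder IP from your server logs
```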
Technical SEO | | danatanseo1 -
Is this CSS solution to faceted navigation a problem for SEO?
Hi guys. Take a look at the navigation on this page from our DEV site: http://wwwdev.ccisolutions.com/StoreFront/category/handheld-microphones While the CSS "trick" implemented by our IT Director does allow a visitor to sort products based on more than one criterion, my gut instinct says this is very bad for SEO. Here are the immediate issues I see: The URL doesn't change as the filter criteria change. At the very least this is a lost opportunity for ranking on longer-tail terms. I also think it could make us vulnerable to a Panda penalty because many of the combinations produce zero results, thus returning a page without content under the original URL. Not only could this create hundreds of pages with no content, there would be duplicates of those zero-content pages as well. Usability - The "Sort by" option in the drop-down (upper right of the page) doesn't work in conjunction with the left nav filters. In other words, if you filter down to 5 items and then try to arrange them by price high to low, the "Sort" will take precedence, remove the filter and serve up a result that is all products in that category sorted high to low (and the filter options completely disappear), AND the URL changes to this: http://wwwdev.ccisolutions.com/StoreFront/category/IAFDispatcher regardless of what sort was chosen...(this is a whole separate problem, I realize, and not specifically what I'm trying to address here). Aside from these two big problems, are there any other issues you see that arise out of trying to use CSS to create product filters in this way? I am trying to build a case for why I believe it should not be implemented this way. Conversely, if you see this as a possible implementation that could work if tweaked a bit, any advice you are willing to share would be greatly appreciated. Thanks! Thank you to Travis for pointing out that the link wasn't accessible. For anyone willing to take a closer look, we can unblock the URL based on your IP address. If you'd be kind enough to send me your IP via private message, I can have my IT Director unblock it so you can view the page. Thanks!
Web Design | | danatanseo0 -
Why would our server return a 301 status code when Googlebot visits from one IP, but a 200 from a different IP?
I have begun a daily process of analyzing a site's Web server log files and have noticed something that seems odd. There are several IP addresses from which Googlebot crawls where our server returns a 301 status code for every request, consistently, day after day. In nearly all cases, these are not URLs that should 301. When Googlebot visits from other IP addresses, the exact same pages are returned with a 200 status code. Is this normal? If so, why? If not, why not? I am concerned that our server returning an inaccurate status code is keeping the site from being crawled as effectively and as often as it might be otherwise. Thanks guys!
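To make the pattern concrete, here's a rough log-parsing sketch (Python standard library, assuming Apache common/combined log format) that tallies response codes per claimed-Googlebot IP; pairing it with a reverse-DNS check on each IP would also rule out spoofing:

```python
# Rough sketch: tally HTTP status codes per IP for requests whose user-agent
# claims to be Googlebot. Assumes Apache common/combined log format and a
# local file named access.log (both placeholders for your setup).
import re
from collections import defaultdict

pattern = re.compile(r'^(\S+) .*?"[A-Z]+ (\S+) [^"]*" (\d{3})')
counts = defaultdict(lambda: defaultdict(int))

with open("access.log") as f:
    for line in f:
        if "Googlebot" not in line:
            continue
        m = pattern.match(line)
        if m:
            ip, path, status = m.groups()
            counts[ip][status] += 1

# IPs that show nothing but 301s will stand out immediately.
for ip, statuses in sorted(counts.items()):
    print(ip, dict(statuses))
```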
Intermediate & Advanced SEO | | danatanseo0 -
Googlebot soon to be executing javascript - Should I change my robots.txt?
This question came to mind as I was pursuing an unrelated issue and reviewing a site's robots.txt file. Currently this is a line item in the file: Disallow: https://* According to a recent post in the Google Webmasters Central Blog: [http://googlewebmastercentral.blogspot.com/2014/05/understanding-web-pages-better.html](http://googlewebmastercentral.blogspot.com/2014/05/understanding-web-pages-better.html "Understanding Web Pages Better") Googlebot is getting much closer to being able to properly render JavaScript. Pardon some ignorance on my part because I am not a developer, but wouldn't this require that Googlebot be able to execute JavaScript? If so, I am concerned that disallowing Googlebot from the https:// versions of our pages could interfere with crawling and indexation because as soon as an end user clicks the "checkout" button on our view-cart page, everything on the site flips to https:// - If this were disallowed, then would Googlebot stop crawling at that point and simply leave because all pages were now https:// ??? Or am I just waaayyyy overthinking it?...wouldn't be the first time! Thanks all!
Algorithm Updates | | danatanseo0 -
How do you feel when Moz marks one of your questions as "answered?"
Hi everyone, This is not meant to be snarky at all, so I just want to preface my question with that. So, since the new re-branded Moz rolled out last year, I'm sure many of you have noticed that if you ask a question and it is answered by a Moz associate, your question is marked as "answered." I'm sorry, but I don't like this. Here's why, I'm the one who asked the question. I should be the one who determines if the answer was adequate for me, or if it didn't sufficiently answer my question. This is particularly true when my question doesn't have to do with a customer service issue or a Moz tool question. If I ask a question about SEO, Content, CRO, marketing or any other subject, I feel like it should be me and only me who determines whether or not I feel like my question is answered. In addition to this, Moz is actually depriving themselves of useful UGC by shutting down questions in this way. How? Because when the rest of us who frequent the Q & A see a question that's already been marked as "answered" we tend not to open it, read it and respond, because we think that person has already gotten what they needed....when in fact, it could be that a Moz associate has jumped in and marked their question as answered when it really wasn't. Consequently, we all miss out. I propose/move that Moz associates can only mark questions as "answered" when they pertain directly to Q & A about Moz tools, service and support. All other questions must be marked as "answered" only by the asker or closed as "answered" after they have been dormant for 6 months or more. Can I get a second (motion) ?
Moz Bar | | danatanseo4 -
GWT shows 38 external links from 8 domains to this PDF - But it shows no links and no authority in OSE
Hi All, I found one other discussion about the subject of PDFs and passing of PageRank here: http://moz.com/community/q/will-a-pdf-pass-pagerank But this thread didn't answer my question, so I am posting it here. This PDF: http://www.ccisolutions.com/jsp/pdf/YAM-EMX_SERIES.PDF is reported by GWT to have 38 links coming from 8 unique domains. I checked the domains and some of them are high-quality, relevant sites. Here's the list (domain and number of links):
prodiscjockeyequipment.com: 9
decaturilmetalbuildings.com: 9
timberlinesteelbuildings.com: 6
jaymixer.com: 4
panelsteelbuilding.com: 4
steelbuildingsguide.net: 3
freedocumentsearch.com: 2
freedocument.net: 1
However, when I plug the URL for this PDF into OSE, it reports no links and a Page Authority of only "1". This is not a new page. This is a really old page. In addition to that, when I check the PageRank of this URL, the PageRank is "nil" - not even "0" - I'm currently working on adding links back to our main site from within our PDFs, but I'm not sure how worthwhile this is if the PDFs aren't being allocated any authority from the pages already linking to them. Thoughts? Comments? Suggestions? Thanks all!
Technical SEO | | danatanseo0 -
Specific question about pagination prompted by Adam Audette's Presentation at RKG Summit
This question is prompted by something Adam Audette said in this excellent presentation: http://www.rimmkaufman.com/blog/top-5-seo-conundrums/08062012/ First, I will lay out the issues: 1. All of our paginated pages have the same URL. To view this in action, go here: http://www.ccisolutions.com/StoreFront/category/audio-technica , scroll down to the bottom of the page and click "Next" - look at the URL. The URL is: http://www.ccisolutions.com/StoreFront/IAFDispatcher, and for every page after it, the same URL. 2. All of the paginated pages with non-unique URLs have canonical tags referencing the first page of the paginated series. 3. http://www.ccisolutions.com/StoreFront/IAFDispatcher has been instructed to be neither crawled nor indexed by Google. Now, on to what Adam said in his presentation: At about minute 24 Adam begins talking about pagination. At about 27:48 in the video, he is discussing the first of three ways to properly deal with pagination issues. He says [I am somewhat paraphrasing]: "Pages 2-N should have self-referencing canonical tags - Pages 2-N should all have their own unique URLs, titles and meta descriptions...The key with this is you want deeper pages to get crawled and all the products on there to get crawled too. The problem that we see a lot is, say you have ten pages, each one using rel canonical pointing back to page 1, and when that happens, the products or items on those deep pages don't get crawled...because the rel canonical tag is sort of like a 301 and basically says 'Okay, this page is actually that page.' All the items and products on this deeper page don't get the love." Before I get to my question, I'll just throw out there that we are planning to fix the pagination issue by opting for the "View All" method, which Adam suggests as the second of three options in this video, so that fix is coming. My question is this: It seems based on what Adam said (and our current abysmal state for pagination) that the products on our paginated pages aren't being crawled or indexed. However, our products are all indexed in Google. Is this because we are submitting a sitemap? Even so, are we missing out on internal linking (authority flow) and Google love because Googlebot is finding way more products in our sitemap than what it is seeing on the site? (or missing out in other ways?) We experience a lot of volatility in our rankings where we rank extremely well for a set of products for a long time, and then disappear. Then something else will rank well for a while, and disappear. I am wondering if this issue is a major contributing factor. Oh, and did I mention that our sort feature sorts the products and imposes that new order for all subsequent visitors? It works like this: If I go to that same Audio-Technica page and sort the 125+ resulting products by price, they will sort by price...but not just for me, for anyone who subsequently visits that page...until someone else re-sorts it some other way. So if we merchandise the order to be XYZ, and a visitor comes and sorts it ZYX and then Googlebot crawls, Google would potentially see entirely different products on the first page of the series than the default order marketing intended to be presented there....sigh. Additional thoughts, comments, sympathy cards and flowers most welcome. 🙂 Thanks all!
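As a companion to the question, here's a rough sketch (Python, requests + BeautifulSoup) that walks the series by following the "Next" link and prints each page's URL and canonical tag. On a healthy series every page has a unique URL; here, every hop should collapse to the single IAFDispatcher URL, which illustrates the problem. (Assumes the "Next" control is a plain anchor whose text contains "Next"; a JS- or POST-driven control won't be followed.)

```python
# Rough sketch: follow "Next" links through a paginated series and report
# each page's URL and rel=canonical. Stops when a URL repeats or no "Next"
# anchor is found. Assumes the control is a plain <a> containing "Next".
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

url = "http://www.ccisolutions.com/StoreFront/category/audio-technica"
seen = []

while url and url not in seen:
    seen.append(url)
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    canonical = soup.find("link", rel="canonical")
    print(url, "| canonical:", canonical["href"] if canonical else "none")
    nxt = soup.find("a", string=lambda s: s and "next" in s.lower())
    url = urljoin(url, nxt["href"]) if nxt and nxt.get("href") else None
```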
Technical SEO | | danatanseo0 -
Paid links that are passing link equity from a blog?
We have a well-known blogger in our industry with whom we've had a long-standing relationship. We've had inbound links from his blog for many, many years. Today I noticed that we have a banner ad running on all pages of his blog under a heading that says "Sponsors." He has dedicated an entire page of his site to full disclosure of all advertising. However, all of the links on his site pointing to us are passing link equity. To my knowledge they've been this way ever since they were first established years ago. I am fairly certain this fellow, with whom we have an excellent relationship, neither knows nor cares what a "nofollow" attribute is. I am afraid that if I contact him with a request that he add "nofollow" attributes to all of our links, it will damage our relationship by creating friction. To someone who knows nothing and cares nothing about SEO, asking them to put a "nofollow" on a link could either seem like a technical request they don't know how to handle, or something even potentially "shady" on our part. My question is this: Considering how long these links have been there, is this even worth worrying about? Should I just forget about it and move on to bigger fish, or is this a potentially serious enough violation of Google Webmaster guidelines that we should pursue getting "nofollow" attributes added to those links? I should add that we haven't received any "unnatural" link notifications from Google, ever, and haven't ever engaged in any questionable link-building tactics.
Technical SEO | | danatanseo1 -
Please help me articulate why broken pagination is bad for SEO...
Hi fellow Mozzers. I am in need of assistance. Pagination has been broken for years on the website for which I do in-house SEO. Here is an example: http://www.ccisolutions.com/StoreFront/category/audio-technica This category has 122 products, broken down to display 24 at a time across paginated results. However, you will notice that once you enter pagination, all of the URLs become this: http://www.ccisolutions.com/StoreFront/IAFDispatcher Even if you hit "Previous" or "Next" or your browser back button, the URL stays: http://www.ccisolutions.com/StoreFront/IAFDispatcher I have tried to explain to stakeholders that this is a lost opportunity. If a user or Google were to find that a particular paginated result contained a unique combination of products more relevant to a searcher's query than the main page in the series, Google couldn't send the searcher to that page because it doesn't have a unique URL. In addition, this non-unique URL is most likely bottlenecking the internal flow of page authority. This is not to mention that 38% of our traffic in Google Analytics is being reported as coming from this page...a problem because this page could be one of several hundred on the site and we have no idea which one a visitor was actually looking at. How do I articulate the magnitude of this problem for SEO? Is there a way I can easily put it in dollars and cents for a business person who really thinks SEOs are a bunch of snake oil salesmen in the first place? Does anyone have any before-and-after case studies or quantifiable data that they would be willing to share with me (even privately) that can help me articulate better how important it is to address this problem? Even more, what can we hope to get out of fixing it? More traffic, more revenue, higher conversions? Can anyone help me go to the mat with a solid argument as to why pagination should be addressed?
Web Design | | danatanseo0 -
Can Googlebot crawl the content on this page?
Hi all, I've read Google's posts about Ajax and JavaScript (https://support.google.com/webmasters/answer/174992?hl=en) and also this post: http://moz.com/ugc/can-google-really-access-content-in-javascript-really. I am trying to evaluate whether the content on this page, http://www.vwarcher.com/CustomerReviews, is crawlable by Googlebot. It appears not to be. I perused the sitemap and don't see any ugly Ajax URLs included as Google suggests doing. Also, the page is definitely indexed, but it appears the content is only indexed via its original sources (Yahoo!, Citysearch, Google+, etc.). I understand why they are using this dynamic content, because it looks nice to an end user and requires little to no maintenance. But is it providing them any SEO benefit? It appears to me that it would be far better to take these reviews and simply build them into HTML. Thoughts?
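One low-tech way to approximate what a non-JS crawler sees (Python's requests; the search phrase is a placeholder you'd copy from a review visible in your browser):

```python
# Sketch: fetch the raw HTML and check whether review text that is visible
# in a browser actually exists in the source. If it doesn't, the reviews are
# injected client-side and may not be crawlable as part of this page.
import requests

url = "http://www.vwarcher.com/CustomerReviews"
html = requests.get(url, timeout=10).text

snippet = "great service"  # placeholder: paste a phrase from a visible review
print("in raw HTML" if snippet.lower() in html.lower() else "NOT in raw HTML")
```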
Technical SEO | | danatanseo0 -
Can anyone recommend a tool that will identify unused and duplicate CSS across an entire site?
Hi all, So far I have found this one: http://unused-css.com/ It looks like it identifies unused CSS, but perhaps not duplicates? It also has a 5,000-page limit and our site is 8,000+ pages....so we really need something that can handle a site larger than their limit. I do have Screaming Frog. Is there a way to use Screaming Frog to locate unused and duplicate CSS? Any recommendations and/or tips would be great. I am also aware of the Firefox extensions, but to my knowledge they will only do one page at a time? Thanks!
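In case it helps anyone answering: here's a very rough DIY sketch (Python, regex-based, so it misses JS-added classes and complex selectors; the URLs are placeholders) that diffs class names defined in a stylesheet against classes actually used on a list of pages. Results are candidates to investigate, not verdicts.

```python
# Very rough sketch: flag potentially unused CSS classes by diffing class
# names defined in a stylesheet against classes used across sample pages.
# Regex-based, so dynamically added (JS) classes and complex selectors are
# missed -- treat the output as a candidate list, not a verdict.
import re
import requests

css = requests.get("http://example.com/styles.css", timeout=10).text  # placeholder stylesheet URL
defined = set(re.findall(r"\.([A-Za-z_][\w-]*)", css))

used = set()
for page in ["http://example.com/", "http://example.com/about"]:  # placeholder page list
    html = requests.get(page, timeout=10).text
    for classes in re.findall(r'class="([^"]+)"', html):
        used.update(classes.split())

print("Possibly unused:", sorted(defined - used))
```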
Web Design | | danatanseo0 -
How complicated would it be to optimize our current site for the Safari browser?
Hi all! Okay, here's the scoop. 33% of our site visitors use Safari. 18% of our visitors are on either an iPad or iPhone. According to Google Analytics, our average page load time for visitors using Safari is 411% higher than our site average of 3.8 seconds. So yes, average page load time in Safari is over 20 seconds...totally unacceptable, especially considering the large percentage of traffic using it. While I understand that there are some parameters beyond our control, it is in our own best interest to try to optimize our site for Safari. We've got to do better than 20 seconds. As you might have guessed, it's also killing conversion rates on visits from that browser. While every other browser posted double-digit improvements in conversion rates over the last several months, the conversion rate for Safari visitors is down 36%...translating into tens of thousands in lost revenue. Question for anyone out there gifted in Web design and particularly Web dev: Do you think that it's possible/reasonable to attempt to "fix" our current site, which sits on an ancient platform with ancient code, or is this just not realistic? Would a complete redesign/replatform be the more realistic (and financially sound) way to go? Any insights, experiences and recommendations would be greatly appreciated. If you're someone interested in spec'ing out the project and giving us a cost estimate, please private message me. Thanks so much!
Conversion Rate Optimization | | danatanseo1 -
Can You Suggest 3 Books to Help Me with My 2014 Seth Godin Pick 3 Project?
Yesterday in Seth's blog he gave a wonderful suggestion for making 2014 a remarkable year. He suggested that you pick 3 people who have influence over your life/work and select three books to buy and give to them to read, and then discuss with you. Here's how he describes it: "Identify three books that challenge your status quo, business books that outline a new attitude/approach or strategy, or perhaps fiction or non-fiction that challenges you. Books you've read that you need them to read. Buy the three books for each of the three people, and ask them each to read all three over the holiday break." - http://sethgodin.typepad.com/seths_blog/2013/12/pick-three.html Don't worry about whether or not I've read the books you are suggesting. If I choose one, I'll read it first before asking someone else to read it. Here are some of my ideas: The End of Business as Usual - Brian Solis [currently reading] Don't Make Me Think - Steve Krug The New Rules of Marketing and PR - David Meerman Scott Positioning: The Battle for Your Mind - Ries and Trout Influence - Robert Cialdini Engage - Brian Solis The Dip - Seth Godin Keep It Simple - Siegel & Etzkorn Buyology - Lindstrom The Long Tail - Chris Anderson FYI - The three people I've identified to share books with are my CEO (to whom I directly report), our Marketing Director (who is a peer, but who controls most of the marketing budget), and the Vice President of Retail Sales (he is also one of 5 co-owners of the business and a major stakeholder). The books you suggest can be the same for all three or different for all three. I think the major challenges we are facing in 2014 are agility, branding, and redefining or abandoning old business practices that are costly, time-wasting, inefficient, or all of the above. Thanks everyone!
Industry News | | danatanseo1 -
What would be considered a bad ratio to determine Index Bloat?
I am using Annie Cushing's most excellent site audit checklist from Google Docs. My question concerns Index Bloat because it is mentioned in her "Index" tab. We have 6,595 indexed pages and only 4,226 of those pages have received 1 or more visits since January 1, 2013. Is this an acceptable ratio? If not, why not, and what would be an acceptable ratio? I understand the basic concept that "dissipation of link juice and constrained crawl budget can have a significant impact on SEO traffic." [Thanks to Reid Bandremer: http://www.lunametrics.com/blog/2013/04/08/fifteen-minute-seo-health-check/] If we make this an action item, I'd like to have some idea how to prioritize it compared to other things that must be done. Thanks all!
Technical SEO | | danatanseo1 -
What are the SEO strengths & weaknesses of Magnolia CMS?
We are considering upgrading our Web eCommerce platform. Our current provider has just implemented Magnolia CMS into their Web store package. Do any of you have experience with this CMS and can you share your experiences and thoughts on whether or not it has any implications for SEO? Thanks!
Technical SEO | | danatanseo0 -
Can Googlebot read the content on our homepage?
Just for fun I ran our homepage through this tool: http://www.webmaster-toolkit.com/search-engine-simulator.shtml This spider seems to detect little to no content on our homepage. Interior pages seem to be just fine. I think this tool is pretty old. Does anyone here have a take on whether or not it is reliable? Should I just ignore the fact that it can't seem to spider our home page? Thanks!
Technical SEO | | danatanseo0 -
How is Google crawling and indexing this directory listing?
We have three Directory Listing pages that are being indexed by Google: http://www.ccisolutions.com/StoreFront/jsp/ http://www.ccisolutions.com/StoreFront/jsp/html/ http://www.ccisolutions.com/StoreFront/jsp/pdf/ How and why is Googlebot crawling and indexing these pages? Nothing else links to them (although /jsp/html/ and /jsp/pdf/ both link back to /jsp/). They aren't disallowed in our robots.txt file and I understand that this could be why. If we add them to our robots.txt file and disallow them, will this prevent Googlebot from crawling and indexing those Directory Listing pages without prohibiting it from crawling and indexing the content that resides there, which is used to populate pages on our site? Having these pages indexed in Google is causing a myriad of issues, not the least of which is duplicate content. For example, this file CCI-SALES-STAFF.HTML (which appears on the Directory Listing referenced above - http://www.ccisolutions.com/StoreFront/jsp/html/) clicks through to this Web page: http://www.ccisolutions.com/StoreFront/jsp/html/CCI-SALES-STAFF.HTML This page is indexed in Google and we don't want it to be. But so is the actual page where we intended the content contained in that file to display: http://www.ccisolutions.com/StoreFront/category/meet-our-sales-staff As you can see, this results in duplicate content problems. Is there a way to disallow Googlebot from crawling that Directory Listing page, and, provided that we have this URL in our sitemap: http://www.ccisolutions.com/StoreFront/category/meet-our-sales-staff, solve the duplicate content issue as a result? For example: Disallow: /StoreFront/jsp/ Disallow: /StoreFront/jsp/html/ Disallow: /StoreFront/jsp/pdf/ Can we do this without risking blocking Googlebot from content we do want crawled and indexed? Many thanks in advance for any and all help on this one!
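Before deploying, the proposed rules can be sanity-checked with Python's urllib.robotparser. A minimal sketch (note that Disallow: /StoreFront/jsp/ already covers the /html/ and /pdf/ subdirectories by prefix):

```python
# Minimal sketch: verify that the proposed Disallow rules block the directory
# listings without blocking the category page we want crawled and indexed.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.parse("""User-agent: *
Disallow: /StoreFront/jsp/
""".splitlines())

tests = [
    "http://www.ccisolutions.com/StoreFront/jsp/",
    "http://www.ccisolutions.com/StoreFront/jsp/html/CCI-SALES-STAFF.HTML",
    "http://www.ccisolutions.com/StoreFront/category/meet-our-sales-staff",
]
for url in tests:
    print("blocked" if not rp.can_fetch("Googlebot", url) else "allowed", url)
```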
Intermediate & Advanced SEO | | danatanseo0 -
Best Places to Post SEO/Marketing Jobs?
I have several colleagues in the industry (and some fellow marketers) who have asked me where the best places are to look for and post SEO job opportunities. I personally like InBound.org and LinkedIn. Where do you recommend marketers look for job opportunities? D
Industry News | | danatanseo1 -
Is the TTFB for different locations and browsers irrelevant if you are self-hosting?
Please forgive my ignorance on this subject. I have little to no experience with the technical aspects of setting up and running a server. Here is the scenario: We are self-hosted on an Apache server. I have been on the warpath to improve page load speed since the beginning of the year. I have been on this warpath not so much for SEO, but for conversion rate optimization. I recently read the Moz post "How Website Speed Actually Impacts Search Rankings" and was fascinated by the research regarding TTFB. I forwarded the post to my CEO, who promptly sent me back a contradictory post from Cloudflare on the same topic. Ilya Grigorik published a post on Google+ that called Cloudflare's experiment "silly" and said that "TTFB absolutely does matter." I proceeded to begin gathering information on our site's TTFB using data provided by http://webpagetest.org. I documented TTFB for every location and browser in an effort to show that we needed to improve. When I presented this info to my CEO (I am in-house) and IT Director, they both shook their heads, completely dismissed the data and said it was irrelevant because it was measuring something we couldn't control. Ignorant as I am, it seems that Ilya Grigorik, Google's own Web Dev Advocate, says it absolutely is something that can be controlled, or at least optimized, if you know what you are doing. Can any of you super smart Mozzers help me put the words together to express that TTFB from different locations and for different browsers is something worth paying attention to? Or, perhaps they are right, and it's information I should ignore? Thanks in advance for any and all suggestions! Dana
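For anyone who wants to see TTFB first-hand, here's a minimal sketch using only the Python standard library (the host is a placeholder). getresponse() returns once the status line and headers arrive, so the elapsed time approximates DNS + connect + time-to-first-byte from wherever the script runs, which is exactly why webpagetest's per-location numbers differ:

```python
# Minimal TTFB sketch (stdlib only). The measured time includes DNS lookup
# and TCP connect plus server think-time, since HTTPConnection connects
# lazily on the first request -- close enough for comparing locations.
import http.client
import time

host, path = "www.example.com", "/"  # placeholder host and path
conn = http.client.HTTPConnection(host, timeout=10)

start = time.time()
conn.request("GET", path)
conn.getresponse().read(1)  # force at least the first byte of the body
print(f"~TTFB for http://{host}{path}: {time.time() - start:.3f}s")
conn.close()
```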
Intermediate & Advanced SEO | | danatanseo0 -
How reliable is the link depth info from Xenu?
Hi everyone! I searched existing Q & A and couldn't find an answer to this question. Here is the scenario: The site is: http://www.ccisolutions.com I am seeing instances of category pages being identified as 8 levels deep. For example, this one: http://www.ccisolutions.com/StoreFront/category/B8I This URL redirects to http://www.ccisolutions.com/StoreFront/category/headphones - which Xenu identifies as being only 1 level deep. Xenu does not seem to be recognizing that the first URL 301-redirects to the second. Is this normal for the way Xenu typically reports? If so, why is the first URL indicated to be so much further down in the structure? Is this an indication of site architecture problems? Or is it an indication of problems with how our 301-redirects are being handled? Both? Thanks in advance for your thoughts!
Intermediate & Advanced SEO | | danatanseo0 -
Question about the Playback Locations report in YouTube Analytics
We have several hundred YouTube videos. In the Playback Locations report in YouTube Analytics, I can see a list of sites where people have viewed our videos. Some of the sites listed are competitors. The report does not show the URL of the page where the video is embedded. Is there a way to find this information? I have already tried using Google to search for the URL, but I'm thinking this isn't going to search the source code, and the video URL isn't going to appear on the page anywhere. Any ideas? Thanks!
Competitive Research | | danatanseo0 -
Why did Moz take the fun video out of its Robots.txt file?
I loved this. The new robots.txt file is a big yawn. I miss the video. Can you guys add it back (or something equally delighting)? Pretty please?
Link Building | | danatanseo1 -
Can anyone speak to the pros and cons of installing mod_expires on an Apache server?
We recently had mod_deflate and mod_expires installed on our server in an attempt to improve page speed. They worked beautifully, or at least we thought they did. Google's PageSpeed Insights tool evaluated our homepage at 65 before the install and 90 after...major improvement. However, we seem to be experiencing very slow load on our product pages. There is a feeling (not based on any quantifiable data) that mod_expires is actually slowing down our page load, particularly for visitors who do not have the page cached (which would probably be most visitors). Here are some pages to look at with their corresponding score from the PageSpeed Insights tool:
Live Sound - 91: http://www.ccisolutions.com/StoreFront/category/live-sound-live-audio
Wireless Microphones - 90: http://www.ccisolutions.com/StoreFront/category/microphones
Truss and Rigging - 79: http://www.ccisolutions.com/StoreFront/category/lighting-truss
Lightweight product detail page - 83: http://www.ccisolutions.com/StoreFront/product/global-truss-sq-4109-12-truss-segment
Heavyweight product detail page - 77: http://www.ccisolutions.com/StoreFront/product/presonus-studiolive-16-4-2
Any thoughts from my fellow Mozzers would be greatly appreciated!
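A quick way to see what the two modules are actually sending per URL (Python's requests; note that Expires/Cache-Control headers only help repeat visitors with a warm cache, so they can't slow down or speed up a first, uncached view):

```python
# Sketch: inspect the caching and compression headers mod_expires and
# mod_deflate add, plus per-URL response time (time to headers).
import requests

urls = [
    "http://www.ccisolutions.com/StoreFront/category/live-sound-live-audio",
    "http://www.ccisolutions.com/StoreFront/product/presonus-studiolive-16-4-2",
]
for url in urls:
    r = requests.get(url, headers={"Accept-Encoding": "gzip"}, timeout=30)
    print(url)
    for h in ("Content-Encoding", "Cache-Control", "Expires"):
        print(f"  {h}: {r.headers.get(h)}")
    print(f"  elapsed: {r.elapsed.total_seconds():.2f}s")
```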
Technical SEO | | danatanseo1 -
I am experiencing referrer spam from http://r-e-f-e-r-e-r.com/ (don't click) - What should I do?
It amazes me that every day in search marketing brings something new that I don't know or have never heard of. Most of you are probably familiar with referrer spam, but I hadn't ever heard of it before. I am currently experiencing referrer spam on my personal blog. What's the best way to get rid of this pest? Shall I ignore them? Block them in my robots.txt file? Use Google's Disavow? Or should I just plain holler "Curse you referral spam people!!!"? Thanks all!
White Hat / Black Hat SEO | | danatanseo0 -
Should I remove our videos from DotSub.com to try & boost our Wistia videos?
Okay all you video SEO lovers, I had a thought today that I thought would make an interesting question. I have been using the tools at DotSub.com to transcribe our videos. The tools are great and, even better, free. We have about 80 videos on DotSub and all of these videos are also on our YouTube channel. Once I've completed a transcription, I've been exporting the .srt file and uploading it to YouTube to replace the God-awful machine transcriptions (sorry Google, but they are baaaadddd). Anyway, sometimes our DotSub video will outrank the same YouTube video, sometimes not. The 80 videos on DotSub have amassed about 10,000 views and a couple of translations...which is nice, I guess? Recently, we've begun experimenting with Wistia. Here is a search term for which our YouTube and DotSub video results pretty much flood page 1: "studiolive webinar" All of the DotSub videos also exist as Wistia videos on our Website and blog. For obvious reasons, we would rather have our Wistia videos rank in those three positions which right now are dominated by the DotSub versions. Should I remove the content from DotSub in order to try to get the Wistia videos to rank instead? Or should I leave them all there? I should probably add that we do have Wistia videos that are outranking both DotSub and YouTube versions of the exact same videos...so I know that's possible. I'm just wondering whether, by leaving all of these videos up at DotSub, we are cannibalizing our potential at ranking for videos that link back to our site. What do you think?
Branding | | danatanseo0 -
301-Redirects, PageRank, Matt Cutts, Eric Enge & Barry Schwartz - Fact or Myth?
I've been trying to wrap my head around this for the last hour or so and thought it might make a good discussion. There's been a ton about this in the Q & A here. Eric Enge's interview with Matt Cutts from 2010 (http://www.stonetemple.com/articles/interview-matt-cutts-012510.shtml) said one thing and Barry Schwartz seemed to say another: http://searchengineland.com/google-pagerank-dilution-through-a-301-redirect-is-a-myth-149656 Is this all just semantics? Are all of these people really saying the same thing, and have they been saying the same thing ever since 2010? Cyrus Shepherd shed a little light on things in this post when he said that it seemed people were confusing links and 301-redirects and viewing them as being the same things, when they really aren't. He wrote, "There's a huge difference between redirecting a page and linking to a page." I think he is the only writer who is getting down to the heart of the matter. But I'm still in a fog. In this video from April 2011, Matt Cutts states very clearly that "There is a little bit of pagerank that doesn't pass through a 301-redirect," continuing on to say that if this wasn't the case, then there would be a temptation to 301-redirect from one page to another instead of just linking. VIDEO - http://youtu.be/zW5UL3lzBOA So it seems to me, it is not a myth that 301-redirects result in loss of PageRank. In this video from February 2013, Matt Cutts states that "The amount of pagerank that dissipates through a 301 is currently identical to the amount of pagerank that dissipates through a link." VIDEO - http://youtu.be/Filv4pP-1nw Again, Matt Cutts is clearly stating that yes, a 301-redirect dissipates PageRank. Now for the "myth" part. Apparently the "myth" was about how much PageRank dissipates via a 301-redirect versus a link. Here's where my head starts to hurt: Does this mean that when Page A links to Page B it looks like this: A -----> (reduces pagerank by about 15%) -------> B (inherits about 85% of Page A's pagerank if no other links are on the page). But say the "link" that exists on Page A is no longer good, but it's still the original URL, which, when clicked, now redirects to Page B via a URL rewrite (301 redirect)....based on what Matt Cutts said, does the PageRank scenario now look like this: A (with an old URL to Page B) -----> (reduces pagerank by about 15%) -------> URL rewrite (301 redirect) - reduces pagerank by another 15% --------> B (inherits about 72% of Page A's pagerank if no other links are on the page). Forgive me, I'm not a mathematician, so not sure if that 72% is right? (85% of 85% is 72.25%.) It seems to me, from what Matt is saying, the only way to avoid this scenario would be to make sure that Page A was updated with the new URL, thereby avoiding the 301 rewrite? I recently had to rewrite 18 product page URLs on a site and do 301 redirects. This was brought about by our hosting company initiating rules in the back end that broke all of our custom URLs. The redirects were to exactly the same product pages (so, highly relevant). PageRank tanked on all 18 of them, hard. Perhaps this is why I am diving into this question more deeply. I am really interested to hear your point of view.
Algorithm Updates | | danatanseo0 -
How do I follow more people on Twitter?
Hi my Moz friends, I like following all of you, a lot! Because I like following you all so much, but not so many of you like following ME, I have been cut off by Twitter and I'm not allowed to follow any more people until I have more people following ME. Sad face. I am sure that more than a few of you have been in this scenario. What should I do? I could unfollow 10-20 people, but that's not going to be enough to pacify the Twitter Gods! Thanks guys, Dana
Social Media | | danatanseo0 -
Question regarding eCommerce sites, relative URLs and security certificates
We recently installed a new SSL certificate on an ecommerce site. Our IT Director is insisting that all pages on the site must be coded in such a way that the address bar maintains a green background when a visitor is navigating the site after navigating to a secure page or logging in. I have worked on many ecommerce sites and never has this been an issue. Amazon does not use the green bar....but they are Amazon. In order for this to work, he is insisting that all internal URLs be coded as relative instead of absolute. How bad is this for SEO, or does it really not matter that much? How crucial is it for trust and security? Opinions welcome!
Conversion Rate Optimization | | danatanseo0 -
How can Google index a page that it can't crawl completely?
I recently posted a question regarding a product page that appeared to have no content. [http://www.seomoz.org/q/why-is-ose-showing-now-data-for-this-url] What puzzles me is that this page got indexed anyway. Was it indexed based on Google knowing that there was once content on the page? Was it indexed based on the trust level of our root domain? What are your thoughts? I'm asking not only because I don't know the answer, but because I know the argument will be made that if Google indexed the page then it must have been crawlable...therefore we didn't really have a crawlability problem. Why would Google index a page it can't crawl?
Intermediate & Advanced SEO | | danatanseo0 -
Why is OSE showing no data for this URL?
Hi all, Does anyone have any ideas as to why OSE might not have any data for this URL: http://www.ccisolutions.com/StoreFront/product/shure-slx24-sm58-wireless-microphone-system-j3 It is not a new page at all. It's been on the site for years. Is OSE being quirky? Or is there an underlying problem with this page? Thanks in advance for any light you can shed on this, Dana
Moz Pro | | danatanseo0 -
Is anyone here managing or doing SEO for a site using GoECart?
We are preparing to update/migrate to a new ecommerce platform. We are in the process of choosing right now. One of the things we know we want is faceted navigation, but I am well aware of the problems this presents for SEO. Are any of you amazing people here using, managing, or experienced with GoECart? I am interested to hear your feedback, particularly from an SEO viewpoint. Thanks in advance! Dana
Web Design | | danatanseo0 -
Should we include our header logo in a sprite or leave it as a regular image?
We are combining the images in our header and footer into sprites. We noticed that when we include our header logo in the sprite, we lose the "alt" text associated with the header logo. Is this undesirable? Would it be better to leave the logo in our header as an image with "alt" text? Here's the link: http://www.ccisolutions.com
Web Design | | danatanseo0 -
Is Spritepad really as easy as they make it out to be?
Hi all, Have any of you used http://wearekiss.com/spritepad to combine images into sprites after your site has already been designed? I can see how it would be easy if you were doing this work as you were building and designing a site. But, on a custom-coded site (that is not running on any well-known platform or CMS), is it really going to be a piece of cake? Interested to hear about your experiences with this tool. Thanks! Dana
Web Design | | danatanseo0 -
How can site A be outranking site B?
Hi everyone. I seem to have a lot of questions the past couple of days! Take a look at these two URLs: Site A: http://www.peachstateaudio.com/products/Mixing_Consoles/Digital_Consoles/Roland/peachstate-audio-roland-m200i Site B: http://www.ccisolutions.com/StoreFront/product/roland-m-200i-digital-mixer I will mail 2-dozen homemade chocolate chip cookies to anyone who can tell me why Site A massively outranks Site B for the term "roland m-200i" - Seriously. Thanks in advance!
Competitive Research | | danatanseo0 -
I need help compiling solid documentation and data (if possible) that having tons of orphaned pages is bad for SEO - Can you help?
I spent an hour this afternoon trying to convince my CEO that having thousands of orphaned pages is bad for SEO. His argument was "If they aren't indexed, then I don't see how it can be a problem." Despite my best efforts to convince him that thousands of them ARE indexed, he simply said "Unless you can prove it's bad and prove what benefit the site would get out of cleaning them up, I don't see it as a priority." So, I am turning to all you brilliant folks here in Q & A and asking for help...and some words of encouragement would be nice today too 🙂 Dana
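One way to generate the proof being asked for is to quantify the orphans directly. Here's a rough sketch (Python, requests + BeautifulSoup; assumes a single flat sitemap.xml and caps the crawl size) that diffs sitemap URLs against URLs reachable from the homepage via internal links; anything in the sitemap that the crawl never reaches receives no internal link equity, indexed or not:

```python
# Rough sketch: diff sitemap URLs against URLs reachable by a small BFS
# crawl of internal links. Assumes a single flat sitemap.xml; a sitemap
# index would need an extra level of parsing. Cap max_pages sensibly.
import xml.etree.ElementTree as ET
from collections import deque
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

site = "http://www.ccisolutions.com/"
sm = ET.fromstring(requests.get(urljoin(site, "sitemap.xml"), timeout=10).content)
loc_tag = "{http://www.sitemaps.org/schemas/sitemap/0.9}loc"
sitemap_urls = {loc.text.strip() for loc in sm.iter(loc_tag)}

seen, queue, max_pages = {site}, deque([site]), 2000
while queue and len(seen) < max_pages:
    url = queue.popleft()
    try:
        soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    except requests.RequestException:
        continue
    for a in soup.find_all("a", href=True):
        link = urljoin(url, a["href"]).split("#")[0]
        if urlparse(link).netloc == urlparse(site).netloc and link not in seen:
            seen.add(link)
            queue.append(link)

orphans = sitemap_urls - seen
print(f"{len(orphans)} sitemap URLs were never reached via internal links")
```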
Technical SEO | | danatanseo0 -
Am I blind or has Google finally shut down its "Related Searches" option?
I know I just used this a few days ago, so I was surprised when doing keyword research today that I could no longer access the "Related Searches" feature in Google search. Has anyone else noticed this? It's a pity if it's gone, although I think Google announced it was going to shut this down over a year ago. They said the same thing about the "Patent" search too, but it is still available. I know using "Related Searches" was really popular with SEOs so I am wondering if anyone else is as sad-faced as I am? Or perhaps was it just bumped today so Google could have fun with their April Fool's beta test of Google "Nose" ?
Keyword Research | | danatanseo0 -
Why would so many links be appearing in the source code of this page - but not on the page itself?
Hi everyone, Have a look at the source code of this page: http://www.daleproaudio.com/c-332-mixers.aspx Notice the numerous links in the source code that do not appear on the page. Is this cloaking? Or is it a by-product of their use of Nextopia? Bonus points if anyone knows what platform this site is running on 🙂 Thanks!
SERP Trends | | danatanseo0 -
Is there something fundamentally wrong with our site architecture?
Hi everyone! Could a few of you brilliant people take a look at the architecture of this site http://www.ccisolutions.com, and let me know if you see any obvious problems? I have run the site through Xenu, and all of our most important pages, including categories and products, are no deeper than level 3. Everything deeper than that is, in most cases, an image, a PDF or an orphaned page (of which we have thousands). Could having thousands upon thousands of orphaned pages be having a more hurtful effect on our rankings than our site architecture? I have made loud noises and suggested that duplicate content, site speed and dilution of page authority due to all those orphaned pages are some of the primary reasons we don't rank as well as we could. But I think those suggestions just aren't sexy or dramatic enough, so there is much shaking of heads and discussion that it must be something fundamentally wrong with site architecture. I know rearranging the furniture is more fun than scrubbing the floors, but I think our problems are more about fundamental cleanup than moving things around. What do you think?
Web Design | | danatanseo0 -
How does Google Keywords Tool compile search volume data from auto-suggest terms?
Hi everyone. This question has been nagging at my mind today ever since I had a colleague say "no one ever searches for the term 'presonus 16.4.2'." My argument is "Yes they do." It's based on the fact that when you type in "presonus 16", Google's auto-suggest lists several options, of which "presonus 16.4.2" is one. That being said, does Google's Keyword Tool base traffic estimates ONLY on actual keywords typed in by the user, in this case "presonus 16", or does it also compile data for searchers who opt for the "suggested" term "presonus 16.4.2"? To clarify, does anyone have any insight as to whether Google is compiling data on strictly the term typed in by a user or giving precedence to a term selected by a user that was listed as an auto-suggestion? Or are they being counted twice? Very curious to know everyone's take on this! Thanks!
Intermediate & Advanced SEO | | danatanseo0 -
Why would an image that's much smaller than our Website header logo be taking so long to load?
When I check http://www.ccisolutions.com at Pingdom, we have a tiny graphic that is taking much longer to load than other graphics that are much bigger. Can anyone shed some light on why this might be happening and what can be done to fix it? Thanks in advance! Dana
Technical SEO | | danatanseo0 -
What might make Bing.bot find a URL that looks like this on our site?
I have been doing something Richard Baxter recently suggested and reviewing our server logs. I have found an oddity that hopefully some of you smart Mozzers can help me figure out. Here is the line from the server log (there are many more like this): 157.55.32.166 - - [04/Mar/2013:08:00:59 -0800] "GET /StoreFront/category/www.ccisolutions.com/StoreFront/category/shure-se-earphones HTTP/1.1" 200 94133 "-" "Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)" "-" See how the www.ccisolutions.com appears after /StoreFront/category/ ? We used to see weird URLs reported in GWT that looked like this, but ever since we fixed our canonical tags to be absolute instead of relative URLs, they no longer appeared in our Webmaster Tools reports. However, it seems there is still a problem. Where/how could Bingbot be seeing URLs configured this way? Could it be a server issue, or is it most likely a data problem? Thanks in advance! Dana P.S. Could this be resulting from our massive use of relative URLs all over the site?
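If the relative-URL theory is right, the offending links can be hunted directly: an href like "www.ccisolutions.com/StoreFront/..." (no scheme, no leading slash) is resolved as a relative path and appended to the current directory, which produces exactly the doubled URL in that log line. A rough sketch (Python, requests + BeautifulSoup):

```python
# Rough sketch: flag hrefs that are neither absolute nor root-relative.
# A scheme-less href such as "www.ccisolutions.com/StoreFront/..." resolves
# relative to the current directory, yielding doubled-up crawl URLs.
import requests
from bs4 import BeautifulSoup

url = "http://www.ccisolutions.com/StoreFront/category/shure-se-earphones"
soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")

safe_prefixes = ("http://", "https://", "/", "#", "mailto:", "javascript:")
for a in soup.find_all("a", href=True):
    if not a["href"].startswith(safe_prefixes):
        print("suspect relative href:", a["href"])
```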
Technical SEO | | danatanseo0 -
Can you use a base element and mod_rewrite to alleviate the need for absolute URLs?
This is a follow-up question to Scott Parsons' question about using absolute versus relative URLs when linking internally. Andy King makes the statement that this can be done and that it saves additional space (which he claims can then improve page speed). Is this a true and accurate statement? Can using a base element and mod_rewrite alleviate the need for absolute URLs? I need to know before going off on a "change all of our relative URLs to absolutes" campaign. Thanks in advance! Dana
Web Design | | danatanseo0 -
Question about collapsible/expandable elements
I will preface this question by saying I am not a developer, so please forgive my ignorance on this one. Is there a way to achieve an expandable/collapsible element without relying on JavaScript? Thanks all! Dana
Technical SEO | | danatanseo0 -
Question about Wistia and possible other Video Solutions for better SEO?
We are considering using Wistia as a more SEO-friendly video solution. In our preliminary tests, we like what we see with the exception of one thing: there is no way for viewers to toggle the interactive transcripts on and off. From an aesthetic viewpoint, our team finds the scrolling text extremely visually distracting. For usability and SEO purposes, we know that having the transcript there is important. Unfortunately, in the embed codes you are limited to either including the interactive transcript or leaving it out. There is no mechanism to allow users to view it if they want to, but leave it off if they don't. Has anyone here created a workaround for this problem, or found another solution like Wistia that has a more aesthetically pleasing and user-friendly presentation of transcripts/captions? Thanks! Dana
Image & Video Optimization | | danatanseo0 -
How does a search engine bot navigate past a .PDF link?
We have a large number of product pages that contain links to a .pdf of the technical specs for that product. These are all set up to open in a new window when the end user clicks. If these pages are being crawled, and a bot follows the link for the .pdf, is there any way for that bot to continue to crawl the site, or does it get stuck on that dangling page because it doesn't contain any links back to the site (it's a .pdf) and the "back" button doesn't work because the page opened in a new window? If this situation effectively stops the bot in its tracks and it can't crawl any further, what's the best way to fix this? 1. Add a rel="nofollow" attribute 2. Don't open the link in a new window so the back button remains functional 3. Both 1 and 2, or 4. Create specs on the page instead of relying on a .pdf Here's an example page: http://www.ccisolutions.com/StoreFront/product/mackie-cfx12-mkii-compact-mixer - The technical spec .pdf is located under the "Downloads" tab [the content is all on one page in the source code - the tabs are just a design element]. Thoughts and suggestions would be greatly appreciated. Dana
Technical SEO | | danatanseo0 -
Is Google applying some customized search results, even when Private Browsing?
I am including a screenshot of a very interesting search result I received while InPrivate Browsing in Google using IE9. I was spot-checking some keywords while private browsing, and the first one I searched was "presonus studiolive." Then I searched a completely unrelated term, "communion supplies." I am attaching a screenshot of the search results page I then received from Google. Interesting, no? I can't even begin to wrap my head around the implications of a search results page that mixes results from two completely unrelated terms. Thoughts? [screenshot: 7QNxPHM.jpg]
Intermediate & Advanced SEO | | danatanseo0