Posts made by Blenny
-
RE: Is there a way to tell which redirect - from another domain - is driving traffic to your website?
Nice. My last question, if you don't mind, is - do you feel there would be an impact on rankings/traffic if we went back, a year and a half later, to edit the redirect file?
-
RE: Is there a way to tell which redirect - from another domain - is driving traffic to your website?
Thanks Logan -
Just to be clear - in the redirect file, we'd then have a setup that is basically (old domain URL) redirected to (new domain URL + UTM parameters)?
That's interesting. I'd never heard of using the UTM codes in a redirect file.
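In case it helps picture it, each redirect map entry would pair an old URL with its new destination plus tracking parameters. A minimal sketch of building such a target URL (all domain names and tag values are hypothetical):

```python
from urllib.parse import urlencode, urlparse, urlunparse, parse_qsl

def add_utm(url, source, medium, campaign):
    """Append UTM parameters to a redirect target URL,
    preserving any existing query string."""
    parts = urlparse(url)
    query = parse_qsl(parts.query)
    query += [("utm_source", source), ("utm_medium", medium),
              ("utm_campaign", campaign)]
    return urlunparse(parts._replace(query=urlencode(query)))

print(add_utm("https://www.newdomain.com/category",
              "olddomain", "redirect", "acquisition"))
# https://www.newdomain.com/category?utm_source=olddomain&utm_medium=redirect&utm_campaign=acquisition
```

The old-domain URL would then 301 to that tagged target, so visits arriving via the redirect show up under the chosen source/medium in analytics.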
-
RE: Is there a way to tell which redirect - from another domain - is driving traffic to your website?
Thanks Peter.
The source does not show up in our analytics. To be clear, the site is no longer active; we just redirected its major URLs to appropriate locations on our website.
We can definitely try to take a look at the server logs though - thanks!
-
Is there a way to tell which redirect - from another domain - is driving traffic to your website?
My company acquired another domain/website several years ago. We want to see - for accounting purposes - how much traffic/revenue that site's REDIRECTS are driving to our website now. Is there any way to pull this information?
I thought of comparing our traffic/revenue changes against the date the site was originally redirected - but so much has changed since the acquisition that I wouldn't have faith in that value. Thanks for any assistance you can provide.
-
RE: Brand in Title Tag - a Ranking Factor for Scaling Big Websites?
Thanks FCBM!
I hadn't considered the reformatting Google tends to do anyway. At this point, I haven't seen anywhere this is the case 'in the wild' for our brand (maybe they've recognized that it doesn't affect the click-through on our stuff?). They simply let the titles cut off, since most are over the 55-character or so limit.
I think a test is in order as well. Thanks!
-
Brand in Title Tag - a Ranking Factor for Scaling Big Websites?
I'm in the middle of redesigning title tags on a large ecommerce site - approximately 9000 product pages.
The old structure was - (product name/description) | (Website/Brand)
So an example would be -
Big League Chew - 13 oz. | Target - with 'Target' being the site's brand and appearing in each.
With Google's new title tag display, our title tags are too long now. Unfortunately, our brand/website name is HUGE - over 18 characters. My question is twofold -
1. Is it OK to remove brand from the title tags of some particularly long names? Will this impact ranking?
2. Does Google look for brand in these title tags, and more specifically: brand consistency in title tags?
I'd love to cut the brand out of some as the product name is the biggest click-through element by far - but I don't want to affect rankings. My 'gut' says that I should focus on clickthrough rate with title tags and cut brand where necessary. Does anyone have thoughts on this?
-
RE: Moz Company & Personal Account Conflict
Yeah exactly - keeping that personal identity is where search is headed - it'd be nice if Moz would develop a similar model :).
-
Moz Company & Personal Account Conflict
Hey Moz -
I just had a question regarding the Moz Pro Account.
Sometimes I find myself torn between using my personal Moz account, and the Moz Pro Account associated with my company (and my team).
Is there a way to link a personal account WITH a company pro account, so I could post as myself, build my own cred, but still enjoy the benefits of Pro on my company's dime?
Thanks!
-
RE: Acquired Old, Bad Content Site That Ranks Great. Redirect to Content on My Site?
Thanks Andy and Travis -
Yeah, I think the enhancement and/or citation route is probably the safest for us. Thanks for your help!
-
RE: Acquired Old, Bad Content Site That Ranks Great. Redirect to Content on My Site?
Thanks Bill -
You're exactly right - I'm definitely not considering redirecting the entire site, for the reasons you point out - there ARE some shady practices on the product pages in particular.
What I'm wondering, though, is whether I can at least glean some content authority from individual articles and such?
-
RE: Acquired Old, Bad Content Site That Ranks Great. Redirect to Content on My Site?
Just to clarify the whole 'bad' content thing - the content is outdated but was good in its day, and did acquire links. It needs to be updated and improved, which I am prepared to do.
-
Acquired Old, Bad Content Site That Ranks Great. Redirect to Content on My Site?
Hello. My company acquired another website. This website is very old, and the content within is decent at best, but it still manages to rank very well for valuable phrases. Currently, we're leaving the entire site active on its own for its brand, but I'd like to at least redirect some of the content back to our main website. I can't justify spending the time to create improved content on that site rather than our main site, though.
What would be the best practice here?
1. Cross-domain canonical - and build the new content on our main website?
2. 301 redirect the old articles to new locations containing the better articles?
3. Leave the content where it is - you won't be able to transfer the rankings across domains.
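For reference, option 1 would mean the old site's article carries a cross-domain canonical pointing at the improved version on the main site - something like this in its head (both URLs hypothetical):

```html
<!-- On oldsite.com/some-article/, pointing at the rewritten copy -->
<link rel="canonical" href="https://www.mainsite.com/improved-article/" />
```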
Thanks for your input.
-
RE: Facebook Places & Business Pages - Any way to consolidate
Thanks David, but at this point that information is outdated. I recently sent a request directly to Facebook regarding this issue - but at the current time, there doesn't seem to be a way to combine the two (at least that I, or anyone I've spoken to can find).
-
RE: AJAX and Bing Indexation
Hi, thanks for your response, and I apologize for the delay in responding!
In our current state, removing the AJAX links would be extremely difficult.
We do actually have the AJAX Crawling Protocol in place, which is, conceivably, why Google is able to crawl us and our rankings are basically unchanged.
After speaking again with Bing's support, they did acknowledge that they DO follow the _escaped_fragment_ we set up, but that a rel="canonical" tag to the non-AJAX version was creating what they called an infinite indexation loop - whereby a JavaScript redirect at the non-AJAX URL sent them to the AJAX version, and the rel canonical sent them back to the non-AJAX. They suggested that if we wanted them to index the 'pretty' AJAX version, we remove the rel canonical pointing to the non-AJAX URL. They didn't suggest putting the pretty AJAX URL in the rel canonical - I'm wondering if that may be a solution? Ideally, we'd have them index the non-AJAX URL (though it seems like that isn't possible?). Sorry this is so convoluted!
In the meantime, we've removed the rel canonical entirely from this level of our website... but at the moment, rankings haven't really been affected.
Any suggestions? It feels like AJAX may be completely inadvisable for Bing.
-
AJAX and Bing Indexation
Hello. I've been going back and forth with Bing technical support (which I have to say is pretty helpful - you do get a personal, thoughtful response pretty quickly from Bing) regarding a crawling issue on our website.
Currently, our website is set up with a JavaScript redirect to send users/crawlers to an AJAX version of the site. For example, they come into mysite.com/category... and get redirected to mysite.com/category#!category. This is to provide an AJAX search overlay, which improves UEx. We are finding that Bing gets 'hung up' on these AJAX pages, despite the AJAX protocol being in place. They say that if the AJAX redirect is removed, they would crawl and index the non-AJAX URL correctly - at which point our indexation would (theoretically) improve.
I'm wondering if it's possible (or advisable) to direct the robots to crawl the non-AJAX version while users get the AJAX version. I'm assuming it's the classic issue - the bots want to see exactly what the users see - but I wanted to post here for some feedback. The reality of the situation is that the AJAX overlay is in place, and our rankings in Bing have plummeted as a result.
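For context, Google's AJAX crawling scheme maps each '#!' URL to an '_escaped_fragment_' URL that the crawler actually requests from the server. A minimal sketch of that mapping (the site URL is hypothetical):

```python
from urllib.parse import quote

def escaped_fragment_url(pretty_url):
    """Map a #! 'pretty' AJAX URL to the _escaped_fragment_
    form a crawler following the scheme would request."""
    if "#!" not in pretty_url:
        return pretty_url  # not an AJAX-crawlable URL; returned unchanged
    base, _, fragment = pretty_url.partition("#!")
    sep = "&" if "?" in base else "?"
    # the fragment value is percent-encoded per the scheme
    return base + sep + "_escaped_fragment_=" + quote(fragment, safe="")

print(escaped_fragment_url("https://mysite.com/category#!category"))
# https://mysite.com/category?_escaped_fragment_=category
```

The server is then expected to answer that `_escaped_fragment_` request with an HTML snapshot of the AJAX state.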
-
BingHoo Tools No Longer Directly Supporting AJAX
This post was derived from the fact that our website, which uses AJAX and AJAX URLs formatted per Google's crawl guidelines (and ranks well), is being completely misread by BingHoo's bots (which is destroying our rankings).
After some research, we found that Bing's tools used to contain an option for their crawlers to interpret AJAX-infused urls, but that this feature was removed with the latest update.
I've seen others post on this issue with no response, so I figured I'd post the customer support email we received below - kind of a strange thing to receive. Takeaway: even AJAX done right is rough in BingHoo.
(Sorry, I can't post my site here.)
Hello,
This is Roxanne of Bing Technical Support and I will be assisting you with this issue.
I understand that you cannot find the Configure Bing for AJAX Crawling box in Bing Webmaster. Let me explain.
We appreciate your feedback about Bing Webmaster tools. We regret to inform you that Bing Webmaster Tools is no longer directly supporting AJAX. We'll pass this feedback onto our Bing development team.
If you are having ajax-specific related issues with your site in Webmaster tools, please let us know.
We apologize for the time spent and the inconvenience this may have caused you. If you should require further clarification and need more assistance, please feel free to reply to this email.
Thank you and have a great day!
Regards,
Roxanne
Bing Technical Support
-
RE: Does Bing(and Yahoo)Crawl AJAX Based Content?
Hi Brandon,
This is a great question, and one I cannot find a straight answer for either. I found the article you reference as well, and wasn't able to find the enable option either. If I had to guess, I'd say no - they don't know how to handle it. My site was recently changed to integrate an AJAX-based search product at the category page level. The URL structures and server responses are designed to Google's standards for AJAX crawlability, but our rankings on these category pages have basically disappeared in Bing/Hoo. The underlying HTML is full of crawlable links as well, to compensate. Hopefully someone chimes in with an answer.
-
RE: Proper way to 404 a page on an Ecommerce Website
Hi Nakul,
I appreciate your willingness to help! We actually resolved the issue with help from our developer - a standard 404 page, for both viewers and bots - and we've implemented a routine to regularly search for viable redirects to eliminate as many 404s as possible.
On a related note - pretty good post on SEOmoz blog today about this very topic - coincidence?!
-
RE: Proper way to 404 a page on an Ecommerce Website
Hi Nakul,
These products would be gone forever - like a discontinued item.
The 302 to 404 is my main concern - I agree with each of you that from a UEx perspective redirecting to relevant category pages is ideal.
Is this a standard way of setting this up on a large website? (I didn't set it up, and it seems strange to me.) Is there a better way, strictly from the search engine perspective?
Thanks.
-
RE: Proper way to 404 a page on an Ecommerce Website
Thanks Sean. I'm not too concerned about 301s in this case - any products that are worth the link would show up in webmaster tools - and search engines expect a certain number of 404s returned. I'm just wondering if this is the correct way to report the 404 error to the engines - with a 302 onto a 404 page? Temporary redirects have me scared. Thanks.
-
Proper way to 404 a page on an Ecommerce Website
Hello. I am working on a website that has over 15000 products.
When one of these is no longer available - like it's discontinued or something - the page it's on 302s to a 404 page.
Example - www.greatdomain.com/awesome-widget
(the awesome widget is no longer available)
302s to
www.greatdomain.com/404-page
For the most part, these are not worthy of 301s because of lack of page rank/suitable LPs, but is this the correct way to handle them for search engines? I've seen varying opinions.
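To make the concern concrete, here's a small sketch that classifies the status-code chain a crawler would see for a removed product (pure illustration - the function and its labels are my own, not anything the site implements):

```python
def classify_chain(status_codes):
    """Classify how a removed product URL answers crawlers, given the
    HTTP status codes seen along the redirect chain, first request first."""
    if status_codes == [404]:
        return "clean 404 at the original URL"
    if status_codes and status_codes[0] in (301, 302) and status_codes[-1] == 404:
        kind = "301" if status_codes[0] == 301 else "302"
        return kind + " redirect onto a 404 page - the original URL never returns 404 itself"
    return "other"

print(classify_chain([302, 404]))
# 302 redirect onto a 404 page - the original URL never returns 404 itself
```

The worry in the question is exactly the second case: with a 302 in front, the discontinued URL itself never answers 404, so engines may keep treating it as temporarily moved.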
Thanks!
-
RE: New CMS system - 100,000 old urls - use robots.txt to block?
Great stuff..thanks again for your advice..much appreciated!
-
RE: New CMS system - 100,000 old urls - use robots.txt to block?
Thanks for the advice! The previous website did have a robots.txt file with a few wildcards declared. A lot of the URLs I'm seeing are NOT indexed anymore, and haven't been for many years.
So, I think the 'stop the bleeding' method will work, and I'll just have to proceed with investigating and applying 301s as necessary.
Any idea what kind of an impact this is having on our rankings? I submitted a valid sitemap, crawl paths are good, and major 301s are in place. We've been hit particularly hard in Bing.
Thanks!
-
RE: New CMS system - 100,000 old urls - use robots.txt to block?
Thanks a lot! I should have been a little more specific. My exact question is: if I move the crawlers' attention away from these 'Not Found' pages, will that benefit the indexation of the now-valid pages? Are the 'Not Founds' really a concern? Will this help my indexation and/or rankings?
Thanks!
-
New CMS system - 100,000 old urls - use robots.txt to block?
Hello.
My website has recently switched to a new CMS system.
Over the last 10 years or so, we've used 3 different CMS systems on our current domain. As expected, this has resulted in lots of URLs.
Up until this most recent iteration, we were unable to 301 redirect or use any page-level indexation techniques like rel='canonical'.
Using SEOmoz's tools and GWMT, I've been able to locate and redirect all pertinent, PageRank-bearing, 'older' URLs to their new counterparts... however, according to Google Webmaster Tools' 'Not Found' report, there are literally over 100,000 additional URLs out there that it's trying to find.
My question is: is there an advantage to using robots.txt to stop search engines from looking for some of these older directories? Currently, we allow everything - only using page-level robots tags to disallow where necessary.
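For reference, blocking an old directory wholesale is just a couple of lines in robots.txt (the directory names here are hypothetical):

```
User-agent: *
Disallow: /old-cms/
Disallow: /cgi-bin/catalog/
```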
Thanks!
-
Facebook Places & Business Pages - Any way to consolidate
Hello,
I've read multiple posts on this, and on the new timeline layout rolling out for business pages... but I haven't seen an update on this issue.
My company has a retail store and an online store. Because of this, we had to create a places page and a regular business page. When people "check-in" via Facebook, the check-in goes to the places page. But our main marketing efforts and large fan base are at the business page.
Facebook used to allow you to combine these. In a perfect world, all would live on one business page.
Has anyone heard an update on this feature?
Thanks either way.
-
RE: Rel Canonical tag usage on ECommerce website
Thanks! I understand what you're saying, and I agree... this is exactly the way our CMS generates these pages. The crawlable, additional pages are unique and should be crawled. That being said, from a search engine's perspective, the obvious 'canonicalized' page should be the main category. I believe the robots 'noindex, follow' is the best option for me - though I'm not exactly sure how to implement it with our CMS system. Thanks.
-
RE: Rel Canonical tag usage on ECommerce website
Thanks!
Hadn't considered the robots tag like this. Unfortunately, our site's CMS system will make either of these options tough to actually implement. But it's great to know there are some options.
-
RE: Search directory - How to apply robots
That was my first instinct as well - but I wanted to check if there was a better move? Thanks for your help!
-
RE: Rel Canonical tag usage on ECommerce website
Wow, thanks a lot - I hadn't heard this was even available. Any chance you could give me a link to where I could find info on implementing it?
Thanks again for your help, either way!
-
Rel Canonical tag usage on ECommerce website
Hello,
I have read up on the rel canonical tag and I'm ready to apply it to my site's categorization structure.
However, I'm concerned that, because my website does not have a "view all" button for our product pages, the rel canonical tag would not be appropriate.
For example, if you come to my site's main category url, you come to
At this level - you get the top 12 items in the category.
If you want to see the next page, you click a crawlable link that goes to
etc. etc.
The site does not offer a view all function.
Would applying the rel canonical tag be appropriate in this instance, or do I have to let Google crawl and index each page independently?
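For context, the usual markup for a paginated series without a view-all page is rel="prev"/rel="next" links rather than a canonical to page one - e.g., on page 2 of a category (the URLs are hypothetical):

```html
<!-- In the head of /widgets/page/2/ -->
<link rel="prev" href="https://www.example.com/widgets/" />
<link rel="next" href="https://www.example.com/widgets/page/3/" />
```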
Thanks.
-
Search directory - How to apply robots
Hi.
On the site I'm working on, we use a search directory to display our search results. It displays as follows -
With the dynamic search results appearing after the hash.
Because of the structure of the website, much of the left-hand nav defers back to this directory. I know that most websites 'noindex, nofollow' their search results pages, but because customers can generate these pages so easily, I'm afraid that if I do this, we'll miss out on the inevitable links customers will provide... and even though it's just the main search directory, those links will still help my domain. The search is all JavaScript-generated, so there's nothing for spiders to follow within this directory - save the standard category nav.
How should I handle this?
Thanks.
-
RE: ECommerce Site Breadcrumbs Best Practice
Thanks a lot Marcin. This totally confirmed what I was thinking and I appreciate the source links.
-
ECommerce Site Breadcrumbs Best Practice
I'm working on an Ecom website and I was wondering -
For breadcrumbs - is there an SEO and/or UEx preference when it comes to taking them back to the homepage?
I have the option of going CATEGORY > SUB CATEGORY > SUB CATEGORY
or
HOME > CATEGORY > SUBCATEGORY > SUBCATEGORY
Each example is hyperlinked except for the lowest level.
Thanks
-
RE: CSS Hiding Text - Does this matter to search engine crawlers
Thanks for taking the time to reply. This is what I thought... so you're saying that Google can access/activate the CSS files? I think our developer assumes that Google doesn't activate the styles, so the links underneath would be fine. Thanks!
-
CSS Hiding Text - Does this matter to search engine crawlers
Hello,
I'm working on a site where a developer is using CSS to mask crawlable links underneath. Then, JavaScript advanced-search links go on top of this. So, if you disable JavaScript but have CSS enabled, you don't have a left-hand nav. With both CSS and JavaScript disabled, you have a fully crawlable website.
Is this a red flag? I understand a user without JavaScript would have a problem (since most people don't disable CSS), but is this a problem for search crawlers?
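For illustration, the setup described amounts to something like this (the selector name is hypothetical):

```css
/* Crawlable nav links sit underneath; JavaScript layers the
   advanced-search nav on top. With CSS on, this rule hides the
   underlying links for everyone who applies styles. */
#crawlable-nav {
  display: none;
}
```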
Thanks!
-
RE: Video Rankings and Optimization Issues
Hi Dylan,
Wow, I totally should have read your question before I posted one of my own.
First of all, no hard evidence on this... but it appears to me that Google is giving a page a rankings boost when a video is embedded on it - and in my case, these videos are hosted in a variety of places, not just YouTube (one on our site, one on a product manufacturer's). Conveniently enough, they also give you a chance to gain more of the SERP by pulling the video out with a thumbnail.
In my case, the two pages became visible in brand searches for my company, though those pages were NOT optimized at all for our brand terms and have nothing but internal links pointing to them. Investigation is ongoing - I haven't checked whether they rank for what they're optimized for, but that's probably moot since they ranked well before.
Anyway, the takeaway for me appears to be that Google is giving more love to videos as quality content and a ranking factor - and the hosting platform doesn't matter - but my evidence is minimal at this point.
Also, all searches were performed on universal Google, not on YouTube.
-
Videos increase ranking of products in SERPS from Ecommerce Website
Just noticed something I've never seen before... and I just wanted to see if anyone else has experienced this.
I work for a 15,000+ item ecommerce website, and today I noticed that on a few brand searches, several individual product pages were coming up. This is actually unusual, because most of our individual item pages (including these) aren't ranked well enough to show up in a brand search (and don't try to target brand terms either), but a correlation here was that both items contained embedded videos. These were not videos hosted on our YouTube brand page either... these were videos done by separate manufacturers - one was hosted on their site, one on ours. Google actually pulled a snapshot of the video into the SERP as well, even though it was embedded within other product copy.
Has anyone else noticed preferential treatment given to effectively random items on your ecommerce website because they were augmented by video? I can assure you there was nothing otherwise unique about these products, and they're not really that sought after. Neither the items nor the URLs were new, and neither were the videos within. Also, this was a universal Google search, not a video search.
(Sorry, I'm not allowed to reference directly).
Thanks.
-
RE: Root domain registered in search engines, inbound links to www sub-domain. A problem?
Hey Josh,
Google does differentiate the two in terms of applying link juice, but unless for some reason they deliver different content, it's not a major problem. My website has had this issue for a while due to issues with our CMS system. There is an easy fix, though: try incorporating the rel canonical tag into your pages. There are lots of great write-ups on SEOmoz and elsewhere on the usefulness of this tag - it essentially tells Google which version of the URL to credit the link juice to and to keep in its index.
Then, as you said, I would change the internal link structure to reflect the 'www' or 'non-www' version, depending on which one has the most links (keep the one most people have linked to, as determined by an Open Site Explorer report from SEOmoz), so as to maintain consistency from that point forward. Ideally, you'd be able to 301 the less-linked version to the other, since you only lose a fraction of link juice with a 301.
Done correctly, you may actually gain engine authority, since both the non-www and www versions would 'funnel' all of their link juice together.
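For reference, the tag itself is just one line in the head of each page, pointing at the preferred hostname's version of that page (the URL here is hypothetical):

```html
<!-- On both example.com/page/ and www.example.com/page/ -->
<link rel="canonical" href="https://www.example.com/page/" />
```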
Good luck!
-
RE: 301ing entire E-Com website - Using TopPages Report as a Guide
Hi Sha, thanks for your response.
1. Yes, we will use the same domain, but the structure will be completely changed.
2. Unfortunately, our current CMS is so backward that it will be a complete file-name and directory change.
Thanks!
-
RE: 301ing entire E-Com website - Using TopPages Report as a Guide
Thanks... great answer... and yeah, that's pretty much how I feel about that last sentence... ugghhh.
-
301ing entire E-Com website - Using TopPages Report as a Guide
Hi Folks,
My company is in discussions with an ERP provider to migrate our existing website to a new backend system, new URL structures... the whole nine yards.
We have over 15,000 products and a lot of 'odd' URL structures created by an outdated system that we were unable to adjust. All in all, if you're talking about a raw URL count, it's in the hundreds of thousands.
During discussions, we were told that bulk 301 redirects would be a problem. They could be performed manually, though (greaaattt).
Due to this, I ran an Open Site Explorer report and isolated our top pages. After exporting the CSV, I was able to break out 'all' URLs that have links from other domains.
My question is: has anyone used Open Site Explorer on a website of similar size to form the basis of a migration? Do you have enough confidence in the tool to use it this way, or should we renegotiate our agreement until a way can be found to mass-301 ALL URLs? I'm at least a little concerned that Open Site Explorer isn't indexing all of the links out there. Gasp... or would there be a better tool to accomplish what I'm trying to do?
Thanks!