Moz Q&A is closed.
After more than 13 years, and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content - many posts will still be possible to view - we have locked both new posts and new replies. More details here.
Posts made by Mark_Ginsberg
-
RE: Using keywords in my URL: Doing a redirect to /keyword
I agree with Ryan. The one thing to consider is whether redirects will help or hurt your site. Even appropriately redirected websites lose some link equity in the process: Matt Cutts has said in a video that roughly 10-15% of PageRank is lost through redirects and outgoing links. So if the site has existed at domain.com/post-name for a long time and has attracted links to those URLs, the small benefit you get from redirecting to domain.com/keyword/post-name may be outweighed by the natural loss of link equity.
-
RE: Business Name is Meta Description
I'd go with Business Name, because it's more likely to be searched. Searchers like to see content that matches their exact query.
Also, personally I hate when people use copyright/trademark symbols in copy when they don't have to. Others may disagree; that's just me!
-
RE: How to handle (internal) search result pages?
If none of these pages are indexed, you can block them via robots.txt. But if someone else links to a search page from somewhere on the web, Google might include that URL in the index anyway, and it'll just show up as a blank entry: because the page is blocked via robots.txt, Google can't crawl it to see a directive telling it not to index the page.
-
RE: How to handle (internal) search result pages?
Blocking the pages via robots.txt prevents the spiders from reaching those pages. It doesn't remove pages that are already in the index; it just prevents the bots from crawling them again.
If you want these pages removed from the index, ideally you do it with the noindex tag - which the crawlers can only see if the pages aren't blocked in robots.txt.
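For reference, the noindex directive is a meta tag placed in the page's head, and crawlers can only obey it if the page itself remains crawlable (i.e. not disallowed in robots.txt):

```html
<!-- placed in the <head> of each internal search result page -->
<meta name="robots" content="noindex, follow">
```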
-
Blocking Affiliate Links via robots.txt
Hi,
I work with a client who has a large affiliate network pointing to their domain, which is a large part of their inbound marketing strategy. All of these links point to a subdomain, affiliates.example.com, which then sends them through a 301 redirect to the relevant target page. These links have been showing up in Webmaster Tools as top linking domains and also in the latest downloaded links reports. To follow guidelines and ensure that these links aren't counted by Google for either positive or negative impact on the site, we have added a block to the robots.txt of the affiliates.example.com subdomain, preventing search engines from crawling the full subdomain. The robots.txt file is the following code:
User-agent: *
Disallow: /
We have authenticated the subdomain with Google Webmaster Tools and made certain that Google can reach and read the robots.txt file, so we know crawlers are being blocked from the affiliates subdomain. However, we added this block to the robots.txt a few weeks ago, and links are still showing up in the latest downloads report as first discovered after we added the block. It's been a few weeks already, and we want to make sure that the block was implemented properly and that these links aren't being used to negatively impact the site.
Any suggestions or clarification would be helpful: if the subdomain is blocked for the search engines, why are they still following the links and reporting them as latest links in the www.example.com GWMT account? And if the block is implemented properly, will the total number of links reported in the "links to your site" section be reduced, or does the block not affect that figure?
From a development standpoint, it's a much easier fix for us to adjust the robots.txt file than to change the affiliate linking connection from a 301 to a 302, which is why we decided to go with this option.
Any help you can offer will be greatly appreciated.
Thanks,
Mark
-
RE: Please Settle a Bounce Rate Debate
The form could trigger a Google Analytics event on successful submission without having to take you to a confirmation page. AJAX forms often don't load a new page at all, and you can track a successful submission with a Google Analytics event rather than a pageview of a thank-you page. A very popular WordPress plugin that works this way is Contact Form 7.
When your form "wipes the data", as you said, and shows the customer the successful submission, you can trigger a Google Analytics event at that moment.
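To sketch what that can look like with the classic ga.js async syntax (the function name and the category/action/label values below are illustrative assumptions, not anything Contact Form 7 requires):

```javascript
// Command queue used by the classic ga.js async snippet; the real
// tracking code on the page defines and consumes this.
var _gaq = _gaq || [];

// Hypothetical callback: wire this to the AJAX form's success handler
// instead of redirecting to a thank-you page.
function onFormSuccess() {
  // Category / action / label here are illustrative values.
  _gaq.push(['_trackEvent', 'Contact Form', 'Submit', 'Success']);
}
```

Whatever form plugin you use, the idea is the same: hook its success callback and push the event from there.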
Mark
-
RE: Please Settle a Bounce Rate Debate
I don't think this should be counted as a bounce, because the visitor converted by filling out the form. Analytics, however, may track it as a bounce, because the visitor left after one page and the form submission may not be counted as an interaction. I would have the form fire an event upon successful completion; by default, an event counts as an interaction and thus prevents the visit from being counted as a bounce.
See this resource here from Google Analytics - https://developers.google.com/analytics/devguides/collection/gajs/eventTrackerGuide#non-interaction
In particular, this sentence - describing the default treatment of events, as long as you don't flag them as non-interaction events: "a single-page session on a page that includes event tracking will not be counted as a bounce if the visitor also triggers the event during the same session."
So set up an event to capture form submission, and this should solve your one page visit/form submission/bounce rate quandary.
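For completeness, in ga.js this behaviour is controlled by the optional fifth argument of _trackEvent (opt_noninteraction); the category/action names below are just illustrative:

```javascript
// Command queue used by the classic ga.js async snippet.
var _gaq = _gaq || [];

// Default: the event counts as an interaction, so a single-page visit
// that fires it is not counted as a bounce.
_gaq.push(['_trackEvent', 'Form', 'Submit']);

// With true as the fifth argument (opt_noninteraction), the event is
// ignored for bounce-rate purposes. The fourth argument is the optional
// integer event value.
_gaq.push(['_trackEvent', 'Banner', 'AutoRotate', 'Homepage', 0, true]);
```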
Good luck,
Mark
-
RE: Why are plus signs (+) suddenly showing up in Google Analytics organic search keywords reports?
Not sure why this is growing recently, but when I was learning regex for Google Analytics with the awesome LunaMetrics regex guide, I remember coming across the need to write brand names for advanced segments so as to cover the possibility of two words being written with or without a space. I don't remember exactly where I saw it, but since then I've been writing them with (\s|\+). For example, if I were writing an advanced segment for the brand seomoz and wanted to cover both "seo moz" and "seomoz", I would write seo(\s|\+)?moz.
Basically, the regex for a space is \s, but Analytics sometimes records spaces as +, so to cover your bases you match either \s or an escaped \+ (the plus needs escaping because it's a regex quantifier).
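As a quick illustration of the pattern, here it is in JavaScript, whose regex flavour treats \s and \+ the same way, using the same seomoz example:

```javascript
// Matches the brand whether the words are joined, separated by a
// literal space, or separated by "+" (as Analytics sometimes records
// spaces). The "+" is escaped because it is a regex quantifier.
var brand = /seo(\s|\+)?moz/;

console.log(brand.test('seomoz'));  // true
console.log(brand.test('seo moz')); // true
console.log(brand.test('seo+moz')); // true
```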
My point is, this has been around for a while - I'm not sure why the sudden increase. Maybe try drilling down a bit to see if you can find a common denominator in the traffic that's causing it.
Mark
-
RE: Google Analytics: how to filter out pages with low bounce rate?
I would also look into sorting the data by the metrics you want using weighted sort instead of the default sort. Weighted sort takes other metrics into account as well, so when you sort by bounce rate it doesn't just put 100% bounce rate pages at the top (even ones skewed by having a single view), but gives you a much better picture of the pages that are actually getting visits and performing poorly.
You can read more about weighted sort here on the GA blog - http://analytics.blogspot.co.il/2010/08/introducing-weighted-sort.html
Hope this helps,
Mark