Posts made by CleverPhD
-
RE: URL Structure Question
I think that either way you will probably be OK, but I would lean toward removing the /career-resources/ folder as it is probably not needed. I think you could just have .com/career-resources.html as your index page and then link to all the topic folders from there. Any time you can have a file that is closer to the root, that is an indicator of the importance of the URL, so that helps as well. Also, I would not mess with index.html file names; just end the folder in a slash, e.g. .com/resume-tips/. A lack of a page name in a folder means it is the index page. Nobody goes to google.com/index.html or moz.com/index.html; same thing with folders.
-
RE: GWT and html improvements
We had the same issue on one of our sites. Here is how I understand it after looking into it and talking to some other SEOs.
The duplicate Title and Meta Description reports seem to lag any 301 redirects or canonicals that you might implement. We went through a massive site update and had 301s in place for over a year, with "duplicates" still showing up in GWT for the old and new URLs. Just to be clear, we had the old URLs 301ing to the new ones for over a year.
What we found, too, was that if you looked in GWT under the top landing pages, we would have old URLs listed there as well.
The solution was to put self-canonicalizing links on all pages that were not canonicalized to another one. This cleaned things up over the next month or so. I had already checked my 301 redirects, removed all links to old content on my site, etc.
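For anyone unfamiliar, a self-canonical is just a canonical tag in the head of the page pointing at the page's own URL; something like this (the URL here is a placeholder):
<link rel="canonical" href="http://www.example.com/your-page/" />
Every page either carries its own URL in that tag or the URL of the page it is canonicalized to.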
What I still find are a few more "duplicates" in GWT. This happens with two types of URLs:
-
We have to change a URL for some reason - we put in the 301. It takes a while for Google to pick that up and apply it to the duplicate content report. This is even when we see it update in the index pretty quickly. As I said, the duplicate report seems to lag the other reports.
-
We still have some very old URLs where it has taken Google a while to "circle back," check them, see the 301 and the self-canonical, and fix them.
I am honestly flabbergasted and surprised at how slow Google is about this. I have talked with a bunch of people just to make sure we are not doing anything wrong with our 301s, etc. So, while I understand what is happening, and see it improving, I still don't have a good "why" this happens when, technically, I have everything straight (as far as I know). The self-canonical was the solution, but it seems that a 301 should be enough. I know there are still old links to old content out there; that is the one thing I cannot update, but I am not sure why that alone would cause this.
It is almost like Google has an old sitemap it keeps crawling, but again, I have that cleared out in Google as well.
If you double check all your stuff and if you find anything new, I would love to know!
Cheers!
-
-
RE: What is Happening to Me?!
I had "via this intermediate link" when I had a link from another website to an old URL that I had a 301 redirect to the new URL. The link was credited in GWT to the new URL. I had a bunch of links that were being caught by a default 301 to a main directory. I used the "via intermediate URL" to then find what old URLs still had links and then change the 301s for those specific old URLs to more appropriate URLs on my site.
Run those intermediate links through your browser, etc., and see if that tells you anything. What is strange is that the parameter in the URL is another URL. You may want to check those out separately too.
-
RE: Blocked by Meta Robots.
Actually, it would not be the meta robots noindex. The meta tag does not prevent Google from crawling the page it is on. If it did that, then Google would not be able to crawl the page and then it would not be able to read the tag :-). The meta robots tag will tell Google to remove the page from the index, and so it is very effective for that application.
That said, the GWT warning is probably related to your robots.txt file located at
http://www.yourdomain.ext/robots.txt
Put that in your browser and see if you have any of your files or pages Disallowed in that file. If that is the case, then Google will not be able to spider a page to start with, let alone read the meta tags. Do some searching on Google on how robots.txt works; Moz obviously has a good guide:
http://moz.com/learn/seo/robotstxt
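To give you an idea of what to look for, a blocking rule in robots.txt looks something like this (the paths here are just placeholders):
User-agent: *
Disallow: /some-folder/
Disallow: /some-page.html
Any URL matching a Disallow line cannot be crawled, which is what triggers that kind of warning.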
Here is a video on WordPress and robots.txt - it may or may not relate to your config, but it will show a plugin that you can use to make adjustments:
http://www.youtube.com/watch?v=JY9A5OqHTvw
Work on understanding how it functions and then figure out what you need to update. Get with your IT person or whoever admins your site.
-
RE: What is Happening to Me?!
Maybe the Moz folks can change this from a "question" to an "ongoing discussion"? Not sure if that is possible. Thanks for the updates Jesse.
-
RE: Is it dangerous to use "Fetch as Google" too much in Webmaster Tools?
I would say it is not a preferred way to alert Google when you have a new page, and it is pretty limited. What is better, and frankly more effective, is to do things like:
- add the page to your XML sitemap (make sure sitemap is submitted to Google)
- add the page to your RSS feeds (make sure your RSS is submitted to Google)
- add a link to the page on your home page or other "important" page on your site
- tweet about your new page
- status update in FB about your new page
- Google Plus your new page
- Feature your new page in your email newsletter
Obviously, depending on the page you may not be able to do all of these, but normally, Google will pick up new pages in your sitemap. I find that G hits my sitemaps almost daily (your mileage may vary).
I only use fetch if I am trying to diagnose a problem on a specific page, and even then, I may just fetch but not submit. I have only submitted when there was some major issue with a page that I could not wait for Google to update as a part of its regular crawl of my site. As an example, we had a release go out with a new section and that section was blocked by our robots.txt. I went ahead and submitted the robots.txt to encourage Google to update the page sooner so that our new section would be "live" to Google sooner, as G does not hit our robots.txt as often. Otherwise, for 99.5% of the other pages on my sites, the options above work well.
The other thing is that you get very few fetches a month, so you are still very limited in what you can do. Your sitemaps can include thousands of pages each. Google fetch is limited, which is another reason I reserve it for my time-sensitive emergencies.
-
RE: Will implementing 301's on an existing domain impact massively on rankings?
301ing non www to www or vice versa is best practice to prevent / minimize duplicate content. You want to have it in place.
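If your site runs on Apache with mod_rewrite, the rule usually looks something like this in .htaccess (swap in your own domain; the syntax differs on Nginx or IIS):
RewriteEngine On
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]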
-
RE: What is Happening to Me?!
The annual price you pay for Screaming Frog is worth it. Considering your situation, I would just go ahead and buy it.
As far as browser plugins for user agent switching, the plugin I use in FF is
http://chrispederick.com/work/user-agent-switcher/
and Chrome is
These are not spidering programs per se; they change what user agent your browser shows to web servers, so you can see what the servers show to that agent. You can also use this to emulate an iPhone and see what your mobile site looks like for testing.
It may or may not be that these other sites are hacked. I am just saying, if you want to see how Google sees these sites, it may be instructive in figuring out your issue.
-
RE: What is Happening to Me?!
FWAP! (sound of virtual slap)
Wow, this does sound like a nightmare. Are you seeing changes in traffic? I would wait a few days to see if it sticks. It seems that there is a disavow in your future, along with a reconsideration. I would document, document, document, as this will make both the disavow and reconsideration more effective, so that when your stuff is reviewed Google can dig into it.
Have you tried Screaming Frog on any of the domains? One key point is to set the user agent to Googlebot - the sites may show something different to bots than to people.
I had a site hacked once. Screaming Frog picked up the links, but you would not see them when you went on the site. I changed the user agent in my browser (using a plug-in) and boom, they were there. So the site was not only hacked, but was effectively cloaking. It may be what happened to these other sites and why Google sees them and you don't. Luckily, in my case we caught it pretty fast and got rid of it. One other point: we were load balancing servers on this site and only one of the servers was infected. So even as a bot, you only saw the links sometimes. It made it trickier to find, but the point being, you may want to check these sites a couple of different times as well.
-
RE: Massive URL blockage by robots.txt
Even though there are fewer pages indexed compared to those that are blocked, you still have a significant increase in indexed pages as well. That is a good thing! You technically have more pages indexed than before. It looks like you possibly relaunched the site or something? More pages blocked could be an indexing problem, or it might be a good thing - it all depends on which pages are being blocked.
If you relaunched the site and used this great new whiz-bang CMS that created an online catalog that gave your users 54 ways to sort your product catalog, then the number of "pages" could increase with each sort. Just imagine: sort your widgets by color, or by size, or by price, or by price and size, or by size and color, or by color and price - you get the idea. Very quickly you have a bunch of duplicate versions of a single page. If your SEO was on his or her toes, they would account for this using a canonical approach, or possibly a meta noindex, or changing the robots.txt, etc. That would be good, as you are not going to confuse Google with all the different versions of the same page.
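As a sketch of the canonical approach for those sorted versions, every sort variation would carry a canonical tag pointing back to the one "real" page (the URLs here are made up):
<link rel="canonical" href="http://www.example.com/widgets/" />
That way /widgets/?sort=price and /widgets/?sort=color both declare /widgets/ as the original.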
Ultimately, Shailendra has the approach that you need to take. Look in robots.txt, look at the code on your pages. What happened around 5/26/2013? All those things need to be looked at to try and answer your question.
-
RE: How long does Google Analytics store data?
If you search the forums, there are reports from 2009 of 25 months as a minimum; it would not surprise me if they have been expanding it since then, like they do for storage on other products.
I have an account with GA data from August 2008 to current - so that is almost 4 years.
-
RE: How to find internal pages linking to a URL?
Ah yes - it is easy to miss where that tab is - glad you found it!
-
RE: How to find internal pages linking to a URL?
Use a tool like the Screaming Frog SEO Spider. It will show you the link and all the page(s) it is on. If you find nothing, you have a spreadsheet showing that the link is not present. Remind the client that next time, if they can possibly get a screenshot, that would really help.
-
RE: Should all pages on a site be included in either your sitemap or robots.txt?
I think Ron's point was that if you have a bunch of duplicates, the dups are not "real" pages, if you are only counting "real" pages. Therefore, if Google indexes your "real" pages and the dup versions of them, you can have more pages indexed than you have "real" pages. The issue then is that you have duplicate versions of the same page in Google's index, so which one will rank for a given key term? You could be competing against yourself. That is why it is so important that you deal with crawl issues.
-
RE: Should all pages on a site be included in either your sitemap or robots.txt?
You want to have as many pages in the index as possible, as long as they are high-quality pages with original content - if you publish quality original articles on a regular basis, you want to have all those pages indexed. Yes, from a practical perspective you may only be able to focus on tweaking the SEO on a portion of them, but if you have good SEO processes in place as you produce those pages, they will rank long term for a broad range of terms and bring traffic.
If you have 20,000 pages because you have an online catalog with 345 different ways to sort the same set of page results, or if you have keyword search URLs, printer-friendly version pages, or shopping cart pages, you do not want those indexed. These pages are typically low-quality/thin-content pages and/or duplicates, and those do you no favors. You would want to use the noindex meta tag or a canonical where appropriate. The reality is that out of the 20,000 pages, there is probably only a subset that are the "originals," and so you don't want to waste Google's time crawling the rest.
A good concept here to look up is Crawl Budget or Crawl Optimization
http://searchengineland.com/how-i-think-crawl-budget-works-sort-of-59768
-
RE: Sticker Shock!!
Something else I would suggest: UserTesting.com. You set up scenarios for users to walk through your site and can specify the age, sex, location, and income of the users. You get back video with commentary on your site. You can do 3 users for less than $200, and you get some really useful information that will really help bolster whatever input you get from the conversion companies. Think of it as a way to put together a quick focus group, and you often get results back in less than 1 hour, so it is fast, too.
-
RE: Google is indexing blocked content in robots.txt
This will sound backwards but it works.
-
Add the meta noindex tag to all pages you want out of the index.
-
Take those same pages out of the robots.txt and allow them to be crawled.
The meta noindex tells Google to remove the page from the index. It is preferred over using robots.txt
http://moz.com/learn/seo/robotstxt
The robots.txt blocks Google from crawling the page, but things can still show up in the index if there are other pages linking to the page you are trying to remove.
http://www.youtube.com/watch?v=KBdEwpRQRD0
You have to allow Google to crawl the pages (by taking them out of the robots.txt) so it can read the noindex meta tags that then tell Google to take them out of the index.
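For reference, the tag in step 1 goes in the head of each page you want removed and looks like this:
<meta name="robots" content="noindex">
You can use content="noindex, follow" if you still want Google to follow the links on the page while dropping the page itself from the index.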
-
-
RE: Any Legit Local Address Services Out There?
But if he is not trying to get a Google Places listing, and is just trying to establish that they are a US-based business for Google.com organic listings, then I think that would be an appropriate use of a mailbox service in the US. He is trying to show Google that they do service the US via an ecommerce website.
I have used Earth Class Mail (earthclassmail.com), as we are a virtual company in the US. Everyone works out of home offices, but we deliver services via the web. ECM can give you a physical address (not a PO box) and scan all your mail in, and then you can access it online. If you need the actual copy, you just pay the forwarding fees. It has worked great for us.
-
GWT Error for RSS Feed
Hello there!
I have a new RSS feed that I submitted to GWT. The feed validates no problemo on http://validator.w3.org/feed/ and also, when I test the feed in GWT, it comes back A-OK and finds all the content with "No errors found".
I recently got an issue with GWT not being able to read the RSS feed, with an error on line 697: "We were unable to read your Sitemap. It may contain an entry we are unable to recognize. Please validate your Sitemap before resubmitting."
I am assuming this is an intermittent issue; possibly we had a server issue on the site last night, etc. I am checking with my developer this morning.
Wanted to see if anyone else had this issue, if it resolved itself, etc.
Thanks!
-
RE: About Us in the Footer
If the links are relevant to users to navigate the site, how is that any different than the other links you show there in the footer?
I think your concern stems from when someone buys a link on another site and it is put in the footer across the site. That is a different situation and generally a no-no. That has to do with external links.
-
RE: Canonical URL, cornerstone page and categories
Thanks RFK.
I would not use the canonical in this way for two reasons.
- It is an improper use of the canonical link. Google may ignore the canonical directive when it is used improperly, and it actually gives a similar example on the webmaster blog of this being an incorrect use:
http://googlewebmastercentral.blogspot.com/2013/04/5-common-mistakes-with-relcanonical.html
"Remember that the canonical designation also implies the preferred display URL. Avoid adding a rel=canonical from a category or landing page to a featured article."
- If you were to use the canonical in this way and Google follows it, you are eliminating all of your blog posts from the index. This would result in lost traffic from long tail searches and ultimately less traffic to your site.
-
RE: Canonical URL, cornerstone page and categories
Just to add to redfishking's point - you can only canonical one page to one other page. You mention using a canonical link from a "category archive page to the individual posts"; that will not work.
-
RE: Uhhh... How do I log in?
Also, only logged-in users can post in the Q&A. That can be an indicator too!
-
RE: Multiple Local Schemas Per Page
I help run a directory site and we have city pages with multiple listings. We don't mark up that page, but we have a landing page for each location. The landing page for a location is what we mark up that way.
If you look on sites like Yelp they do the same thing.
On the Dallas city page there is no schema markup, but if you visit a local restaurant page, the schema markup shows up, e.g.
http://www.yelp.com/biz/eddie-vs-prime-seafood-dallas
I was looking through the schema.org documentation
http://schema.org/docs/gs.html
From the section on using the url property: "Some web pages are about a specific item. For example, you may have a web page about a single person, which you could mark up using the Person item type. Other pages have a collection of items described on them. For example, your company site could have a page listing employees, with a link to a profile page for each person. For pages like this with a collection of items, you should mark up each item separately (in this case as a series of Persons) and add the url property to the link to the corresponding page for each item, like this:"
_[itemprop="url">Alice Jones](alice.html) [itemprop="url">Bob Smith](bob.html)_
So in the example on schema.org, you can tag the location with the item type and then use the url property to point to the actual page that has the information.
I then looked at CitySearch and did see an example of this:
http://dallas.citysearch.com/find/section/dallas/restaurants.html
and
http://dallas.citysearch.com/profile/34327220/lewisville_tx/mama_s_daughters_diner.html
If you look at the code on the Dallas restaurant pages, they use ItemList markup:
itemscope itemtype="http://schema.org/ItemList">
That declares a list of breakfast restaurants in Dallas, and then for each place on that page they will mark up
itemtype="http://schema.org/LocalBusiness" itemprop="itemListElement">
and
itemscope itemtype="http://schema.org/AggregateRating" itemprop="aggregateRating">
and then they reference the URL to the location page (as suggested above in the schema.org documentation)
Check with your developer, but it looks like if you define the list of locations first, the spiders can see that all of the locations are a part of that list (vs. the page being dedicated to a single location); then, where you have the link to the landing page for the location, you can do the full markup, e.g.
http://dallas.citysearch.com/profile/41040743/dallas_tx/breadwinners_cafe_bakery.html
Good luck!
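Putting those pieces together, a rough sketch of a list page could look like this (the div elements, URL, and name are placeholders - check the rendered source on CitySearch for the real thing):
<div itemscope itemtype="http://schema.org/ItemList">
  <div itemscope itemtype="http://schema.org/LocalBusiness" itemprop="itemListElement">
    <a itemprop="url" href="/profile/12345/dallas_tx/example_diner.html">
      <span itemprop="name">Example Diner</span>
    </a>
  </div>
  <!-- repeat one block per location; the full markup lives on each profile page -->
</div>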
-
RE: Should I noindex, nofollow a lot of child pages?
I would say leave them in there as you want Google to see everything on your site that might be relevant to the user.
I think you are referring to what some call Crawl Optimization or Crawl Budget (great article here http://www.blindfiveyearold.com/crawl-optimization) and yes there is something to making sure that you do not waste Google's time in crawling pages that do not matter.
I would still think that product pages are worth Google's time if you have good content, and also, these are the things that you sell. It seems that Google would not only want to see your category pages, but also what items are in a category. One thing to note: all of those "red box" product pages link up to the "red box" category page. That is part of what makes the "red box" category authoritative within your site, as you are telling Google this with your internal link structure. You may find that if you noindex your product pages, your category pages may go down.
The use of noindex/nofollow to help with Crawl Optimization is really more for pages like search pages, or pages that can be re-sorted 100 different ways with 100 different URLs. Those are all duplicates and waste Google's time. Your product pages are really different animals, and so my vote would be to keep them in the crawl.
-
RE: Should I noindex, nofollow a lot of child pages?
I would think you would want Google to find your product pages and then get you traffic for them. I don't think the solution is to use noindex, as that would take them out of the index for sure.
I am betting that your site architecture, how you have your sitemap set up, or possibly thin content on all the product pages is more of the issue.
If you don't want to work on any of those things, sure you can noindex all of your product pages, but then it just seems like you are giving up and limiting your long term outlook for ranking pages in Google.
The only reason I would use the noindex in a case like yours would be to keep duplicate product or category pages out of the index. Additionally, I would use it to keep Google out of any of your search result pages, shopping cart, etc. Those are the pages that are wasting Google's time. That brings up another point: are you having Google crawl a bunch of duplicate content on your site, and is that why it never gets to the "good" content pages?
Good luck!
-
RE: Is it possible to export Inbound Links in a CSV file categorized by Linking Root Domains ?
You are correct. Excel should be able to match on that example or one similar to it. You may need to do some reading within the Microsoft help pages to see how it handles wildcard matching (for example, I don't know if you need the asterisk in your query). Just mess around with the function and you should be able to figure it out.
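As a starting point, COUNTIF does support the * and ? wildcards, so something along these lines should count the cells containing a given root domain (the column range and domain are placeholders):
=COUNTIF(A2:A100,"*example.com*")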
-
RE: Ranking UK company in Google.com
Well, then, ignore what I said. It did sound like you had a .com and a .co.uk. Google likes to keep sites in a given country.
I would suggest you look at this Moz post
http://moz.com/community/q/what-is-the-best-way-to-rank-well-on-both-google-co-uk-google-com
-
RE: Ranking UK company in Google.com
Basic level, you should go into Google Webmaster Tools and make sure that your Geographic Target is set for the US on the .com. Otherwise, all Google has to go on is your UK physical address. You may also want to set the UK domain to the UK geographic location in GWT.
Another question, is the US site just a duplicate of the UK site as far as content? It may be that Google sees both, but since the UK site is the original then it focuses on the UK site.
I would agree that a US address (and not a PO box) would help, and you can get an actual street address by using services such as Earth Class Mail. https://www.earthclassmail.com/ They have various street addresses you can use; they then scan everything in, and you can log in and see your mail. If you need something ultimately delivered, they can forward that too. It is a pretty cool service and we have been using it for over a year. I work for a virtual company (we have folks all over the US) and we use this; it is great, as we used to have to forward mail and it would take forever. Now we can log in online and access it almost immediately.
-
RE: Are Their Any SEO Dangers When Cleaning Up a Site
I think it depends on what pages you are removing.
If you have old blog posts that still get traffic and have links to them, you may want to reconsider keeping them or how you redirect them. You should be able to look in your Analytics and see the amount of traffic they get, and also look in OSE to see what kinds of links they have pointing to them.
We are doing something similar on one of the sites I manage and I was able to look at these factors objectively vs subjectively and make some good decisions. When we looked at old content (content prior to 2005) and looked at the factors above it was pretty interesting. There was one page that accounted for 65% of all pageviews out of several hundred articles in that time period. We obviously treated that page with greater care than the rest of the pages.
You will probably find a similar pattern - the 80/20 rule. There will be about 20% of your pages that you really should keep or be very careful about where you redirect, and the rest you can just let 404 or 301 to a directory level higher.
When you look at it this way, you can account for any SEO "equity" that a page might have and then know what the impact would be. Back to my example above: even if I deleted all of those old pages (including the one page that accounted for the majority of traffic to that group of pages), all of those pages combined only accounted for 2% of my total pageviews. So, even if I screwed something up when I moved this group, I know the impact was minimal and I can sleep a little better at night.
-
RE: Is it possible to export Inbound Links in a CSV file categorized by Linking Root Domains ?
There are logic functions built into Excel that allow you to COUNT, COUNTIF, or COUNTA:
http://office.microsoft.com/en-us/excel-help/countif-function-HP010069840.aspx
If you want to count the total number of domains, you can set up a function to count non-blank cells:
http://office.microsoft.com/en-us/excel-help/count-nonblank-cells-HP003056101.aspx
=COUNTA(A2:A6)
This would count non-blank cells from A2 through A6.
You can also use this to count if you have a value that is greater than a given number. Say you wanted to count all the linking domains with a DA greater than 20:
=COUNTIF(B2:B7,">20")
You get the idea. Note that you may need to format your number cells as Numbers to get this to work.
You should be able to use one of the logic functions above in Excel to get you what you need.
-
RE: One Page Guide vs. Multiple Individual Pages
Generally speaking, you don't want thin content. Don't do it. Take each of the 8 topics and write to the fullest. You may be surprised how much you can write on a given topic (vs. just 150 words). Hire a freelance writer on oDesk to help if you need to. You may then want to have an overview page that talks about the 8 sections and links to them so that people (and bots) can see how things are organized.
-
RE: Too many broken links which i am unable to understand
I just ran a spider on your site and saw no 404s from that crawl, so there were no links on your site pointing to those URLs. Also, like you said, those URLs are not in the Google index. I checked your sitemap. You reference a zipped sitemap in your robots.txt (http://www.marketing91.com/sitemap.xml.gz vs. http://www.marketing91.com/sitemap.xml), but I am not sure that this makes a difference.
You need to trace back to when you saw the errors showing up and see if you did anything to your site at the time. Did all of this show up when you updated themes or made some other change? It may be that you used to have these URLs on the old version of the site, and when you made changes the pages went away, and now Google is still trying to crawl them. In that case, it may not be the 404 errors that are causing the problem, but something else from when you updated.
It may be that you need to set up some "catch-all" 301 redirects so that when garbage gets added to the end of your URLs, they 301 to the correct page. That would help clean this up, but I am not sure this is the reason you are losing traffic from Google.
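If the site is on Apache, a catch-all can be as simple as a RedirectMatch pattern in .htaccess (the paths here are made up; test carefully so you do not redirect pages you want to keep):
RedirectMatch 301 ^/old-section/.* http://www.example.com/new-section/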
Honestly, these are just guesses and so I hope that they may trigger something else for you to check.
Good luck.
-
RE: How to Destroy Old 404 Pages
You have a few options here. Option A is if you are going to build a site that will have similar topic-based content to the old one, and you want to carry a larger portion of the domain authority from the old site to the new one.
-
Pull those 404 errors from GWT into a spreadsheet. This gives you a corpus of links to work with.
-
Go into Bing WT - they have a way to browse what they have and have had indexed. What is nice here is that Bing will tell you which URLs (even old 404s) have links to them.
-
Run your links through Open Site Explorer. You can then also get linking data, plus FB and Twitter data, in addition to OSE data on the old URLs.
-
If need be, run the more important dead URLs through the Wayback Machine (http://archive.org/web/web.php); you can then even see what the actual content was on the old URLs.
-
After doing all of this, pretty quickly you should be able to see if there were any authority pages on the site that have now expired, and you will also know what those pages were about via the Wayback Machine.
-
On the authority pages, create new pages on the new site that have to do with the same topic, i.e. semantically related to the old page.
-
301 the old authority pages to the new authority pages (see the example redirect after this list).
-
The rest of the URLs you can just let 404. They will continue to 404 several times until Google drops them. I would leave them in GWT, as over time they should drop out as Google starts to ignore those pages; this may take a few months. You can then just check GWT for any new 404s that might show up from the new site that you need to deal with.
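Those 301s of old authority pages to new ones can be simple one-liners if you are on Apache (the paths and domain here are placeholders):
Redirect 301 /old-authority-page/ http://www.example.com/new-authority-page/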
One thing to note on all of this. You may have to let the old sitemap 404 vs redirecting the sitemap.
http://moz.com/blog/how-to-fix-crawl-errors-in-google-webmaster-tools
"One frustrating thing that Google does is it will continually crawl old sitemaps that you have since deleted to check that the sitemap and URLs are in fact dead. If you have an old sitemap that you have removed from Webmaster Tools, and you don’t want being crawled, make sure you let that sitemap 404 and that you are not redirecting the sitemap to your current sitemap."
If you delete the 404s from GWT, the next time Google spiders the old pages they will just show up again - up to you then.
Option B - if you don't care about the old pages, just let them 404 as mentioned above, but be aware of the issue with old sitemaps. You can check the Google index for old URLs in the SERPs, or look in GWT at the data on your Search Traffic. Make sure that the old URLs are not showing up under your Search Queries.
-
-
RE: Website ROI for organic SEO
Forget the industry standards. This data should be in your client's analytics. They should be able to see the % of traffic and conversion rates for organic vs. paid vs. direct. I would bet that organic converts the best, and if that is true and you can increase the proportion of traffic from organic, then you have a winning solution for them that is specific to their site and industry.
-
RE: Creating 20+ websites with links back to central site
Creating subfolders is better, as you do not have to worry about your 20 websites looking like some sort of link farm. Plus, you build the overall brand with the main website. That said, you do not want 20 identical pages for the locations on the main website. You would want to have unique and original information on each location page about that location: who works there, what services they provide, etc.
If you want to give the client more control, why not set up each of the location pages so that a location could log in and update the information? It would be just like how you can update your Google+ Local profile; you could even set up a login, etc.
That said, if you give the client control of the listing/page/website, then you run into the issue that the client will often do a poor job of providing good information and/or will mess up your SEO if you are trying to get those pages ranked.
I would suggest a hybrid solution where you set up the pages for each location, even interviewing each location and gathering up the information that is needed to really make those location pages information-rich. You can then take input from that location and build your pages with that information. If there are some small edits or updates that a location needs, you can make those updates (or not), as you would still maintain editorial and SEO control.
I have managed a site with thousands of locations, and we found that use of the folders worked really well. We actually gave users access to update location profiles, but often they would put in information that was, frankly, poorly written. Setting aside all the SEO points, some of these "self-edited" location profiles did not make me want to visit that location, the copy was so poor. It was not until we took more control of the content on location pages that we were able to get a good balance between original content from the locations and a well-written page with an eye to SEO.
-
RE: Login required pages that redirect back to the post
Nofollow the links to all login pages and put the noindex meta tag on all login pages. Just keep those login pages, etc., out of the index.
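As a quick sketch, that is a rel="nofollow" on the links and a meta robots tag on the login page itself (the URL is a placeholder):
<a href="http://www.example.com/login" rel="nofollow">Login</a>
<meta name="robots" content="noindex, nofollow">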
-
RE: Major URL changes in new site launch
One more - did you update the on-page content to reflect the new URL structure? Is any content linking to old URLs?
-
RE: Major URL changes in new site launch
When did you launch the new site? For all 300 old pages did you redirect to the 300 new pages or did you just redirect everything to the home page? On those redirects, did you redirect to the same/equivalent page on the new site or different pages? Are the new URLs cached in Google SERPs or do you see the old URLs? Are you still seeing old URLs in your GWT reports for impressions and clicks? Did you update your sitemap with new URLs? Are you sure you got all the old pages? Often CMSs will generate all kinds of versions of a URL using parameters and so you will need your 301s to account for that.
Not sure if that is an answer, but I am just curious.
-
RE: How does google recognize original content?
(offers napkin to EGOL to wipe up coffee spittle)
-
RE: Can I 301 re-direct a page to regain the authority from a penguin penalized page
Seems like you would pass along the positive and negative link equity from that page to your home page.
My gut says that you would not want to pass a penalty of any type from one page to another.
Separate point here: the 301 redirect passing equity is really only effective if there is a semantic relationship between the two pages. In other words, if you have a page that ranks for red widgets and you 301 it to a page that is all about green bogies, you are not going to get much benefit. Google also does not like a many-to-one relationship with 301s, and so you want to try and minimize that anyway.
http://moz.com/blog/save-your-website-with-redirects
"The new page doesn’t have to be a perfect match for the 301 to pass equity, but problems arise when webmasters use the 301 to redirect visitors to non-relevant pages. The further away you get from semantically relevant content, the less likely your redirect will pass maximum link equity."
This is also a nice article here
http://miamiwebcompany.com/blog/study-website-recover-from-a-bad-301-redirect
So even if there were no penalty, if the pages are not semantically relevant, then it is really not worth it.
-
RE: Site Crawler Tool by the Company Formerly Known As SEOMoz
This is still there. Look under your Campaigns (button on the top right) and select the site/campaign you want to review from the list. Once within a campaign, click on "Crawl Diagnostics" in the subnav. I like to export to CSV to look through things - but this is all there.
-
RE: Search Results Showing Additional info/Links
There are various snippets that Google may add below an organic listing. Some examples include Sitelinks, reviews, breadcrumbs, location data, authorship information etc.
You can go into your Google Webmaster Tools console and see what sitelinks Google has selected and even decide that you want to remove some of them. The sitelinks are chosen by Google and are a function of your site structure and authority of pages.
Other data that can be shown uses rich snippets:
https://support.google.com/webmasters/answer/2722261?hl=en
This can also include product information
https://support.google.com/webmasters/answer/146750?hl=en
If you have this type of data on your site, it is a no-brainer to mark it up where appropriate.
Generally I would follow the standards in Schema.org, but Google also has a markup helper tool
-
RE: Overall Campaign Statistics
You bet - feel free to mark my response as a Good Answer as well and good luck!
-
RE: Overall Campaign Statistics
There is a great whiteboard friday on this here
http://moz.com/blog/fixing-the-broken-culture-of-seo-metrics-whiteboard-friday
IMHO, I would not focus on rankings per se as a main metric. Rankings are just a symptom of, or factor in, what your actual goal is. The goal is not just to rank better; it is not just to get the client more traffic; it is to help the client make more money. If the client makes more money, generally you should too. The other thing about ranking is that ranking is now all personalized, and so it will vary from user to user. It is really only useful as a relative measure. Another way to look at this is: who cares if you are #1 for a search if you don't get much organic traffic from that keyword and/or the conversion rate is abysmal?
I would set up a spreadsheet that has a sheet/tab per client. For each client tab you would put in key metrics on a monthly basis. I would look at Organic Traffic (# of Unique Visitors), then look at # of Conversions from Organic, average value of Conversions from Organic, and then the Total Value of all conversions from Organic. Most clients, if they have analytics set up correctly (or you can help them), should have this data readily available; at a minimum you should be able to look at Organic traffic and then just report revenue improvements for those that report on that. You should then create a master tab that aggregates all of this data into a master table.
You should then be able to look month over month for % improvement on a client-by-client basis, in addition to your performance overall. You can then say: we helped our clients achieve an average 20% improvement in organic traffic during a typical 4-month period, resulting in 40,000 new leads and 1.3 million dollars in total revenue! BOOM - sign me up!
Also, you can see how long it takes to get results on average - 3 months, 6 months, etc. You can have those conversations up front with clients: here is the timeline we typically see, etc. You may have a client that has great traffic, but conversions and revenue are down. You are now proactive in talking to clients about landing page improvements, and you continue to show your value.
I once heard a yellow page exec say at a conference (paraphrased), "Most businesses don't care where they rank in Google, they don't care about organic website traffic, they don't even care what their website looks like, or how many friends they have on Facebook, they just want to hear the phone ring. Are you helping the phone ring?"
Setting up your metrics around number of conversions and revenue will make what you are doing tangible to clients and keep them paying you.
Hope this helps!
-
RE: Responsive design (Showing diffrent pages(icons) for Mobile/Tablet users)
That then gets back to the original issue. If all you have access to is the original resolution of the mobile device, that is how the page would render, vs. that of the daisy-chained display.
-
RE: Why are Google Webmaster Tools' Google rankings different to actual Google rankings?
Do a search on Google search personalization. Example
http://en.wikipedia.org/wiki/Google_Personalized_Search
What you see when you search is based on whether you are logged in (or not), your IP, your previous searches, etc. These days there is no single ranking; you are seeing a sort of average. Rankings are more useful as a relative measure vs. an absolute one. Are you moving up or down? Is that causing increased or decreased traffic, and is that traffic converting at a higher or lower percentage? You really need to look at all of that together and use ranking as a way to diagnose any issues with your final conversion numbers.