Latest posts made by PaulKMia
- RE: Stuffing keywords into URLs
This was a good answer and deserves to be labeled as such. I decided not to pursue this since I have been lucky to take the top spot for important key phrases. Thank you for such a well-crafted answer.
- RE: Developing a drop down menu: Do I use javascript or pure css?
Hey Zachary!
It's a good idea to use pure CSS menus when the design and functionality match what you'd get from a JavaScript menu AND the actual menu items are in a clean HTML list (not delivered through the JS). But keep in mind that cross-browser support can be an issue with pure CSS menus; Internet Explorer is a real problem in this regard.
As for SEO, I doubt inline JavaScript that adds functionality to a clean HTML list will make search engines burp. We worried a lot about that five or so years ago because JS used to cause slow load times, but nowadays the Internet is faster, servers are faster, caching is better, etc. All the same, it's good coding practice to keep inline JS out of your HTML.
If you want something fast with cross-browser support, look at jQuery menus. There are free examples all over the place, they're brilliantly fast, and you can list your menu items in a plain HTML list (the important thing).
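A minimal sketch of the pure-CSS approach (class names and links here are illustrative): the menu items sit in a plain HTML list that crawlers can read, and the dropdown behavior comes from a :hover rule rather than JS.

```html
<!-- Menu items live in an ordinary HTML list, visible to search engines -->
<ul class="menu">
  <li>
    <a href="/">Home</a>
    <ul class="submenu">
      <li><a href="/services/">Services</a></li>
      <li><a href="/contact/">Contact</a></li>
    </ul>
  </li>
</ul>

<style>
  .menu .submenu { display: none; }             /* hide submenus by default */
  .menu li:hover > .submenu { display: block; } /* reveal on hover, no JS */
</style>
```

This is also exactly where the IE caveat bites: older IE (6 in particular) only honors :hover on anchor elements, not list items, which is why pure-CSS menus traditionally needed JS shims there.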
Happy coding... - P
- RE: My page has fallen off the face of the earth on Google. What happened?
Hi there Ricky!
Check out the source code on your site. You're declaring one of your structural tags twice (two opening tags, one closing tag), and that tag and other content end up in weird places because of it. The browser renders it okay, but from a code perspective (which I suspect is a bot's perspective), it's like a person with two heads. : ) Is Google's bot reading it right? Note that I just looked at your home page, your services page, and a few others... Hope this helps! - P
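A hypothetical illustration of this kind of duplication (the specific tag names from Ricky's site were stripped from the post above, so this markup is invented):

```html
<html>
<head>
  <title>Example</title>
<head>  <!-- second opening tag, with only one closing tag below -->
  <meta name="description" content="...">
</head>
<body>
  <p>Browsers recover from this, but a parser may mis-scope everything after
     the stray tag: the "two heads" problem described above.</p>
</body>
</html>
```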
- RE: Stuffing keywords into URLs
Hi Egol,
Thank you for your reply. The long folder names are probably from using WordPress, as you pointed out. I found a blog on their subdomain running WordPress.
I have to say that I've enjoyed reading your responses throughout the QA forum because your responses are short and to the point, pithy and no-BS. So, I'm curious about your response to my question. Above you responded "I doubt it" to the question about Google ferreting out keyword stuffed URL paths. Instead of trying to read between the lines, let me ask you, how good of a job is Google doing? How are they falling short?
Kindest regards,
Paul
- RE: Stuffing keywords into URLs
Hi Trevor!
Thank you for your response! I'm VERY new to the concept of canonical issues. As mentioned in my other response, I'm just getting back into the game. How big a role do you think the canonical issue really plays?
Kind regards,
Paul
- RE: Stuffing keywords into URLs
Hi Ryan!!
Man, I'm thrilled to see you responded, and that you responded so thoroughly. I've been reading threads in this QA forum for a few days, and I've come to think of you as a bit of an SEO celebrity! I have to figure out how to filter for questions you've answered! : )
Okay...the site. I've been away from SEO for about eight years and a lot has changed. In the past, I've enjoyed top positions in the SERPs under highly competitive key phrases, even recently (probably because good legacy websites seem to carry weight). Back then, I placed my primary site in directories thinking that people who visited the directories would see my listing and click on it and visit me (as opposed to getting a link to get "juice"). This is probably what has been giving my site good rankings for a while, and the fact that I've never used web-chicanery to outrank others. Over the years, I've seen spammy and trickster sites appear and disappear. I used to rip those sites (the only way to get a global vision of what's going on), and I studied what they did. I've got a curious little archive of living black hat tricks, all of which failed as Google caught on to them.
Now I turn my reflectors back on to what's going on in SEO and what companies and individuals are doing to position themselves in SERPs. I'm saddened to report this, but for all the overhauls, tweaking and tinkering that Google has done since 2001 when I started, spammy sites and sites with poor content, usability, usefulness, and design are still outranking truly useful, high-quality, high-integrity sites.
Very recently, I read complaints by people who felt like their sites had been unfairly affected by the Panda update (http://www.google.com/support/forum/p/Webmasters/thread?tid=76830633df82fd8e&hl=en). I followed the links to "offending" sites (sites people felt ranked higher than theirs for no good reasons), and I went through the code in the complainants' sites as well. Holy cow...many of the complainants have good reason to complain. Shallow, spammy, zero-effort sites are blowing away robust sites with truly useful content. I've NEVER had a sinking feeling in my gut in 10 years that ranking well was a crapshoot - but I got that feeling after studying those complaints.
Years ago I worked in Macromedia Dreamweaver (remember how cool "Macromedia" was?) with regular HTML; nowadays I work in Visual Studio, and I just recently created my first MVC3 site. MVC lets you manipulate every tiny aspect of your site, including the URL path: there is absolutely no relation between the path you see in your browser and the actual path to the files on the server, and you can change the path and name of any page instantly and almost effortlessly. It's GREAT for SEO. So, I've been paying special attention to directory names and page names out there on the Internet. That's when I came across the "themarketinganalysts" site and their unusually high rankings for so many important key phrases. After combing through that site, studying the source code, and checking their rankings across many key phrases, I have to say: regardless of a PA of 53 and keyword variances, the code reminds me of some of the code from spammy trickster sites from the early 90s.
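The decoupling described above isn't specific to MVC3; any framework with a router works the same way. A minimal sketch (the route paths and handler names here are illustrative, not taken from the site in question):

```javascript
// A routing table maps the public URL a visitor (or crawler) sees to an
// internal handler. The URL does not have to correspond to any real folder
// or file on the server, and renaming a public URL is a one-line change.
const routes = {
  "/en/pages/medical-translation/": "medicalHandler",
  "/services/": "servicesHandler",
};

function resolve(path) {
  // Fall back to a 404 handler for unregistered paths
  return routes[path] || "notFoundHandler";
}
```

In ASP.NET MVC3 the same idea lives in the route table registered in Global.asax: the route patterns, not the filesystem, decide what a given URL renders.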
If you hand-code HTML, you develop a certain vision for what the page will look like as you type along, from the mind's eye of a visitor. When you go to a site and the code is packed with keywords and weird uses of elements (like themarketinganalysts' textless use of the H1 tag to render the logo through CSS, an old image-replacement trick), you get the feeling that whoever wrote that code is telling search engines one thing, and visitors something different. It's duplicitous. Oddly enough, I'm not fazed by a company that outranks me (there is enough work for ALL of us), but I want to see healthy optimization, not one story in the code and another on the rendered page.
I'm going to do a more in-depth review of the code, page by page, look for trends and track down the sources that provide PA coefficients (or try to!). I’ll use the Wayback Machine to study the evolution of the site. Off the bat:
Mar 21, 2009 "This website coming soon"
Mar 31, 2009 "PREDICTIVE WEB ANALYTICS" - nothing about translation
May 25, 2009 Starts taking its current form
Odd. This is claimed on the current site: "Since 1989, The MARKETING ANALYSTS has built its Language Translation Services business..." That claim is not supported by what the Wayback Machine shows. Geesh... Did I stumble across enterprise-wide shadiness? Hope not!
I'll come back to you and share my SEO findings.
- Stuffing keywords into URLs
The following site ranks #1 in Google for almost every key phrase in its URL path, for almost every page on the site. Example: themarketinganalysts.com/en/pages/medical-translation-interpretation-pharmaceutical-equipment-specifications-medical-literature-hippa/ The last folder in this URL uses 9 keywords, and I've seen as many as 18 on the same site. Curious: every page is a "default.html" under one of these kinds of folders (why so much architecture?).
Question: How much does stuffing keywords into URL paths affect ranking? If it has an effect, will Google eventually ferret it out and penalize it?