Thanks.
This is what I figured, but I realized that I've never tried it before and I wanted to be 100% sure before potentially noindexing my homepage.
If I've got a page that is being called in an iframe on my homepage, and I don't want that called page to be indexed... so I put a noindex tag on the called page (but not on the homepage). What might that mean for the homepage? Nothing? Will Google, Bing, Yahoo, or anyone else potentially see that as a noindex tag on my homepage?
Clean up (remove) those spammy backlinks. This is VERY important.
Optimize your title tags across the entire site. Use this tool to see what your title tags will look like on Google. Your homepage title tag says "Colorado bernese mountain dogs | puppies | breeders Colorado" - I would put something more along the lines of "Bernese of the Rockies: Colorado Bernese Mountain Dog Breeders". My suggestion/example is slightly longer than what Google will display but since it is your homepage, you want to make sure your brand name is front and center. For the interior pages you wouldn't necessarily need to do that. For the puppies page, instead of "Colorado bernese mountain dog puppies | For Sale Colorado" - I would do something like "Adorable Bernese Mountain Dog Puppies For Sale in Colorado". My example barely fits within what Google will display, it says exactly what the page is about, it's clean, AND it has something a little enticing ('adorable').
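To make that concrete, here's how those two suggestions would sit in each page's head (most WordPress SEO plugins let you set the title per page without touching the templates):

```
<!-- Homepage -->
<title>Bernese of the Rockies: Colorado Bernese Mountain Dog Breeders</title>

<!-- Puppies page -->
<title>Adorable Bernese Mountain Dog Puppies For Sale in Colorado</title>
```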
Fix up your URLs. They are very "keyword stuffy" right now. Your "Education & Tips" page's URL is http://www.berneseoftherockies.com/colorado-bernese-mountain-dogs/ -- it should be something as simple as http://www.berneseoftherockies.com/education or http://www.berneseoftherockies.com/education-and-tips. Editing your URLs will mean 301 redirects are necessary. You're using WordPress, and there's a plugin called "Redirection" that works well. I still prefer editing the htaccess file manually. Some quick Google searches will tell you everything you need to know about 301 redirects.
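To give you a head start, one of those redirects would look roughly like this in your htaccess file (this is a sketch assuming Apache, which most WordPress hosts run):

```
# 301 the old keyword-stuffed URL to the new, cleaner one
Redirect 301 /colorado-bernese-mountain-dogs/ http://www.berneseoftherockies.com/education-and-tips/
```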
Your overall site isn't that bad. You're using H1s and H2s. You have a lot of content. You're inserting images and some video. Do those 3 things above and you'll be much better off. Of course there's more you can do, but check out the Learn SEO link I shared in my previous comment. I can't go on forever.
Hey Chris,
Unfortunately, Google didn't give you any secrets.
"Meta tags" is broad. It doesn't specify what the attribute is. You can have meta keywords, meta descriptions, meta charsets, etc. You can read a bit more about exactly what a "meta tag" is defined as here.
As for the meta keywords tag, Google does not use it at all. Without looking up the exact date, I believe Google announced this in 2008 or 2009. It's been a longggg time since the meta keywords tag was used as a ranking factor. A couple years later, Google announced that the meta description tag was no longer used as a ranking factor. HOWEVER, unlike the meta keywords tag, Google does still look at the meta description tag. It just isn't taken into consideration while ranking pages. Meta descriptions are important for click through rates, but they won't give you a rankings boost.
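To make the distinction concrete, here's what those tags look like in a page's head (the content values are just placeholders):

```
<meta charset="utf-8">
<!-- Shown in SERPs and matters for click-through rate, but not a ranking factor -->
<meta name="description" content="A short summary of the page.">
<!-- Ignored by Google entirely -->
<meta name="keywords" content="keyword1, keyword2">
```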
Why aren't you ranking for anything? Well, at least 90% of the backlinks are spammy. I only took a very brief look, but I saw a bunch of directory links and not much else (maybe nothing else). If you don't have a manual penalty yet, you might be due for one. I would highly suggest getting those spammy backlinks removed.
Assuming all the content on your site is original, it isn't a terrible site. It could use work in some areas, but then again, that can be said for 99.99% of websites. If you really want to learn about what you should be doing to improve your rankings, take a look at the Learn SEO section of Moz.
I've never seen this before so I don't have much to say, but if I ran into this I would think to check for double implementation of the analytics code. Give the source code a look-see and CTRL + F for the UA code. See if it's there twice.
The 'default' setup will be for no redirects to be in place. It's not that something was done incorrectly, but rather, a best practice for SEO was overlooked. It's not something your average Joe, or a webmaster with no SEO experience, is likely going to notice.
In your scenario, you will want to choose one or the other and set up a force-WWW (or non-WWW) 301 redirect. If your site is on a Linux-based platform, that is accomplished with an htaccess file. If you are on a Windows-based server, you will need to use IIS (I think). A simple Google search for "force www redirect windows server" will likely return tons of solutions.
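For the Linux/htaccess route, the standard force-WWW rule looks something like this (example.com is a placeholder; swap in your own domain):

```
RewriteEngine On
# Send any non-WWW request to the WWW version with a 301
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
```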
If you're on WordPress or a similar CMS, it is usually very easy to do from within the backend. If you have no idea what server you're on, or what platform your site is built on, use builtwith.com to find out.
How do you choose between www.example.com or just example.com? You'll want to go with the one that's linked to more often, and then work on sticking with that as often as possible in the future. The majority of the link juice that's pointing towards example.com will be transferred to www.example.com with the 301 redirect, so it's not like those links without WWW will be worthless. They only lose a very tiny amount of their value through the redirect. The good news is, you're fixing a big issue, so it isn't going to get worse for you.
Let me know if you have more questions.
Hey Kimberly,
I would highly suggest checking out the resources on this page: Learn SEO
A good place to start with the Moz Toolset is to click on (from within your campaign) "Search >> On-Page Optimization >> Add & Manage Page Grades". You can enter your main page URLs and their corresponding main keywords. Moz will automatically check a plethora of areas on your page and give you feedback on where you can improve the grade given.
Also check out "Crawl Diagnostics" under the same "Search" menu. There's a lot of data in there about what's potentially harmful to your site. Things like 404 pages, duplicate content, missing title tags, duplicate title tags, overly dynamic URLs, and much more. If you are unsure of what something means, like "overly dynamic URLs" for example, and you've got 90 results for that, you can search Google for help on fixing it. Or ask here. Or you might even stumble into that issue and ways to resolve it in the "Learn SEO" link above.
Good luck!
You could set up 301 redirects from the sold property URLs to another relevant page, like other properties available in the same neighborhood/town/city. Or possibly even to a search results page that contains very similar properties in regards to square footage, bedrooms, baths, etc.
The answer on this page is from someone who tested it with 2 authors: http://webmasters.stackexchange.com/questions/25140/how-to-implement-rel-author-on-a-page-with-multiple-authors
You can test it as well and use the tool (http://www.google.com/webmasters/tools/richsnippets) to see what happens. I don't doubt that you'll end up with the same results -- the 1st instance of rel=author is going to be used.
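For anyone following along, the markup in question is just a link to the author's Google+ profile with rel=author on it (the profile URL here is a placeholder):

```
<!-- Only the first rel=author on the page appears to be used -->
<a href="https://plus.google.com/YOUR_PROFILE_ID" rel="author">Author Name</a>
```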
I've come across this discussion a few times, and it always ends the same... there just isn't (currently) a way to handle multiple authors for 1 page using rel=author. On your forum, is there a "best answer" chosen? Maybe you can assign the respondent that gives the "best answer" the authorship? I'm not sure how Google will feel about that though, as it's not a clear article or blog post that is being assigned authorship. Rather, it's just a response in a discussion. Much like blog comments. Although blog comments aren't exactly like your scenario, it's similar enough, and it would be strange if a blog comment author was set as the author for a whole page.
You can't have more than 1 author per page. If you put 10 rel=author tags on 1 page, I would assume that Google is just going to look at the first in the list. OR, Google may just entirely ignore it. They don't have to show a rich snippet, just because the rel=author code is there. It's up to them whether or not the author image shows in the SERP and if you've got 10 on 1 page, there's a chance Google might do something like this (Warning: F-bomb ahead!): http://www.quickmeme.com/img/78/788d7237ecca60d8d235cb352eab832f472264065a048fcd24adc834f0ca82f0.jpg
Yeah, I know I can do that in Google+ Local... and I know that some other websites also support this. What I want to know is how Moz Local handles this with its '1-click' auto-syndication feature.
Just listing the address and hoping people will call isn't really an option 99% of the time. I don't want to list the guy's home address as the business address if it's going to be public.
Voila!
http://wordpress.org/plugins/wp-socializer/
That one is exactly like socialmediaexaminer.com's
Can you pinpoint the approximate date that the traffic dropped significantly? Look for a Penguin update that's near that time frame using this page: http://moz.com/google-algorithm-change
If you don't see a Penguin update near your drop in traffic, AND you don't have a manual penalty, you might have another issue. Possibly a Panda penalty or just some other site health issue that caused the drop in traffic. Considering the large amount of spammy backlinks you say existed, it does sound like a Penguin penalty is likely, but it can't hurt to check that Google Algorithm Change history.
If the spammy backlinks are ALL gone now, then you're right... there's nothing you can do in regards to disavowing or manual removal. If you do have a manual penalty that was given due to unnatural inbound links, you can submit a reconsideration request and let them know that you didn't build the backlinks and they all disappeared. Let Google know you plan to continue to monitor your backlink profile and take immediate action against future negative backlinks that are found.
There's a chance I'm wrong about this. Maybe EVERY site shows that "update" line, regardless of whether or not an https version was found by Google....
Try a "site:" search on Google for both variations.
edit: "if you have a website on HTTPS, or if some content is indexed under different subdomains." You'll see the "update" line if you've got multiple sub-domains as well. So that's likely what's happening in your scenario.
I'm wondering how Moz Local will handle a business that doesn't want to publicly display an address. A handyman, landscaper, or locksmith might want to do something like this. Is it easily handled or do you have to manually try to "work around" to hide the address on sites that let you?
Thanks!
That "update" line you posted a screen shot of means you have an https version of your website. Are you sure you've got the right version verified in Webmaster Tools? If you've verified http but not https, or vice versa, verify the other one. You might be able to see the backlinks in Webmaster Tools on the other version.
You can read more about this recent Google Webmaster Tools update here: http://searchenginewatch.com/article/2337524/Google-Webmaster-Tools-Gives-More-Precise-Index-Status-Data
Update us with what you find once you look into that a bit!
Hey Robert!
There are a few tools you can use to get a good idea of when an external link was created.
Among those 4, you should be able to find the link you want. Webmaster Tools is free but will be limited. The others show you some data for free, but require a monthly subscription for all the details.
Hey BloggerGuy!
Your reference to "WP adapting theme" is actually called "responsive design." That is definitely a great way to go in regards to mobile. I could go on forever but this great write up (and accompanying video by Matt Cutts) already does a great job at explaining why responsive design is a good choice.
http://searchenginewatch.com/article/2308069/Googles-Matt-Cutts-Responsive-Design-Wont-Hurt-Your-SEO
Let me know if you have any specific questions after watching that video, and reading through SEW's take on it.
On the /media-coverage/ page, the header says "Media Coverages Archives" -- I imagine there's a way to edit this headline? I checked archives.php in the theme, but it doesn't look like it's pulling from there. If you aren't sure, I can see if the plugin creator can help out!
I am trying to add "Media Coverage" as a custom post type, and it asks for singular and plural... well, singular is the same as plural ('Media Coverages' sounds ridiculous), but it's telling me that they MUST be different from each other. Ever run into this problem and find a way around it?
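For anyone who hits the same wall: if you register the post type manually in your theme's functions.php instead of through a plugin, nothing in WordPress itself forces the two labels to differ. A minimal sketch (the names here are just examples):

```
<?php
// Register a "Media Coverage" post type with matching singular/plural labels
function my_register_media_coverage() {
    register_post_type( 'media_coverage', array(
        'labels' => array(
            'name'          => 'Media Coverage', // plural label
            'singular_name' => 'Media Coverage', // identical text is fine here
        ),
        'public'      => true,
        'has_archive' => true,
        'rewrite'     => array( 'slug' => 'media-coverage' ), // /media-coverage/ URLs
    ) );
}
add_action( 'init', 'my_register_media_coverage' );
```

Depending on the theme, that 'name' label is usually what feeds the "... Archives" heading too, which would take care of the headline question above.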
I had a feeling that post types might be my solution... I've been putting off learning about them for too long. Today is my day!
I'll look into that and let you know if I end up with any follow-up questions.
I'm curious what some of your thoughts are on the best way to handle the separation of blog posts from press releases from media coverage. With one WordPress installation, we're obviously using Posts for these types of content.
It seems obvious to put press releases into a "press release" category and media coverage into a "media coverage" category.... but then what about blog posts? We could put blog posts into a "blog" category, but I hate that. And what about actual blog categories? I tried making sub-categories for the blog category which seemed like it was going to work, until the breadcrumbs looked all crazy.
This just doesn't seem very clean and I feel like there has to be a better solution to this. What about post types? I've never really worked with them. Is that the solution to my woes?
All suggestions are welcome!
EDIT: I should add that we would like the URL to contain /blog/ for blog posts, /media-coverage/ for media coverage, and /press-releases/ for press releases. For blog posts, we don't want the sub-category to be in the URL.
I don't work for Moz so I can't help answer... But I just wanted to point out that my OSE looks very different from yours. I am also able to see everything for www.wpbf.com.
If you feel like you have done everything within your power to try and get the links removed, but there's just no way, then you should disavow the URL or domain. You should attempt to reach the owners of the domain 2-3 times before giving up. During my link removals, there have been a decent number of webmasters that finally responded on my 2nd or 3rd attempt.
As for disavowing a URL vs. a domain: if the entire domain is something you'd never want a link from, disavow the entire domain. Even if you only have ONE link from the entire site, still disavow the domain. Only disavow the individual URL if you think the site in general is good quality but your link happens to be on 1 particular spammy page for some reason.
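As a syntax refresher, both cases go in the same disavow file; lines beginning with domain: cover the whole site (these example URLs are placeholders):

```
# One specific spammy page on an otherwise decent site
http://example.com/some-spammy-page.html

# An entire domain you'd never want a link from
domain:spammydirectory.com
```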
For some reason, Google has decided that the interior page is more relevant for the query. There are many reasons this might happen...
Go to google.com and do a "site:" search for your domain. site:example.com -- Is your homepage ranking #1 on the SERP? If not, your homepage may not be indexed or might have a manual penalty imposed. Are you 100% sure the homepage is indexed? Do you have Webmaster Tools? Make sure everything is all square in there.
What do you mean by the subpage's PR is 28? Do you mean PA? Is this interior page very relevant for your main keyword or is it just sort of (loosely) relevant?
The Google Disavow tool doesn't work like that. It won't actually remove links from any pages. It is basically just a signal to Google that you want those links to be nofollowed. Ahrefs would have no clue if something has been disavowed or not.
"Each page with 300 unique words will be fine in Google's eyes?"
If you have 300 words on each page, as long as it's useful content that people are sticking around to read, then you should be okay. Your end goal should be to provide value to your visitors. If 300 words is plenty of content for the subject of your pages, then you're okay. If you have a blog about quantum physics and you only write 300 words per page... you might not be so okay anymore.
"After the text is removed, is there any chance to recover from Panda?" If your site is penalized by Panda, and you make adjustments to fix the issues you were once penalized for, yes, you can certainly recover. It's possible that duplicate content isn't your only issue, and there may be more to fix. Again, this is assuming you're penalized by Panda. I found a really good post about Panda recovery a couple weeks ago. Lucky for you, I bookmarked it! http://www.ventureharbour.com/panda-recovery-a-guide-to-recovering-googles-panda-update/
"What about page title and page meta description?" I wouldn't personally write my titles and meta descriptions like that. It is probably a good idea to vary them up and make them a bit more unique from one another. If I'm being totally honest, I think your example title tags might work for Google. That would be up to you, though, if you're willing to take that chance. If everything else on your site is fantastic, and your only issue is those types of title tags, I really don't think Google would give you a problem. Either way, the best thing to do (obviously) is make them more unique. I'm not a personal fan of them being too similar, but I have seen it done like that on a site before and the pages ranked just fine (they were pretty low competition keywords though). Edit: This is the only question I'm not that sure about... your examples might be okay, but I don't want to give you bad advice.
This is my second question on Moz and you answered both of them.
Hooray! I hope I'm helping you out. I've made it a goal of mine to make it to the top 50 in Moz Points before the end of 2014.
First thing that comes to mind is maybe the site had a lot of site-wide links before. If it had 5,000 or 10,000 links coming from 1 single domain and that website went down, that would be a huge loss of referring pages in a short amount of time. Maybe they were in web directories and asked to be removed? While simultaneously attempting to build some high quality backlinks from more referring pages?
It's all speculation of course, but plausible.
Curious... was your site a part of the MyBlogGuest.com network? They recently got hit hard by Google.
Here's a recent Tweet from Matt Cutts stating that sites posting guest posts can receive manual penalties, not just sites that receive links from guest posts: https://twitter.com/mattcutts/statuses/446438659689316353
Your site does seem like pretty good quality, but the sole purpose of it appears to be for guest blogging opportunities. Someone manually reviewed it and decided it was penalty worthy... To be reconsidered you might need to either A) remove all the links or B) nofollow all the links. I'm not 100% sure if nofollowing is enough. You'll probably also want to start posting a lot more content that isn't guest blogs. You might be already doing that (I didn't look around for too long). Good luck, Aaron.
The answer is a big, fat, juicy, YES. That is the epitome of duplicate content.
You need to write content that is completely unique from the other page. You cannot trick Google. The Panda will bite you hard.
I wouldn't recommend hiding the date because you don't want users to know that the content is old. What about when you publish something fresh and someone lands on the page but they can't find a date? They won't know how up to date that information is. I think a lot of people look for dates on blog posts, and rightfully so. They want to see that they're getting good information. You're right, if something is 2+ years old they might look for something more up to date. But you can update old blog posts and re-date them. Add something new to it, make some changes, and update the date.
Imagine an SEO strategy blog that didn't date the posts. You would be doing your visitors a complete disservice by hiding the date. You might have a post all about article directory submissions and they won't see that it's from 2008. That's not enhancing user experience, and people won't be happy with you.
Old content won't always be a bad thing. Read #4, "Burstiness," on this blog post: http://www.seobythesea.com/2014/03/incomplete-google-ranking-signals-1/
It's really interesting and a great read about how older content will sometimes receive a boost in rankings over fresh content.
EDIT: I'd like to add that it's completely okay to hide the date in some circumstances. You might have some sort of evergreen content that truly will stand the test of time, where the info may never, or only rarely, change. For instance, a blog post about how to improve your basketball shot: who cares if the post is from 2006? In that case, hiding the date isn't going to reduce the overall user experience.
Sounds like you should actually be using rel=next and rel=prev.
More info here: http://googlewebmastercentral.blogspot.com/2011/09/pagination-with-relnext-and-relprev.html
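The short version: each page in the paginated series points at its neighbors from the head. On page 2 of a 3-page article, for example (URLs are placeholders):

```
<link rel="prev" href="http://www.example.com/article?page=1">
<link rel="next" href="http://www.example.com/article?page=3">
```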
Good find. I've never seen this part of the help section. The recurring reasoning behind all of their examples seems to be: "You don't need to manually remove URLs; they will drop out naturally over time."
I have never had an issue, nor have I ever heard of anyone having an issue, removing URLs with the Removal Tool. I guess if you don't feel safe doing it, you can wait for Google's crawler to catch up, although it could take over a month. If you're comfortable waiting it out, have no reason to rush it, AND feel like playing it super safe... you can disregard everything I've said.
We all learn something new every day!
Yes. It will remove /page-52 and EVERYTHING that exists in /oahu/honolulu/metro/waikiki-condos/. It will also remove everything that exists in /page-52/ (if anything). It trickles down as far as the folders in that directory will go.
Go to Google search and type this in: site:honoluluhi5.com/oahu/honolulu/metro/waikiki-condos/
That will show you everything that's going to be removed from the index.
Yep, you got it.
You can think of it exactly like Windows folders, if that helps you stay focused. If you have C:\Website\folder1 and C:\Website\folder12. "noindexing" \folder1\ would leave \folder12\ alone because they're not in the same directory.
Yep. Just last week I had an entire website deindexed (on purpose; it's a staging website) by entering just / into the box and selecting the directory option. By the next morning the entire website was gone from the index.
It works for folders/directories too. I've used it many times.
I'm not 100% sure Google will understand you if you leave off the slashes. I've always added them and have never had a problem, so you want to type: /oahu/waianae-makaha-condos/
Typing that would NOT include the neighborhood URL in your example. It will only remove everything that exists in the /waianae-makaha-condos/ folder (including that main category page itself).
edit >> To remove the neighborhood URL and everything in that folder as well, type /oahu/waianae-makaha/maili-condos/ and select the option for "directory".
edit #2 >> I just want to add that you should be very careful with this. You don't want to use the directory option unless you're 100% sure there's nothing in that directory that you want to stay indexed.
Yep! After you remove the URL or directory of URLs, there is a "Reinclude" button you can get to. You just need to switch your "Show:" view so it shows URLs removed. The default is to show URLs PENDING removal. Once they're removed, they will disappear from that view.
When "noindex" is added to a page, does this ensure Google does not count page as part of their analysis of unique vs duplicate content ratio on a website? Yes, that will tell Google that you understand the pages don't belong in the index. They will not penalize your site for duplicate content if you're explicitly telling Google to noindex them.
"Is there a chance that even though Google does not index these pages, Google will still see those pages and think 'ah, these are duplicate MLS pages, we are going to let those pages drag down the value of the entire site and lower the ranking of even the unique pages'?" No, there's no chance these will hurt you if they're set to noindex. That is exactly what the noindex tag is for. You're doing what Google wants you to do.
"I like to just use 'noindex, follow' on those MLS pages, but would it be safer to add the pages to robots.txt as well? That should, in theory, increase the likelihood Google will not see such MLS pages as duplicate content on my website." You could add them to your robots.txt, but that won't buy you anything here, because there is already no worry about being penalized for pages that aren't being indexed.
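For clarity, the tag being discussed sits in the head of each MLS page:

```
<meta name="robots" content="noindex, follow">
```

One caveat on the robots.txt idea: if a page is blocked by robots.txt, Googlebot can't crawl it to see the noindex tag at all, so leaving the pages crawlable with just the meta tag is the cleaner route.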
On another note: I had these MLS pages indexed, and 3-4 weeks ago I added "noindex, follow". However, they are all still indexed, with no signs Google is dropping them yet...
Donna's advice is perfect here. Use the Remove URLs tool. Every time I've used the tool, Google has removed the URLs from the index within 12-24 hours. I of course made sure to have a noindex tag in place first. Just make sure you enter everything AFTER the TLD (.com, .net, etc.) and nothing before it. Example: You'd want to ask Google to remove /mls/listing122 but not example.com/mls/listing122. The latter will not work properly because Google automatically prepends "example.com" to it (they just don't make this very clear).
Here's the Google documentation on multilingual and multi-regional websites: https://support.google.com/webmasters/answer/182192?hl=en
You will also want to read up on hreflang here: https://support.google.com/webmasters/answer/189077?hl=en
I would personally go with example.com/mx
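If you go that route, the hreflang annotations would look roughly like this on both versions of a page (hypothetical domain, and assuming Spanish for Mexico; adjust the language codes as needed):

```
<link rel="alternate" hreflang="en" href="http://www.example.com/">
<link rel="alternate" hreflang="es-mx" href="http://www.example.com/mx/">
```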
Hey there, Brant!
First off, I would recommend that you change the color of the links in your paragraphs so they stand out. I like to also make them underline when they're hovered over.
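Something as simple as this in your stylesheet covers both suggestions (the color is just an example; pick one that fits your palette):

```
/* Make in-paragraph links stand out, and underline them on hover */
p a {
    color: #1e6bb8;
    text-decoration: none;
}
p a:hover {
    text-decoration: underline;
}
```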
As for having "too many links" -- Matt Cutts recently came out and said that the previous '100 link per page' limit has been lifted. Here's the video from Nov 2013: http://www.youtube.com/watch?v=QHG6BkmzDEM
It is very unlikely that the current # of links on your homepage is hurting your rankings.
I considered that but didn't really see any reason this site would add links to our site and then quickly remove them. I've seen crazier things though....
I actually just finally figured it out, though. They are displaying our site through a damn iframe. There's a link hidden in the jumbled mess on their site that says "View Website" -- I missed that before. Since it doesn't actually link to us, crawlers weren't finding it and my source code search for our URL didn't find it either. However, Google is reporting it as a link, which is strange. The "View Website" link really just links to another page on their own site that shows our homepage in an iframe.
I've noticed that there is a domain in WMT that Google says is linking to our domain from 173 different pages, but it actually isn't linking to us at all on ANY of those pages. The site is a business directory that seems to be automatically scraping business listings and adding them to hundreds of different categories. Low quality crap that I've disavowed just in case.
I have hand-checked a bunch of the pages that WMT is reporting with links to us by viewing the source, but there are no links to us. I've also used crawlers to check for links, but they turn up nothing. The pages do, however, mention our brand name. I find it very odd that Google would report links to our site when there aren't actually any links there. Has anyone else ever noticed something like this?
Canonical will pass link juice almost exactly like 301s will, so there's no harm in going that route. Matt Cutts explains that in this video: http://www.youtube.com/watch?v=zW5UL3lzBOA
You sound like you're good to go. You've got duplicate content worked out, and you've got a plan to retain link juice (canonical).
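Just to spell it out, the canonical tag goes in the head of the duplicate page and points at the version you want Google to index (the URL here is a placeholder):

```
<link rel="canonical" href="http://www.example.com/preferred-page/">
```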
Ah! I misunderstood the bit about reverse proxying. In that case... to be perfectly honest, I'm not sure.
When you set up a reverse proxy, what happens to the sub-domain? Does it go away, or does it still exist live? If it remains live, you'd end up with a duplicate content issue.
EDIT >> I found this at the source you linked to (which answers my question) -->
"The next thing you can do is add a robots.txt file to the sub-domain that stops robots from indexing it. As Reverse Proxying keeps the requested URL the /blog/ URLs will use the robots.txt from the main domain rather than the sub-domain.
The final (and most extreme) thing you can do is to register Google Webmaster Tools for the sub-domain and remove it from the index. If you are doing this, you need to do it in conjunction with robots.txt."
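The robots.txt mentioned there would live at the sub-domain's root (e.g. blog.example.com/robots.txt, hypothetical domain) and simply block everything:

```
User-agent: *
Disallow: /
```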
You need to set up 301 redirects for ALL of the pages and posts on the blog sub-domain to their new locations in the sub-folder. This is very important. Without the proper redirects in place, you will lose all value from links pointing to the blog sub-domain, plus all the history, authority, and rankings that the pages have earned.
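On Apache, the catch-all version of that redirect would look something like this in the sub-domain's htaccess (hypothetical domain; it maps every old blog URL to the same path under /blog/):

```
RewriteEngine On
# 301 every URL on the old blog sub-domain to its new /blog/ location
RewriteCond %{HTTP_HOST} ^blog\.example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/blog/$1 [R=301,L]
```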
As for your reasoning to move it from a sub-domain to a sub-folder, I'm not sure you'll receive any sort of link juice boost on your root domain from doing this. Maybe someone else can prove me wrong/correct me...
I'm not sure I will ultimately be able to help answer your question... but I wanted to let you know that your question isn't currently giving enough information for someone to be able to help.