
Welcome to the Q&A Forum

Browse the forum for helpful insights and fresh discussions about all things SEO.

Category: Intermediate & Advanced SEO

Looking to level up your SEO techniques? Chat through more advanced approaches.


  • So I am working on a website that has been doing SEO with keyword-rich links for a few years. The first branded term comes in at 7%, in 10th place on the anchor-text list in Ahrefs; the keyword terms are upwards of 14%. What is the best way to get this back in line? It would take several months of building branded links to make any difference, but it is doable. I could try link removal, but less than 10% seem to actually get removed, which won't make a difference. The disavow file doesn't really seem to do anything either. What are your suggestions?

    | netviper
    0

  • My programmer showed me this demo website where all the navigation and content is embedded within JavaScript: http://sailsjs.org/#! A Google site: search returned 51 results, all pages with pretty much unique title tags and meta descriptions; a Bing site: search returned 24 results with pretty much identical title tags and meta descriptions. Matt Cutts said it's fine, but to test first: http://www.youtube.com/watch?v=Mibrj2bOFCU Has anyone seen any reason to avoid this web convention? My gut is to avoid this approach; the main drawback I see is that websites like this won't do well on search engines other than Google that have less sophisticated algorithms. Thoughts?

    | Rich_Coffman
    0

  • I've been looking at large image packages through iStock, Getty, Fotolia and 123RF, but before spending a bunch of money, I wanted to get some of your feedback on Creative Commons images. Should I be worried that something found via Google Images > Search Tools > Usage Rights can be used without issue or legal threats from the big image companies, so long as it is appropriately referenced? And will using these types of images and linking to the sources have any effect on SEO efforts, or make the blog/website look spammy in Google's eyes because we need to link to the source? How are you using Creative Commons images, and is there anything I should be aware of in the process of searching, saving, using, referencing, etc.? Patrick

    | WhiteboardCreations
    0

  • I need to remove/disavow hundreds of domains due to an algorithmic penalty. Has anyone disavowed first and done the outreach second as a tactic? The reasons I was considering this are as follows: most of the links are from spammy websites unlikely to have monitored accounts or available contact details; my business is incredibly seasonal, only being easily profitable for half of the year; the season starts next month, so the window of opportunity to get it done is small; and if there's a Penguin update before I get it done, it could be very bad news. Any thoughts would be much appreciated. (Incidentally, if you are interested, I also posted about it here: http://moz.com/community/q/honest-thoughts-needed-about-link-building-removal)

    | Coraltoes77
    0

  • I have a page "A" that I want to completely delete, moving the written content from "A" to page "B". Since I am deleting "A" (not keeping the page), is it OK to upload the content from "A" to page "B", and will search engines give "B" credit for the unique content? Or, since the content has already been indexed on "A", might "B" struggle to get full credit for this content, even though page "A" is deleted?

    | khi5
    0

  • Hello, one of my clients' rankings dropped dramatically.
    We believe it was due to an upgrade of his site. While the live site was at www.clientdomain.com,
    work was being done on the new site at www.clientdomain.com/new (for 1 month). I think Google crawled the /new link and treated it as duplicate content, since both sites had the same content. Is there a Moz tool, or any online tool, to see if a site has been penalized? Thanks

    | ogdcorp
    0

  • Most websites, Zalando for example, interlink their other ccTLD domains at the root. For example, on http://www.zalando.nl/damesschoenen-pumps/ the links in the footer go to the other ccTLDs: http://www.zalando.es, zalando.co.uk, etc. Does anyone have experience with interlinking to the relevant page on the other ccTLDs instead?
    For example: http://www.zalando.nl/damesschoenen-pumps/ linking to http://www.zalando.co.uk/womens-shoes-heels/ instead of linking to the homepage.
    In theory this would give more relevant internal linking. Looking forward to hearing if anyone has tried or experienced this, and what the results were.

    | TjeerdvZ
    0

  • Hello Mozzers, thought we'd get the group's opinion on this: this site (power lead generation) is ranking for the keyword "lead generation" on Google.ca in the 5th position organically. It's performing even better than some better-optimized sites with more content related to this keyword. Any input would be appreciated. Cheers, SEO5

    | SEO5Team
    0

  • Out of 250 domains that link to my site, about 115 are low-quality directories published by the same company and hosted on the same IP address. Examples of these directories are: www.keydirectory.net, www.linkwind.com, www.sitepassage.com, www.ubdaily.com, www.linkyard.org. A recent site audit by a reputable SEO firm identified 125 toxic links; I assume these are those toxic links. They also identified about another 80 suspicious domains linking to my site. The audit concluded that my site is suffering a partial Penguin penalty due to low-quality links. My question is whether it is safe to remove these 125 links from the low-quality directories. I am concerned that removing this quantity of links all at once will cause a drop in ranking, because the link profile will be thin, with only about 125 domains remaining that point to the site. Granted, those 125 domains should be of somewhat better quality. Am I playing with fire by having these removed? I URGENTLY NEED ADVICE AS THE WEBMASTER HAS INITIATED STEPS TO REMOVE THE 125 LINKS. Thanks everyone!!! Alan

    | Kingalan1
    0

  • So I have a site right now that isn't ranking well, and we are trying everything to help it out. One of my areas of concern is that we have A LOT of old blog posts that were not well written and, honestly, are not very relevant. None of them rank for anything, and they could be causing a lot of duplicate content issues. Our newer blog posts are written in more of a Q&A format, and they seem to be doing better. So my thought is basically to wipe out all the posts from 2010-2012, probably 450+ blog posts. What do you guys think?

    | netviper
    1

  • In light of all the Google updates in 2013, have you updated/changed your title tag best practices? Is the (Keyword | Brand) format still working well for your optimization efforts, or have you started incorporating an approach similar to (Keyword in a Sentence | Brand)? Thanks in advance for your opinions.

    | SEO5Team
    0

  • Here is one website: listdose.com, Alexa rank 28,665. They have around 1,000 pages, but 80% of the keywords they use have a search count of 0, and they target only one keyword per page. So how are they earning good money, and how are they ranking well in Alexa, without any keywords that have a good search count? Is it a good idea to target 0-search-count keywords when creating a blog?

    | ross254sidney
    0

  • Hi, I am currently working on a page where some of the content appears across all pages. Rewriting it to make it unique is not an option, I'm afraid. I came across a tag called googleon/off that will tell Google not to index a certain part of a given webpage, but will this ensure that it is not seen as duplicate content? https://developers.google.com/search-appliance/documentation/610/admin_crawl/Preparing
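For reference, the tags in question are HTML comments wrapped around the repeated block, something like the sketch below. Note that the linked documentation describes these tags for the Google Search Appliance (an on-premise search product), so whether the public Googlebot honors them is exactly the open question here:

```html
<!--googleoff: index-->
<p>Boilerplate text that is repeated across all pages...</p>
<!--googleon: index-->
```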

    | AndersDK
    0

  • Say you have a listing of movies. In that listing, there are 5 different view types: one has the scenes broken out, another has only the box covers, two of the views have movie descriptions, and the others don't. Still, the listings themselves are the same, and you only want the default view to be indexed. Is it appropriate to use canonicals in this case? The alternative is to noindex the other views, but the site already has rankings and deep links. If Google does see the pages as unique and we apply a canonical, could we be penalized, or would they merely ignore it?
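As a sketch of the canonical approach (URLs here are made up for illustration), each alternate view would carry a link element in its head pointing at the default view:

```html
<!-- On /movies?view=covers, /movies?view=scenes, etc. -->
<link rel="canonical" href="http://www.example.com/movies/">
```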

    | LahomaManagement
    0

  • We have a Google penalty (artificial links). We have checked our link profile with Link Detox, and we found that a group of links carrying the nofollow tag has been classified as toxic (mostly stats websites). Should we remove those links anyway? They are nofollow, which should be enough. Should we include these links in the spreadsheet anyway? Should we include them and add "no action taken"? How would you proceed in that case? Note: I know Link Detox is not great, but it helped us to collect data. Now we have to make decisions about the results, and I'm new to this and have doubts. I would appreciate your help. Thank you!!!
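For anyone weighing the disavow route mentioned here: Google's disavow file is a plain text upload where `#` lines are comments, `domain:` lines disavow a whole domain, and bare URLs disavow a single page. A minimal sketch (domains are placeholders):

```text
# Contacted site owner on 1/1/2014, no response
domain:spammy-stats-site.example
http://article-portal.example/some-page.html
```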

    | teconsite
    0

  • Hello,
    we received a manual action (partial match) for pure spam on one of our sites. The date is not certain, because we didn't receive any notification by mail or inside the Google Webmaster Tools dashboard, so all we can say for sure is that we noticed the manual action page wasn't empty anymore on 10/03/2013. Some context: our Google traffic took a big hit on 07/20/2013, losing around 60% of 250k visits per day. At first we thought it was an algorithmic penalisation related to a Panda update. It had already happened a few times in the past: losing part of our Google traffic and getting it back, usually a couple of months later, often even better than before. We were really surprised at first to be deemed pure spam, given that the domain has been ours since it was created 7 years ago, that we have never employed black-hat techniques, and that our efforts were always put into building valuable pages for users instead of using spam techniques to deceive them. But after noticing the manual action, we naturally thought this was the actual reason for our sudden traffic drop. So we tried to figure out, from the 4 URLs that Google reported as examples of the pages affected by pure spam, what issues on our site could have been misinterpreted as pure spam. We also checked all the webmaster guidelines and fixed the issues where we thought we might not be fully compliant. This process lasted 3 months, after which we submitted our reconsideration request on 12/16/2013.
    On 01/07/2014 we got the following answer: We've reviewed your site and found no manual actions by the webspam team that would directly affect your site's ranking in Google's search results. You can use the Manual Actions page in Webmaster Tools to view actions currently applied to your site.
    Of course, there may be other issues with your site that could affect its ranking. Google determines the order of search results using a series of computer programs known as algorithms. We make hundreds of changes to our search algorithms each year, and we employ more than 200 different signals when ranking pages. As our algorithms change and as the web (including your site) changes, some fluctuation in ranking will happen from time to time as we make updates to present the best results to our users.
    If your site isn't appearing in Google search results, or if it's performing more poorly than it once did, check out our Help Center to identify and fix potential causes of the problem. Now we are really puzzled, because Google is saying two opposite things: we still have a pure spam manual action (per the Webmaster Tools page), and we don't have a manual action (per their newest response to our reconsideration request).
    We could find a few cases online somewhat similar to our own, with Google apparently giving contradictory communications about manual actions, but none of them helped build a clear explanation. I don't want to get into the merits of the penalisation, or whether it was or wasn't deserved, but rather to know if anyone has had the same experience or has any guess at what happened.
    What we could think of is some bug or syncing problem between different parts of Google, but still, after some days, the manual action notice is always there in Google Webmaster Tools and nothing has changed in our traffic. We are now thinking about sending a second reconsideration request, asking them to update our Google Webmaster Tools manual actions page to reflect our actual current status.
    What do you think? Thank you very much

    | mylittlepwny
    0

  • Hi Guys, We have developed a plugin that allows us to display used vehicle listings from a centralized, third-party database. The functionality works similarly to autotrader.com or cargurus.com, and there are two primary components:
    1. Vehicle Listings Pages: the page where the user applies various filters to narrow the vehicle listings and find the vehicle they want.
    2. Vehicle Details Pages: the page where the user actually views the details about a given vehicle. It is served up via Ajax, in a dialog box on the Vehicle Listings Pages. Example functionality: http://screencast.com/t/kArKm4tBo
    We do want the Vehicle Listings pages (#1) indexed and ranking. These pages have additional content besides the vehicle listings themselves, those results are randomized or sliced/diced in different and unique ways, and they're updated twice per day. We do not want to index #2, the Vehicle Details pages, as these pages appear and disappear all the time based on dealer inventory, and don't have much value in the SERPs. Additionally, other sites such as autotrader.com, Yahoo Autos, and others draw from this same database, so we're worried about duplicate content. For instance, entering a snippet of dealer-provided content for one specific listing that Google indexed yielded 8,200+ results: Example Google query. We did not originally think that Google would even be able to index these pages, as they are served up via Ajax. However, it seems we were wrong, as Google has already begun indexing them. Not only is duplicate content an issue, but these pages are not meant for visitors to navigate to directly! If a user were to navigate to the URL directly from the SERPs, they would see a page that isn't styled right. Now we have to determine the right solution to keep these pages out of the index: robots.txt, noindex meta tags, or hash (#) internal links.
    Robots.txt advantages: super easy to implement; conserves crawl budget for large sites; ensures the crawler doesn't get stuck. After all, if our website only has 500 pages that we really want indexed and ranked, and vehicle details pages constitute another 1,000,000,000 pages, it doesn't seem to make sense to make Googlebot crawl all of those pages.
    Robots.txt disadvantages: doesn't prevent pages from being indexed, as we've seen, probably because there are internal links to these pages. We could nofollow these internal links, thereby minimizing indexation, but this would put 10-25 nofollowed internal links on each Vehicle Listings page (will Google think we're PageRank sculpting?).
    Noindex advantages: does prevent vehicle details pages from being indexed; allows ALL pages to be crawled (an advantage?).
    Noindex disadvantages: difficult to implement. The vehicle details pages are served via Ajax, so they have no head of their own; the solution would have to involve the X-Robots-Tag HTTP header and Apache, sending a noindex header based on querystring variables, similar to a Stack Overflow solution I found. This means the plugin functionality is no longer self-contained, and some hosts may not allow these types of Apache rewrites (as I understand it). It also forces (or rather allows) Googlebot to crawl hundreds of thousands of noindexed pages. I say "forces" because of the crawl budget required; the crawler could get stuck/lost in so many pages, and may not like crawling a site with 1,000,000,000 pages, 99.9% of which are noindexed. And it cannot be used in conjunction with robots.txt: the crawler never reads the noindex meta tag if it is blocked by robots.txt.
    Hash (#) URL advantages: by using hash (#) URLs for links from Vehicle Listings pages to Vehicle Details pages (such as "Contact Seller" buttons), coupled with JavaScript, the crawler won't be able to follow/crawl these links. Best of both worlds: the crawl budget isn't overtaxed by thousands of noindexed pages, and the internal links that were getting robots.txt-disallowed pages indexed are gone. It accomplishes the same thing as nofollowing these links, but without looking like PageRank sculpting (?), and it does not require complex Apache stuff.
    Hash (#) URL disadvantages: is Google suspicious of sites with (some) internal links structured like this, since it can't crawl/follow them?
    Initially, we implemented robots.txt, the "sledgehammer solution." We figured we'd have a happier crawler this way, as it wouldn't have to crawl zillions of partially duplicate vehicle details pages, and we wanted it to be as if these pages didn't even exist. However, Google seems to be indexing many of these pages anyway, probably based on internal links pointing to them. We could nofollow the links pointing to these pages, but we don't want it to look like we're PageRank sculpting or something like that. If we implement noindex on these pages (and doing so is a difficult task in itself), then we can be certain these pages aren't indexed. However, to do so we would have to remove the robots.txt disallowal, in order to let the crawler read the noindex tag on these pages. Intuitively, it doesn't make sense to me to make Googlebot crawl zillions of vehicle details pages, all of which are noindexed; it could easily get stuck or lost, it seems like a waste of resources, and in some shadowy way it feels bad for SEO. My developers are pushing for the third solution: using the hash URLs. This works on all hosts and keeps all functionality self-contained in the plugin (unlike noindex), and it conserves crawl budget while keeping vehicle details pages out of the index (unlike robots.txt). But I don't want Google to slap us 6-12 months from now because it doesn't like links like these. Any thoughts or advice you guys have would be hugely appreciated, as I've been going in circles, circles, circles on this for a couple of days now. Also, I can provide a test site URL if you'd like to see the functionality in action.
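For what it's worth, the X-Robots-Tag approach described above can be sketched in Apache config roughly as follows. The querystring parameter name (`vdp`) and the exact matching pattern are assumptions for illustration, not the plugin's real URL scheme:

```apache
# Hypothetical sketch: flag requests whose querystring identifies a
# vehicle-details page, then send a noindex header for them.
RewriteEngine On
RewriteCond %{QUERY_STRING} (^|&)vdp= [NC]
RewriteRule .* - [E=VDP:1]
Header set X-Robots-Tag "noindex" env=VDP
```

As noted in the question, this requires mod_rewrite and mod_headers to be available, which some hosts may not permit.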

    | browndoginteractive
    0

  • Our e-commerce site has been on BigCommerce for about a year now. One thing many SEO folks told us is that having a blog located at /blog would help more than a subdomain blog. BC never had the option to host a blog on their platform (/blog) until now. Since we have lost traffic in the past and are trying everything we can to regain it, I am now wondering if we should purchase the WordPress Site Redirect upgrade and move the subdomain blog (blog.) to the new /blog option. Any help or feedback is very much appreciated. I have attached a screenshot of our main website vs. our blog from Open Site Explorer in case it helps.

    | josh330
    0

  • We are a large enterprise site with many pages (some on our CMS and some old pages that exist outside our CMS). Every month we submit an XML sitemap. Some pages on our site can no longer be found by following links from one page to another (orphan pages). Some of those pages are important and some are not. Is it worth our while to create an HTML sitemap? Does anyone have any recent stats or blog posts to share showing how an HTML sitemap has benefited a large site? Many thanks

    | CeeC-Blogger
    0

  • My website was on the first page of Google a couple of months ago; now, nothing. It shows up on page one of Bing. Some queries/pages are still showing OK, but some not at all. For example, "residential elevators illinois" is found nowhere. The website is http://www.accesselevator.net. We have found 900 poor-quality links and used the disavow tool, and we have implemented nofollow on all outgoing links. Their PageRank also went from a 3 to a 2. Any further suggestions? Need advice.

    | trailblazerzz9
    0

  • Hi, my website URL is www.nixiweb.com. Before June 2013 my website was always shown in first or second place on Google when searching for "hosting gratis". Since June 2013 my website has disappeared from all searches; it only appears when I search for the site name, e.g. "nixiweb" or "www.nixiweb.com". In Webmaster Tools, the search queries table only shows queries related to my website name (e.g. "nixiweb"), and none related to any other keyword. Can anybody help me understand what the problem is with my site? Thanks

    | nixiweb
    0

  • http://www.virante.org/blog/2013/12/19/authorshippocalypse-google-authorship-penguin-finally-appeared/ Search Engine Land reported that Google confirms Authorship results in search are being intentionally reduced. It appears that the Matt Cutts-promised reduction in the number of Google Authorship results shown in Google Search has begun. Do we need to remove the Google Authorship tag from the blog? Does it hurt rankings?

    | ross254sidney
    0

  • If you use Google to search georgefox.edu for "doctor of business administration", the first result is http://www.georgefox.edu/business/dba/ (the DBA homepage from here on). The second is http://www.georgefox.edu/offices/sfs/grad/tuition/business/dba/ (the DBA program costs page from here on). Search: https://www.google.com/search?q=doctor+of+business+administration+site%3Ageorgefox.edu This appears to hold true no matter what your geographic location is set to on Google. George Fox University is located in Newberg, Oregon. If you search for "doctor of business administration" with your geographic location set beyond a certain distance from Newberg, Oregon, the first georgefox.edu result is the DBA homepage. Set your location on Google to Redmond, Oregon.
    Search: https://www.google.com/search?q=doctor+of+business+administration But if you set your location a little closer to home, the DBA homepage disappears from the top 50 search results on Google. Set your location on Google to Newberg, Oregon.
    Search: https://www.google.com/search?q=doctor+of+business+administration Now the first georgefox.edu page to appear in the results is the DBA program costs page. Here are the locations I have tested so far. First georgefox.edu result is the DBA homepage: Redmond, OR; Eugene, OR; Boise, ID; New York, NY; Seattle, WA. First georgefox.edu result is the DBA program costs page: Newberg, OR; Portland, OR; Salem, OR; Gresham, OR; Corvallis, OR. It appears that if your location is set within a certain distance of Newberg, OR, the DBA homepage is pushed out of the search results for some reason. Can anyone verify these results? Does anyone have any idea why this is happening?

    | RCF
    0

  • Hi, we are now involved in a Google penalty issue (artificial links, global, all links). We were very surprised, because we only have 300 links more or less, and most of those links are from stats sites, some are malware (we are trying to fight against that), and others are article portals. We have created a spreadsheet with the links and analyzed them using Link Detox. Now we are sending emails so the links can be removed, or disavowing them. The thing is, we have very few links, and in 99% of them we did nothing to create the link. We have doubts about what to do with some kinds of links; we are not sure they are bad, and we would appreciate your opinion. There are three types: domain stats links, article portals, and automatically generated content sites. I would like to know if we should remove those links or disavow them. Examples: Anygator.com, from which we have 57 links; Link Detox says this portal is not dangerous: http://es.anygator.com/articulo/arranca-la-migracion-de-hotmail-a-outlook__343483 More examples (stats or similar): www.mxwebsite.com/worth/crearcorreoelectronico.es/ and from that website we have 10 links in WMT, but only one works. What do you do in those cases? Do you mark that link as removed? And these other examples, what do you think about them? More stats sites: http://alestat.com/www,crearcorreoelectronico.es.html http://www.statscrop.com/www/crearcorreoelectronico.es Automatically generated content examples: http://mrwhatis.net/como-checo-mi-correo-electronico-yaho.html http://www.askives.com/abrir-correo-electronico-gmail.html At first we began trying to delete all the links, but those links are not artificial, we did not create them, and Google should know those sites. What would you do with those sites? Your advice would be very much appreciated. Thanks 😄

    | teconsite
    0

  • Hi MOZland, with a new client (our first e-commerce client) we're going through a massive learning curve, handling a site of substantial size and complexity for the first time. While we've weeded out most of the on-page issues that needed sorting, and we're in the process of dumping poor links built by previous SEO/online marketing efforts, do you have any suggestions on how to take a big e-commerce site forward in 2014, especially concerning technical pitfalls and link building efforts (and given that guest blogging has become something of a faux pas)? Cheers, M

    | Martin_S
    0

  • Greetings Moz Community: According to a site audit by a reputable SEO firm last November, my commercial real estate website has a toxic link profile, which is very weak (about 58% of links qualified as toxic). The SEO firm suggests that we immediately start pruning the link profile, requesting removal of the toxic links and eventually filing a disavow file with Google for links that webmasters will not agree to remove. While removing toxic links, the firm proposes to simultaneously solicit very high-quality links, aiming for 7-12 such links per month. My question is this: is it putting the cart before the horse to work on link building without first optimizing pages (with Yoast) for specific keywords? I would think that Google considers how each page is optimized for specific terms: which terms are used within the link structure, as well as terms within the meta tags. My site is partially optimized, but optimization has never been done thoroughly. Should the pages of the site be optimized for the top 25-30 terms before link building begins, or can that be done at a later stage? Note that my link profile is pretty atrocious. My site at the moment receives about 1,000 unique visitors a week from organic search, but 70% of that traffic is from terms that are not relevant. The firm that did my audit claims that removal of the toxic links, while building some new links, is imperative, and that optimization for keywords can wait somewhat. Any thoughts? Thanks for your assistance. Alan

    | Kingalan1
    0

  • Hi, I encountered the following page on Zales:
    http://engagementring.theprestigediamondcollection.com/NewEngagementRing/NewEring.aspx As you scroll down, more items pop up (the well-known Pinterest style).
    Would Googlebot be able to reach the product pages? I don't assume the bot "scrolls"... Thanks

    | BeytzNet
    0

  • I have a lot of links (10,000+) built with exact-match anchor text, so what is the solution now? Other than disavowing them all, may I change the anchor text of those links (from exact match to brand name or naked URL)? Does Google have algorithms to detect anchor text changes, and if so, would those algorithms detect this sort of change and raise a red flag on sites doing it? I respect your opinions, but please only comment if you are sure, because I am already facing a penalty and can't afford another.

    | Ishrat-Khan
    1

  • A small blog owner has asked if they can republish one of our blog articles word for word on their own blog. I'm not sure how to respond. We don't do any outreach to submit or duplicate our articles around the web, so this isn't something being done en masse. And this could be a great signal to Google that somebody else is vouching for the quality of our article, right? However, I'm a bit concerned about word-for-word duplication. Normally, if somebody is interested in republishing, both the republisher and our website would get more value out of it if the republisher added some commentary or extra value when citing our post, right? This small blog has just started a segment they've titled "guest blog Thursday", and given the recent concerns with guest blogging (even though I'm not sure this is guest blogging in the classical sense), I'm even more concerned. Any ideas on how I should respond?

    | dsbud
    0

  • Hello Mozzers, I am undertaking a site audit and have just noticed that the developer has left the development site up and it has been indexed. They 301ed pages on the old site to the equivalent pages on the new site, but seem to have allowed the development site to be indexed, and they haven't switched it off. So would the best option be to redirect the development site pages to the homepage of the new site (there is no PR on the dev site and no incoming links to it, so nothing much to lose)? Or should I request equivalent-to-equivalent page redirection? Alternatively, I could simply ask for the dev site to be switched off and the URLs removed via WMT, I guess... Thanks in advance for your help! 🙂

    | McTaggart
    1

  • Hello everyone.  We recently posted some of our research to Wikipedia as references in the "External Links" section.  Our research is rigorous and has been referenced by a number of universities and libraries (an example: https://www.harborcompliance.com/information/company-suffixes.php).  Anyway, I'm wondering if these Wikipedia links have any value beyond of course adding to the Wiki page's information.  Thanks!

    | Harbor_Compliance
    0

  • One of our most popular products has a very authoritative product page, which is great for marketing purposes but not so much for current users. When current users search for "product x login" or "product x sign in", instead of getting to the login page they see the product page, which adds a couple of clicks to their experience; not what we want. One of the problems is that the actual login page has barely any content, and the content it does carry is wrapped in iframes. Due to political and security reasons, the web team is reluctant to make any changes to the page, and one of their arguments is that the login page actually ranks #1 for a few other products (at our company, the majority of logins originate from the same domain). To add to the challenge, the queries that do return the login page as the #1 result (for some of our other products) actually show not the sign-in domain but our old domain, which is now a 301 redirect to the sign-in domain. To make that clear: Google is displaying the origin domain in the SERPs instead of displaying the destination domain. The question is: how do we get this popular product's login page to rank higher than the product page for "login" / "sign in" queries? I'm not even sure where we should point links at this point: the actual sign-in domain or the origin domain? I have the redirect chains and domain authority for all of the pages involved, including a few of our major competitors (who follow the same login format), and will be happy to share them privately with a Moz expert. I'd prefer not to make any more information publicly available, so please reach out via private message if you think you can help.

    | leosaraceni
    0

  • I'm getting very interested in Amazon Web Services lately as an alternative to reseller hosting. Does anyone have experience running WordPress sites on their cloud? It would be nice to be able to spin up sites faster when needed, but I'm not sure if it's pricier, or whether I'm ready for that adoption yet.

    | williammarlow
    1

  • Is it better to use the robots meta noindex, follow tag for paging (page 2, page 3) of category pages which list items within each category, or just let Google index these pages? Before Panda I was not using noindex, because I figured if page 2 is in Google's index then the items on page 2 are more likely to be in Google's index, and each item gets an internal link. After I got hit by Panda, I started thinking: page 2 has no unique content, only a list of links with a short excerpt from each item, all of which can be found on each item's own page - so it's not unique content, and maybe that contributed to the Panda penalty. So I placed the meta tag noindex, follow on every page 2, 3, etc. of each category. Page 1 of each category page has a short introduction, so I hope that is enough to make it "thick" content (is that a word? :-)). My visitors don't want long introductions; they hurt bounce rate and time on site. Now I'm wondering if that is common practice, and whether items on page 2 are less likely to be indexed since they have no internal links from an indexed page. Thanks!
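For reference, the tag being described sits in the head of each paginated page - a generic sketch with placeholder URLs, not this poster's actual markup:

```html
<!-- On /category/page-2, /category/page-3, etc. -->
<!-- "noindex" asks search engines to keep the page out of the index;
     "follow" still lets link equity flow to the items linked from it. -->
<meta name="robots" content="noindex, follow">
```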

    | donthe
    0

  • I have several different sites which link to each other (for valid reasons - sister companies, etc.). Would it be better if these were hosted by different web hosting firms? And if they are hosted by the same hosting company, would it be better if they had different accounts and different IP addresses? I'm not sure I understand C blocks, etc. Is there any tutorial on here about that? I would assume it would look better to Google if the links were not from the same IP address. Thanks.
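On the C-block point: two IPs are usually said to share a "C block" when their first three octets (the /24 network) match. A minimal sketch of that comparison, using hypothetical example IPs:

```python
# Two IPs share a "C block" if they fall in the same /24 network,
# i.e. their first three octets match. Example IPs are placeholders.
import ipaddress

def same_c_block(ip_a, ip_b):
    """Return True if both IPs belong to the same /24 network."""
    net_a = ipaddress.ip_network(f"{ip_a}/24", strict=False)
    net_b = ipaddress.ip_network(f"{ip_b}/24", strict=False)
    return net_a == net_b

print(same_c_block("192.0.2.10", "192.0.2.200"))   # same /24
print(same_c_block("192.0.2.10", "198.51.100.7"))  # different /24
```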

    | Ebtec
    0

  • I have a real estate website. The site has all residential properties for sale in a certain State (MLS property listings). These properties also appear on 100's of other real estate sites, as the data is pulled from a central place where all Realtors share their listings. Question: will having these MLS listings indexed and followed by Google increase the ratio of duplicate vs original content on my website and thus negatively affect ranking for various keywords? If so, should I set the specific property pages as "no index, no follow" so my website will appear to have less duplicate content?

    | khi5
    0

  • I have a few older websites that SERP well, and I am considering merging some or all of them into a new related website that I will be launching regardless. My old websites display real estate listings and not much else. Each website is devoted to showing homes for sale in a specific neighborhood. The domains are all in the form of Neighborhood1CityHomes.com, Neighborhood2CityHomes.com, etc. These sites SERP well for searches like "Neighborhood1 City homes for sale" and also "Neighborhood1 City real estate" where some or all of the query is in the domain name. Google simply points to the top of the domain, although each site has a few interior pages that are rarely used. There is next to zero backlinking to the old domains, but each links to the others with anchor text like "Neighborhood1 Cityname real estate". That's pretty much the extent of the link profile. The new website will be a more comprehensive search portal where many neighborhoods and cities can be searched. The domain name is a nonsense-word .com not related to actual keywords. The structure will be like newdomain.com/cityname/neighborhood-name/, where the neighborhood real estate listings would replace the old websites, and I'd 301 the old sites to the appropriate internal directories of the new site. The content on the old websites is all on the home page of each, at least the content for searches that matter to me and rank well, and I read an article suggesting that Google assigns additional authority to top-level pages (can I link to that here?). I'd be 301-ing each old domain from a top-level page to a 3rd-level interior page like www.newdomain.com/cityname/neighborhood1/. The new site is better than the old sites by a wide margin, especially on mobile, but I don't want to lose all my top positions for some tough phrases. I'm not running analytics on the old sites in question, but each of the old sites has an extensive past history with AdWords (which I don't run any more). So in theory Google knows these old sites are good quality.
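For illustration, the domain-to-directory 301 could be sketched like this on an Apache host (a hedged example using the hypothetical domains from the question; the actual rules depend on the hosting setup):

```apache
# In Neighborhood1CityHomes.com's .htaccess:
RewriteEngine On
RewriteCond %{HTTP_HOST} ^(www\.)?neighborhood1cityhomes\.com$ [NC]
# Send the whole old domain to its new interior directory
RewriteRule ^ https://www.newdomain.com/cityname/neighborhood1/ [R=301,L]
```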

    | Gogogomez
    0

  • I have a blog website - http://uncutweb.com/ My website ranks for the keyword What Color Shoes To Wear With Gold Dress, which has a Moz Difficulty score of 35% and an A-grade Moz on-page score. But why is my website not ranked for **What Color Shoes To Wear With Purple Bridesmaid Dress** or **What Color Shoes To Wear With Coral Dress**? They have lower difficulty scores and also have A grades.

    | ross254sidney
    0

  • I have a real estate website and use rel=next/prev for paginated real estate result pages. I understand "noindex, follow" is not needed for the paginated pages. However, my case is a bit unique: this is a real estate site where the listings also show on competitors' sites. So I thought, if I "noindex, follow" the paginated pages, that would reduce the amount of duplicate content on my site and ultimately support my site ranking well. Again, I understand "noindex, follow" is not needed for paginated pages when using rel=next/prev, but since my content will probably be considered fairly duplicate, I question whether I should do it anyway.
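For context, the rel=next/prev pagination markup being referred to goes in each page's head - a sketch with placeholder URLs:

```html
<!-- On /listings/page/2 of a paginated series -->
<link rel="prev" href="https://example.com/listings/page/1">
<link rel="next" href="https://example.com/listings/page/3">
```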

    | khi5
    0

  • I have an old site that we are re-building, and also moving from Yahoo Stores to Big Commerce. Yahoo uses site.com/page.html and BC uses site.com/page. Is there any SEO benefit to keeping the old .html format? Some of the pages on the old site have no links to them from external sites. Do they even need redirects, or should I just let Google find the new page equivalents when it crawls the new version of the site? While some of the old pages (primarily product pages) have OK URLs, others have obscure product numbers as the URL. Obviously the latter need redirecting to a more relevant page, but what about situations like this:
    /accessory-product.html → /product-accessory
    In this example, the existing URL is fine, except for the .html extension. If I just used the old URL, would having a mix of /sample.html and /sample pages hurt me? Thanks in advance for your help and input! Dave
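If the old URLs do get redirected, a minimal mod_rewrite sketch might look like the following - assuming an Apache host that allows .htaccess rules, and using the hypothetical slugs from the question:

```apache
RewriteEngine On
# Renamed page: old slug maps to a different new slug
RewriteRule ^accessory-product\.html$ /product-accessory [R=301,L]
# Generic fallback: same slug, just dropping the .html extension
RewriteRule ^(.+)\.html$ /$1 [R=301,L]
```

The specific rule must come before the generic one, since the first matching rule with the L flag wins.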

    | Grabapple
    0

  • Our website, christnotes.org, has historically ranked very well in its space. We have always been in the top 3 positions for daily Bible verse related searches. There were no fluctuations in rankings until the site took a hit around September 4th through October 14th, with approximately a 35% drop in pageviews and over a 60% drop in traffic from Google. The site fully recovered Google traffic mid-October. On November 24th the site was hit again, this time with a 50% drop in pageviews and over a 75% drop in traffic from Google. A Google Analytics image depicting the two drops is attached. When the first drop hit, we checked everything - bad links, broken URLs, page speed, etc. There was a slight increase in page speed, so we did a little tweaking and made some improvements (8.36 second page load down to 5.5). This time around, I can find no cause and no areas that need fixing to recover our rankings and traffic. Very confused about Google dropping our rank, then recovering after what looks like a page speed fix, and then dropping again a month later. Any suggestions?

    | KristieWahlquist
    0

  • I am a bit confused about internal linking for an ecommerce site. Is it wise to link, say, every "boots" term in the review section to the boots page? Zappos is doing this. Wouldn't this incur a Penguin penalty, since every internal anchor to that page is "boots"? Scroll down to the bottom and check out their reviews: http://www.zappos.com/tony-lama-6071l Is this the wise way to go about internal linking? Thanks

    | WayneRooney
    0

  • A few months ago I changed the description of one of the pages on my site, and I noticed that Google does not display the entire description in its search results. The page's description is: "Get yourself a personalized name necklace, we offer a huge range of silver, gold and gold plated name necklaces." But Google only shows this line: "Get yourself a personalized name necklace, we offer a huge ..." Does anyone have an idea why that is? (Screenshot attached.)
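As a quick sanity check, that description is well under the commonly cited ~155-character snippet guideline, so raw length alone doesn't explain the truncation - Google rewrites and trims snippets at its own discretion. A minimal sketch of the length check:

```python
# Compare a meta description's length against the commonly cited
# ~155-character snippet guideline (a rule of thumb, not a guarantee
# of how Google will actually display it).
def check_description(desc, limit=155):
    """Return (length, within_limit) for a meta description string."""
    length = len(desc)
    return length, length <= limit

desc = ("Get yourself a personalized name necklace, we offer a huge "
        "range of silver, gold and gold plated name necklaces.")
print(check_description(desc))  # the description is well under the limit
```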

    | Tiedemann_Anselm
    0

  • I cannot strip out brand data on the 'not provided' keywords in Google Analytics. Is this no longer possible? I understand we cannot get the specific keywords, but can we no longer separate branded from non-branded organic traffic in Google Analytics for keywords that are 'not provided'?

    | pauledwards
    0

  • For weeks, Yahoo consistently contributed just over 80% of what I got from Bing. Suddenly for the last two weeks the Yahoo and Bing graphs diverged, with Yahoo traffic dropping to 50% of Bing's. Any ideas? Did Yahoo make any deals with companies to give them better ranking? Have they suddenly started adding more ads above the fold? Any thoughts? Thanks

    | GilReich
    0

  • I decided to go fully WordPress on my band agency website to help with SEO. I have lost loads of rankings, even though we redirected the old pages to the new URLs. It means I am losing lots of business at the moment, so I am desperate to find out why what I thought was a better SEO design than before isn't working. We target geographical and genre terms in search, and those rankings have tanked too! Would anyone advise me on what I have done wrong, and whether I need to create some more sales pages to help? The site is http://www.themorrisagency.co.uk Thank you, thank you in advance guys... Daniel Morris

    | Agentmorris
    0

  • Recently I migrated three websites from www.product.com to www.brandname.com/product. Two of these sites are performing as normal when it comes to SEO, but one of them lost half of its traffic and dropped significantly in rankings. All pages have been properly redirected, on-site SEO is intact and optimized, and all pages are indexed by search engines. Has anyone had experience with this type of migration who could give some input on what a possible solution could be? Any help would be greatly appreciated!

    | AlexVelazquez
    0

  • Hello Everyone, I have the same problem with 3 of my websites: Google is not showing the right title tags for the inner pages. goldcoast-plumbers.com: http://screencast.com/t/2AEzDcoTkWF accountants-goldcoast.com.au metalrecyclers-brisbane.com.au One thing all these websites have in common is the All in One SEO Pack plugin. Is that the problem? Thanks in advance for your help! Regards

    | Asjad
    0

  • We have a tourism related site. We list annual events. Right now the URL extension includes the year. I assume it is better to keep the same page and update the dates, thereby keeping any links, ranking, trust and authority we have built. Is that the best strategy - updating the event info with the new dates? I would assume that with a new page for each new year we would be starting over again, and would have too much similar content and link diffusion. And in the future, are we better off not including the year in the URL?

    | Ebtec
    0

  • I have a scenario where a 'draft' website was built using Google Sites, and published using a Google Sites subdomain. Subsequently, the 'same' website was rebuilt and published on its own domain. So effectively there were two sites, both more or less identical, with identical content. The first website was thoroughly indexed by Google. The second website has not been indexed at all - I am assuming for the obvious reasons, i.e. that Google is viewing it as an obvious rip-off of the first site / duplicate content, etc. I was reluctant to take down the first website until I had found an effective way to resolve this issue long-term => ensuring that in future Google would index the second 'proper' site. A permanent 301 redirect was put forward as a solution - however, believe it or not, the Google Sites platform has no facility for implementing this. For lack of an alternative solution I have gone ahead and taken down the first site. I understand that this may take some time to drop out of Google's index, however, and I am merely hoping that eventually the second site will be picked up in the index. I would sincerely appreciate any advice or recommendations on the best course of action - if any! - I can take from here. Many thanks! Matt.

    | collectedrunning
    0
