
Welcome to the Q&A Forum

Browse the forum for helpful insights and fresh discussions about all things SEO.

Moz Q&A is closed.

After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content (many posts will still be viewable), we have locked both new posts and new replies.

Latest Questions

Have an SEO question? Search our Q&A forum for an answer; if you can't find one, use your Moz Pro subscription to ask our incredible community of SEOs for help!


  • Hey guys, does anyone know what Google's stance is on backlinks that come via a form, a WordPress theme, or a badge? For example, if I offer website security and provide badges for websites that are malware-free (with a backlink to my website), and 100 websites sign up, will this be deemed bad practice in Google's eyes? And what if I create a free WordPress theme with a backlink to my own site? The second case sounds like I'm providing content in exchange for a link, which seems okay, but the first one could go either way. Thanks

    Link Building | | conversiontactics
    0

  • That is a question I am sure many of you have been asking since they launched the product several weeks ago. Cemper claims they helped get a penalty removed in 3 days by using this product. Sounds great, doesn't it? Maybe even sounds too good to be true. Well, here is my experience with it. We have been working to get a site's rankings back up for several months now. While it has no penalty, it clearly got hit by the algo change. So we have been very busy creating new content and attempting to remove as many "keyword rich" links as possible. This really hasn't been working very well at all, so when I heard about Link Detox Boost I thought it was the answer to our prayers. The basic idea is that Link Detox Boost forces Google to crawl your bad links so it knows you no longer have links from those sites or have disavowed them. So we ran it, and it was NOT cheap: roughly $300. Now, 3 weeks after running it, the report shows it has actually crawled only 25% of our links, but they assure us it is a reporting issue and the full process has run its course. The results? No change at all. Some of our rankings are worse, some are better, but nothing worth mentioning. Many products from Link Research Tools are very good, but I'm afraid this isn't one of them. Anyone else use this product? What were your results?

    Intermediate & Advanced SEO | | netviper
    2

  • This has been a big annoyance for me. My site was on WordPress before and was infected with malware at one point. We were blocked by major antivirus vendors, Google, etc., but we got the infection cleaned up quickly and were unblocked. The problem now is that we are still blocked on Facebook. When we try to post a link, it says our site is harmful. We got unblocked from Facebook once and posted, but then they blocked us again. We have submitted multiple requests through that form but got no answer. As a marketing agency, having Facebook block links to our site is bad. What can we do in this situation? We have no direct contact with anyone at Facebook, nor do they reply to our requests. I do own the .net version of my domain; should we start using that instead of the .com? We really need to find a way out of this. It is hurting our reputation.

    Social Media | | Tech-Critic
    0

  • I just pulled a search term report for all of 2013 from my PPC account. What I got was 673,000 rows of terms that garnered at least 1 impression in 2013. This is exactly what I was looking for. My issue is that the vast majority of terms are geo-modified to include the city, the city and state, or the zip code. I am trying to remove the geographic information to get to a list of root words people are interested in, based on their search query patterns. Does anyone know how to remove all city, state, and zip codes quickly without having to do a find-and-replace for each geo-modifier in Excel? For example, if I could get a list of all city and state combinations in the US and a list of all zip codes, put that list on a separate tab, and then have a macro find and remove from the original tab any instances of anything from the second tab, that would probably do the trick (a rough sketch of that approach follows below). Then I could remove duplicates and have my list of root words.

    Moz Pro | | dsinger
    0
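A minimal sketch of the approach described in the question above, assuming Python and two hypothetical input files: search_terms.csv (one query per row, first column) and geo_modifiers.csv (US city and state names, one per line). Zip codes are stripped with a regex and city/state names are removed as whole words; file names and layout are illustrative, not from the original post.

import csv
import re

TERMS_FILE = "search_terms.csv"   # hypothetical: one search query per row, first column
GEO_FILE = "geo_modifiers.csv"    # hypothetical: one city or state name per line

# Load the geo-modifier list (cities, states) as lowercase phrases.
with open(GEO_FILE, newline="", encoding="utf-8") as f:
    geo_terms = {row[0].strip().lower() for row in csv.reader(f) if row}

# Build one alternation, longest phrases first so "new york city" is removed before "new york".
if geo_terms:
    alternation = "|".join(re.escape(g) for g in sorted(geo_terms, key=len, reverse=True))
    geo_pattern = re.compile(r"\b(" + alternation + r")\b")
else:
    geo_pattern = None

zip_pattern = re.compile(r"\b\d{5}(?:-\d{4})?\b")  # 5-digit and ZIP+4 codes

root_terms = set()
with open(TERMS_FILE, newline="", encoding="utf-8") as f:
    for row in csv.reader(f):
        if not row:
            continue
        term = row[0].lower()
        term = zip_pattern.sub(" ", term)         # drop zip codes
        if geo_pattern:
            term = geo_pattern.sub(" ", term)     # drop city/state names
        term = re.sub(r"\s+", " ", term).strip()  # tidy leftover whitespace
        if term:
            root_terms.add(term)                  # de-duplicate as we go

with open("root_terms.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    for term in sorted(root_terms):
        writer.writerow([term])

print(f"Wrote {len(root_terms)} de-duplicated root terms.")

With 673,000 rows and a very large geo list, a single compiled alternation can get slow; splitting the geo list into several smaller patterns, or checking individual tokens against the set directly, are common workarounds.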

  • I've been looking at large image packages through iStock, Getty, Fotolia, and 123RF, but before spending a bunch of money, I wanted to get some of your feedback on Creative Commons images. Should I be worried about using images found via Google Images > Search Tools > Usage Rights, or can they be used without issue or legal threats from the big image companies so long as they are appropriately referenced? AND will using these types of images and linking to the sources have any effect on SEO efforts, or make the blog/website look spammy in Google's eyes, since we need to link to the source? How are you using Creative Commons images, and is there anything I should be aware of in the process of searching, saving, using, referencing, etc.? Patrick

    Intermediate & Advanced SEO | | WhiteboardCreations
    0

  • I have 2,871 keywords that I need to check Google rank for on 4 separate domains. Does anyone know of any FREE tools or plugins available that will allow for this volume and won't get my IP banned by Google? Even the Moz Rank Checker only allows you to enter 1 keyword at a time, for up to 200 per day. Who would seriously enter 200 keywords, one by one, all day, every day?

    Moz Bar | | dsinger
    0

  • I just got a Moz crawl back and see lots of errors for overly dynamic URLs. The site is a villa rental site that gives users the ability to search by bedroom, amenities, price, etc., so I'm wondering what the best way is to keep these types of dynamically generated pages, with URLs like /property-search-page/?location=any&status=any&type=any&bedrooms=9&bathrooms=any&min-price=any&max-price=any, from being indexed (one possible approach is sketched below). Any assistance will be greatly appreciated : )

    Technical SEO | | wcbuckner
    0
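One common approach, offered here only as a sketch, is to block the faceted-search path in robots.txt (or add a meta robots noindex to that template). The snippet below uses Python's standard urllib.robotparser to check that a candidate rule really does block the parameterized URLs before it goes live; the rule and test URLs are assumptions based on the example in the question. Note that a plain prefix rule blocks the whole /property-search-page/ path, filters and all.

from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt rule; simple prefix matching blocks every
# filtered variation under /property-search-page/.
robots_txt = """User-agent: *
Disallow: /property-search-page/""".splitlines()

parser = RobotFileParser()
parser.parse(robots_txt)

test_urls = [
    "https://example.com/property-search-page/?location=any&status=any&type=any"
    "&bedrooms=9&bathrooms=any&min-price=any&max-price=any",
    "https://example.com/villas/casa-azul/",  # a normal page that should stay crawlable
]

for url in test_urls:
    allowed = parser.can_fetch("*", url)
    print(("ALLOWED " if allowed else "BLOCKED ") + url)

Two caveats: Google's own parser also supports wildcard rules (e.g. Disallow: /property-search-page/?*), which the standard-library parser doesn't model, and robots.txt only stops crawling; a URL that is already linked can still be indexed without a snippet, so a meta robots noindex on the crawlable search template is generally the more reliable way to keep these pages out of the index.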

  • We are getting a "Multiple meta descriptions found!" error when testing meta tags in the Chrome MozBar extension. We are using the WordPress All in One SEO plugin. We think the theme's default meta description, which is blank, may be conflicting with the one generated by All in One SEO and needs to be removed (a quick diagnostic sketch is included below). Curious if anyone has come across this; any info on eradicating the issue would be greatly appreciated. Most likely a newbie question on my part. Thanks!

    Moz Pro | | departikan
    0
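A small diagnostic sketch, assuming Python with the standard library only: it fetches a page and lists every meta description tag in the rendered HTML, so you can see whether the theme and the plugin are each emitting one. The URL is a placeholder, and this is a quick check rather than a fix.

from html.parser import HTMLParser
from urllib.request import urlopen

PAGE_URL = "https://example.com/"  # placeholder: the page MozBar flagged

class MetaDescriptionFinder(HTMLParser):
    """Collects the content of every <meta name="description"> tag (self-closing tags included)."""
    def __init__(self):
        super().__init__()
        self.descriptions = []

    def handle_starttag(self, tag, attrs):
        if tag.lower() != "meta":
            return
        attrs = dict(attrs)
        if (attrs.get("name") or "").lower() == "description":
            self.descriptions.append(attrs.get("content") or "")

with urlopen(PAGE_URL) as response:
    html = response.read().decode("utf-8", errors="replace")

finder = MetaDescriptionFinder()
finder.feed(html)

print(f"Found {len(finder.descriptions)} meta description tag(s):")
for i, content in enumerate(finder.descriptions, 1):
    print(f"  {i}. {content!r}")

If two descriptions show up, the usual fix is to disable the duplicate source (the theme's hard-coded tag, or a second plugin writing its own) rather than leaving a blank one in place.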

  • We are launching a new site within the next 48 hours. We have already purchased the 30-day trial, and we will continue to use this tool once the new site is launched. Just looking for some tips and/or best practices so we can compare the old data vs. the new data moving forward... thank you in advance for your response(s). PB3

    Moz Pro | | Issuer_Direct
    0

  • I'm looking at starting a few AdWords campaigns (for search only) and targeting a niche where there is very little competition. Let's assume I get a very good CTR on my ads. How much could I expect to pay per click? Does AdWords have any minimum CPCs? Or is it actually possible to get penny clicks in AdWords?

    Affiliate Marketing | | shawn81
    0

  • Does Google view the colon as a keyword separator like it does with the pipe (|) character? Currently, our site automatically constructs the title tag based on the page name given by the user. Long ago, we started using the colon character to visually separate the brand & model of the product from the size, and as a result, all of our title tags have been constructed this way. This was done more to make it easier to read for humans than for search engines. My question is: should I consider getting rid of the colon from our title tags? To give more info, our website sells tires, so for any given model of tire there might be 25-100 different individual sizes. The tags are constructed as follows: (brand)(model) : (size). Here's an example from our site: GENERAL ALTIMAX ARCTIC : 225/45R17 91Q. The brand is General Tire, the model is the Altimax Arctic, and the size is 225/45R17 91Q. Since this entire string really constitutes the full product name, should I remove the colon so that Google views it that way? Or, since I have used a colon instead of a pipe, will Google simply ignore it and treat the entire string as one keyword phrase?

    On-Page Optimization | | kcourtem
    0

  • We want to create 5-10 near-duplicates of our homepage to use as landing pages – nearly all the same text, but some different images. We want to make sure Google doesn't ding us for duplicate content. Is the best way to do that to tag each of these pages with "noindex"?

    Paid Search Marketing | | HopeIndu
    1

  • Hello everyone. We recently posted some of our research to Wikipedia as references in the "External Links" section. Our research is rigorous and has been referenced by a number of universities and libraries (an example: https://www.harborcompliance.com/information/company-suffixes.php). Anyway, I'm wondering if these Wikipedia links have any value beyond, of course, adding to the Wiki page's information. Thanks!

    Intermediate & Advanced SEO | | Harbor_Compliance
    0

  • If I have a page in English that exists on 100 other websites, my website has a duplicate content problem. What if I use Google Translate to translate the page from English to Japanese? As the only website doing this translation, will my page get credit for producing original content? Or will Google view my page as duplicate content, because Google can tell it is translated from an original English page that runs on 100+ different websites, since Google Translate is Google's own software?

    Intermediate & Advanced SEO | | khi5
    0

  • www.heartwavemedia.com / WordPress / All in One SEO Pack. I understand Google values unique titles and content, but I'm unclear on the difference between changing the page URL slug and the SEO title. For example: I have an about page with the URL "www.heartwavemedia.com/about" and the SEO title "San Francisco Video Production | Heartwave Media | About". I've noticed some of my competitors using URL structures more like "www.competitor.com/san-francisco-video-production-about". Would it be wise to follow their lead? Will my landing page rank higher if each subsequent page uses a similar keyword-packed, long-tail URL? Or is that considered black hat? If advisable, would a URL structure that includes "san-francisco-video-production-_____" be seen as too similar even if it varies by one word at the end? Furthermore, will I be penalized for using similar SEO descriptions, i.e. "San Francisco Video Production | Heartwave Media | Portfolio" and "San Francisco Video Production | Heartwave Media | Contact", or is the difference of one word ("portfolio" vs. "contact") sufficient to read as unique? Finally... am I making any sense? Any and all thoughts appreciated...

    White Hat / Black Hat SEO | | keeot
    0

  • Hi, I've been looking online for the best places to purchase expired domains with existing Page Authority/Domain Authority attached to them. So far I've found: http://www.expireddomains.net
    http://www.domainauthoritylinks.com
    http://moonsy.com/expired_domains/
    These sites are great, but I'm wondering if I'm potentially missing other locations? Any other recommendations? Thanks.

    White Hat / Black Hat SEO | | VelasquezEF
    1

  • Hi, Sorry if this rambles on. There are a few details that kind of convolute this issue, so I'll try to be as clear as possible. The site in question has been online for roughly 5 years. It's established with many local citations, does well in local SERPs (working on organic results currently), and represents a business with 2 locations in the same county. The domain is structured as location1brandname.com. The site was recently upgraded from a 6-10 page static HTML site with loads of duplicate content and poor structure to a nice, clean WordPress layout. Again, Google is cool with it, everything was 301'd properly, and our rankings haven't dropped (some have improved). Here's the tricky part: to properly optimize this site for our second location, I am basically building a second website within the original, customized for that location. It will live at location1brandname.com/secondcity, the menu will be unique to second-city service pages, it will have a unique NAP in the footer, etc. I will then update our local citations with this new URL, and hopefully we'll start appearing higher in local SERPs for the second-city keywords that our main URL isn't currently optimized for. The issue I have is that our root domain has our first city in the domain name, and that this might have some negative effect on ranking for the second URL. Conversely, starting on a brand new domain (secondcitybrandname.com) means building an entire new site and starting from scratch. My hunch is that we'll be fine making root.com/secondcity that location's homepage, and that starting a new domain, while cleaner and completely separate from our other location, is too much work for not enough benefit. It seems like if they're the same company/brand, they should be on the same site, and we can use the root domain's authority to help. Thoughts?

    Local Website Optimization | | kirmeliux
    0

  • Hi all! Okay, here's the scoop. 33% of our site visitors use Safari. 18% of our visitors are on either an iPad or iPhone. According to Google Analytics, our average page load time for visitors using Safari is 411% higher than our site average of 3.8 seconds. So yes, the average page load time for pages loading in Safari is over 20 seconds... totally unacceptable, especially considering the large percentage of traffic using that browser. While I understand that some parameters are beyond our control, it is in our own best interest to try to optimize our site for Safari. We've got to do better than 20 seconds. As you might have guessed, it's also killing conversion rates on visits from that browser. While every other browser posted double-digit improvements in conversion rates over the last several months, the conversion rate for Safari visitors is down 36%, translating into tens of thousands in lost revenue. Question for anyone out there gifted in web design and particularly web dev: do you think it's possible/reasonable to attempt to "fix" our current site, which sits on an ancient platform with ancient code, or is this just not realistic? Would a complete redesign/replatform be the more realistic (and financially sound) way to go? Any insights, experiences, and recommendations would be greatly appreciated. If you're someone interested in spec'ing out the project and giving us a cost estimate, please private message me. Thanks so much!

    Conversion Rate Optimization | | danatanseo
    1

  • Hi Guys, We have developed a plugin that allows us to display used vehicle listings from a centralized, third-party database. The functionality works similarly to autotrader.com or cargurus.com, and there are two primary components:
    1. Vehicle Listings Pages: the pages where the user applies various filters to narrow the vehicle listings and find the vehicle they want.
    2. Vehicle Details Pages: the pages where the user actually views the details about a given vehicle. These are served up via Ajax, in a dialog box on the Vehicle Listings Pages. Example functionality: http://screencast.com/t/kArKm4tBo
    The Vehicle Listings pages (#1) we do want indexed and ranking. These pages have additional content besides the vehicle listings themselves, those results are randomized or sliced/diced in different and unique ways, and they're updated twice per day. We do not want #2, the Vehicle Details pages, indexed, as these pages appear and disappear all the time based on dealer inventory and don't have much value in the SERPs. Additionally, other sites such as autotrader.com, Yahoo Autos, and others draw from this same database, so we're worried about duplicate content. For instance, entering a snippet of dealer-provided content for one specific listing that Google indexed yielded 8,200+ results (example Google query). We did not originally think that Google would even be able to index these pages, as they are served up via Ajax. However, it seems we were wrong, as Google has already begun indexing them. Not only is duplicate content an issue, but these pages are not meant for visitors to navigate to directly! If a user were to navigate to the URL directly from the SERPs, they would see a page that isn't styled right. Now we have to determine the right solution to keep these pages out of the index: robots.txt, noindex meta tags, or hash (#) internal links.
    Robots.txt advantages: super easy to implement; conserves crawl budget for large sites; ensures the crawler doesn't get stuck. After all, if our website only has 500 pages that we really want indexed and ranked, and vehicle details pages constitute another 1,000,000,000 pages, it doesn't seem to make sense to make Googlebot crawl all of those pages.
    Robots.txt disadvantages: doesn't prevent pages from being indexed, as we've seen, probably because there are internal links to these pages. We could nofollow those internal links, thereby minimizing indexation, but this would put 10-25 nofollowed internal links on each Vehicle Listings page (will Google think we're PageRank sculpting?).
    Noindex advantages: does prevent Vehicle Details pages from being indexed; allows ALL pages to be crawled (advantage?).
    Noindex disadvantages: difficult to implement (Vehicle Details pages are served via Ajax, so there is no page head in which to place a meta tag; the solution would have to involve an X-Robots-Tag HTTP header sent by Apache based on querystring variables, similar to this Stack Overflow solution; a rough sketch of that header logic is included below the question). This means the plugin functionality is no longer self-contained, and some hosts may not allow these types of Apache rewrites (as I understand it). It also forces (or rather allows) Googlebot to crawl hundreds of thousands of noindexed pages. I say "forces" because of the crawl budget required; the crawler could get stuck/lost in so many pages, and may not like crawling a site with 1,000,000,000 pages, 99.9% of which are noindexed. It also cannot be used in conjunction with robots.txt; after all, the crawler never reads the noindex tag if it is blocked by robots.txt.
    Hash (#) URL advantages: by using hash (#) URLs for the links from Vehicle Listings pages to Vehicle Details pages (such as the "Contact Seller" buttons), coupled with JavaScript, the crawler won't be able to follow/crawl those links. Best of both worlds: crawl budget isn't overtaxed by thousands of noindexed pages, and the internal links that were getting robots.txt-disallowed pages indexed are gone. This accomplishes the same thing as nofollowing those links, but without looking like PageRank sculpting (?), and it does not require complex Apache configuration.
    Hash (#) URL disadvantages: is Google suspicious of sites with (some) internal links structured like this, since it can't crawl/follow them?
    Initially, we implemented robots.txt, the "sledgehammer solution." We figured we'd have a happier crawler this way, as it wouldn't have to crawl zillions of partially duplicate Vehicle Details pages, and we wanted it to be as if these pages didn't even exist. However, Google seems to be indexing many of these pages anyway, probably based on internal links pointing to them. We could nofollow the links pointing to these pages, but we don't want it to look like we're PageRank sculpting or something like that. If we implement noindex on these pages (and doing so is a difficult task in itself), then we can be certain these pages aren't indexed. However, to do so we would have to remove the robots.txt disallow in order to let the crawler read the noindex tag on these pages. Intuitively, it doesn't make sense to me to make Googlebot crawl zillions of Vehicle Details pages, all of which are noindexed; it could easily get stuck or lost, it seems like a waste of resources, and in some shadowy way it feels bad for SEO. My developers are pushing for the third solution: using the hash URLs. This works on all hosts and keeps all functionality in the plugin self-contained (unlike noindex), and it conserves crawl budget while keeping Vehicle Details pages out of the index (unlike robots.txt). But I don't want Google to slap us 6-12 months from now because it doesn't like links structured like this. Any thoughts or advice you guys have would be hugely appreciated, as I've been going in circles on this for a couple of days now. Also, I can provide a test site URL if you'd like to see the functionality in action.

    Intermediate & Advanced SEO | | browndoginteractive
    0
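The X-Robots-Tag idea mentioned in the question above can be prototyped without touching Apache. Below is a minimal sketch using Python's standard-library wsgiref, assuming a hypothetical querystring parameter (vehicle_id) that marks Vehicle Details requests; the real site would set the same conditional header in Apache or inside the WordPress plugin, and the parameter name is an assumption, not part of the original plugin.

from urllib.parse import parse_qs
from wsgiref.simple_server import make_server

# Hypothetical marker: Vehicle Details requests are assumed to carry a
# vehicle_id querystring parameter; listings pages do not.
DETAIL_PARAM = "vehicle_id"

def app(environ, start_response):
    query = parse_qs(environ.get("QUERY_STRING", ""))
    headers = [("Content-Type", "text/html; charset=utf-8")]

    if DETAIL_PARAM in query:
        # Details pages: tell crawlers not to index. The URLs must stay
        # crawlable (not disallowed in robots.txt) or the header is never read.
        headers.append(("X-Robots-Tag", "noindex"))
        body = b"<html><body>Vehicle details dialog content.</body></html>"
    else:
        body = b"<html><body>Vehicle listings page.</body></html>"

    start_response("200 OK", headers)
    return [body]

if __name__ == "__main__":
    # Try: curl -I "http://localhost:8000/listings?vehicle_id=12345"
    with make_server("", 8000, app) as server:
        print("Serving on http://localhost:8000 ...")
        server.serve_forever()

In Apache the equivalent is typically a mod_headers rule (Header set X-Robots-Tag "noindex") applied conditionally based on the query string; whichever layer sets it, the trade-off described in the question remains: the pages get crawled so that the noindex can be seen.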


