
Welcome to the Q&A Forum

Browse the forum for helpful insights and fresh discussions about all things SEO.

Category: Intermediate & Advanced SEO

Looking to level up your SEO techniques? Chat through more advanced approaches.


  • We have a medium-traffic site (www.boatshed.com) which sells used boats. The site does fairly well for popular search phrases, often ranking on the first page. A common way for people to search is by boat manufacturer, for example "sunseeker for sale" or "sunseeker 33 for sale". To serve those searches, we have search results pages with URLs like "/used-boats-for-sale/sunseeker" and "/used-boats-for-sale/sunseeker/33" (i.e. make and model). This is fine for common makes, but for many makes we might have just one boat which, when sold, leaves the page with no boats to show. It could then be weeks until we get another one, or sometimes years. Once a manufacturer has no boats for sale, we automatically remove the link to that page from the site and from the sitemap. These pages are now being flagged as soft 404s in Webmaster Tools. Currently these pages still work and just show a "No results found" message. I am unsure how to deal with these pages. Options as I see them:
    1. Add a "noindex, follow" tag to the pages and continue to remove them from the sitemap. My concern is that when we do get a new boat for sale, the page will not rank again or will take a long time to be re-indexed.
    2. Add value to the "no results found" page, for example by showing listings for similar boats. If I do this (which makes sense from a usability perspective), would it be acceptable to leave these pages with an "index" tag?
    3. 404 them, my concern being that this basically says "this page has been permanently removed" when actually it will probably have content again soon.
    4. 301 redirect to a page of similar boats, with a message that we don't have any of that specific type at the moment.

    | pbscreative
    0
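A minimal sketch of the noindex option from the question above (hypothetical markup; "noindex, follow" is a standard robots directive, and the tag would be emitted conditionally by the page template):

    <!-- Emitted only while the manufacturer has no boats listed -->
    <meta name="robots" content="noindex, follow">
    <!-- When stock returns, drop the tag and the page can be re-crawled
         and re-indexed as normal -->

The "follow" part lets link equity keep flowing through the page while it sits out of the index, which is the usual argument for this option over a 404.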

  • Hey Mozzers, I have spent some time researching proper backlink analysis, and then I have been going through some of the steps. Here are a few questions that have come up in the process. Why would backlink tools like OSE and Ahrefs return different results for, say, "www.domain.com" vs "domain.com"? I noticed that competitors have almost 6x as many backlinks as I do, but when I look at where those links are coming from, they are coming from old sites with moderate DA (10-30), and many are not current. I also noticed that many of these sites have links placed site-wide, so that there are maybe 6+ referring pages per domain. So I guess my question is, how powerful are these links? Am I better off building relationships with bloggers, even though they only offer one link per page? Ultimately it will take me a long time to build the same quantity of links, but it seems like many of these competitors' links are old-fashioned yet still moderately effective. Any help is appreciated; you guys have always been so helpful!

    | evan89
    0

  • I have an active website with many users adding dozens of comments on the many pages of the site daily. I'm wondering if it would be good for the overall ranking strength of the site if I were to add a forum to it (on a subdomain, like forum.mysite.com). On one hand, I can see the forum posts as thin content, which Google wouldn't care for. On the other hand, I see the additional user engagement on the site, which I think Google would like. I know the benefits it can have for users, but for this question, all I want to know is whether this would be seen by Google as a plus or a minus for my site, assuming the forum succeeded in becoming popular. I don't want to do anything that will diminish the value of my site in Google's eyes. Thank you.

    | bizzer
    0

  • Hello Moz, I have come to realize that my blogger is copying and posting the exact content from another domain onto my site. She has provided a reference at the end of each article. I was wondering if this is good practice and whether it will have any SEO impact on my website. Thanks a lot for the help.

    | businessowner
    0

  • Hi Everyone, I have 10 websites, all of good standing and related. My visitors would benefit from knowing about the other websites, but I don't want to trigger a Google penalty by linking them all together. Ideally I'd also like to pass on importance through the links as well. How would you proceed in this situation? Advice would be greatly appreciated. Peter.

    | RoyalBlueCoffee
    0

  • The site that I am optimising has a problem with duplicate pages being indexed as a result of the load balancer (which is required and was set up by the hosting company). The load balancer passes the site through to 2 different URLs: www.domain.com and www2.domain.com. Somehow, Google has indexed the same URLs twice (which I was obviously hoping they wouldn't): the first on www and the second on www2. The hosting is a mirror image on both hosts (www and www2), meaning I can't upload a robots.txt to the root of www2.domain.com disallowing all. Also, I can't add a canonical tag into the website header of www2.domain.com pointing the individual URLs through to www.domain.com. Any suggestions as to how I can resolve this issue would be greatly appreciated!

    | iam-sold
    0
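One commonly suggested workaround for the mirrored-hosts problem above, assuming the stack is Apache and both hostnames serve the same .htaccess (domain names as in the question): because the rule is conditional on the requesting hostname, sharing one file between the mirrors is not an obstacle. A sketch, not a definitive fix:

    RewriteEngine On
    # Only act when the request arrived on the www2 hostname
    RewriteCond %{HTTP_HOST} ^www2\.domain\.com$ [NC]
    # Permanently redirect the same path to the canonical www host
    RewriteRule ^(.*)$ http://www.domain.com/$1 [R=301,L]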

  • Hi guys, what are your thoughts on using bit.ly links as internal links in blog posts on a website? Some posts have 4-5 bit.ly links going to other pages of our website (noindexed pages). I have nofollowed them so no SEO value is lost; also, the links go to noindexed pages, so there is no need to pass SEO value directly. However, what are your thoughts on how Google will see internal links which have essentially become redirect links? They are bit.ly links going to results pages, basically. Am I also right to assume that tracking for internal links would be better done with Google Analytics functionality? Is bit.ly accurate for tracking clicks? Any advice much appreciated; I just wanted to double-check this.

    | pauledwards
    0

  • Hi, I'm not too sure what to do about this or what to think of it. This magically appeared in my company's robots.txt file (literally magically appeared; the text is below):

    User-agent: Baiduspider
    User-agent: Baiduspider-video
    User-agent: Baiduspider-image
    Disallow: /

    I know that Baidu is the Google of China, but I'm not sure why this would appear in our robots.txt all of a sudden. Should I be worried about a hack? Also, would I want to disallow Baidu from crawling my company's website? Thanks for your help,
    -Reed

    | IceIcebaby
    0

  • Hello All, I have a hire website with many categories and individual location pages for each of the 70 depots we operate. However, being dynamic pages, we have thousands of thin-content pages. We have decided to concentrate only on our best-performing locations and get rid of the rest, as it's physically impossible to write unique content for all our location pages in every category. Therefore my question is: would it cause me problems to have too many 301s for the location pages I am going to redirect (I was only going to send these back to the parent category page), or should I just 404 all those location pages and, at some point in the future when we are in a position to concentrate on these locations, redo them with new content? In terms of URL numbers, it would affect a few thousand 301s or 404s, depending on people's thoughts. Also, does anyone know what percentage of thin content on a site is acceptable? I know none is best in an ideal world, but it would be easier if we could get away with a small percentage. We have been affected by Panda, so we are trying to tidy things up as best as possible. Any advice greatly appreciated. Thanks, Peter

    | PeteC12
    0

  • So, I've just started using Moz since I've decided I want to be an "expert" in SEO. I run a couple of successful websites in Denmark and I had an SEO guy do some SEO a few years back, but now I want to learn this myself. I've already read a lot of books and blogs on the subject and talked with several SEO "experts". Anyway, I have a concrete "problem" which I need some help deciding what to do about. It's the same issue/dilemma on all my sites. The dilemma: on my site I have a menu section called Articles and tips. As the name implies, it's basically articles and tips on subjects related to the site. The articles are informative for the users, and I also use them to attract new users on specific keywords. The articles are not "spam" articles or quickly made articles; they actually give good information to the users and are well written. I've hired a girl to create more articles, so there will soon be a good flow of articles, interviews and so on. Some SEO guys tell me that I should create and use an external blog "instead" and post the articles there instead of on my site (e.g. www.newsiteblog.com). And another SEO guy tells me that I should run a blog on my own site (e.g. www.ownsite.com/blog), where I post the articles. I have a really hard time deciding what is the best way, since I hear all kinds of ideas and really don't know who to trust. My own feeling is that it seems "stupid" to take content from the site and put it on an external blog. Then I would also have to create a new blog and point links from it to my site. Do any of you guys have any ideas? Sorry for my bad English.

    | KasperGJ
    0

  • Greetings Moz community: What is best practice when it comes to creating outbound links to other websites? Will adding such links improve the Moz Domain Authority of my site? How many outbound links should be added to each page? For example, I run a commercial real estate website in New York. About 20 pages are written about neighborhoods. If several outbound links are added to each neighborhood page, and these links point to pages that provide further information about that neighborhood, will the neighborhood pages where the links originate see improved ranking (or ranking potential)? Are these outbound links a critical SEO factor? Thanks everyone!!
    Alan

    | Kingalan1
    0

  • Generally speaking, is it better to use robots.txt or a noindex meta tag to prevent duplicate pages from being indexed?

    | TheaterMania
    0
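For the question above, a sketch of the two mechanisms side by side (paths are hypothetical). The practical difference: robots.txt blocks crawling, but a blocked URL can still end up indexed from inbound links; a noindex tag requires the page to be crawled, yet reliably removes it from the index:

    # robots.txt: stops crawling, does not guarantee de-indexing
    User-agent: *
    Disallow: /duplicate-page/

    <!-- noindex meta tag, in the <head> of the duplicate page:
         allows crawling, removes the page from the index -->
    <meta name="robots" content="noindex">

For duplicate-content clean-up specifically, noindex (or rel=canonical) is generally the safer of the two, since the goal is de-indexing rather than crawl control.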

  • If an article is published on The Times (for example), can Google bypass the subscription sign-in to read the content and index the links in the article? Example: http://www.thetimes.co.uk/tto/life/property/overseas/article4245346.ece In the above article there is a link to the resort's website, but you can't see this unless you subscribe. I checked the source code of the page with the subscription prompt present, and the link isn't there. Is there a way these sites deal with search engines differently from other user agents, to allow the content to be crawled and indexed?

    | CustardOnlineMarketing
    0

  • Hi Mozzers, I read an interesting post over on Authority Labs this morning about title tag length and how Google changes the way titles are displayed. The author, Brian, advises that "if you want your title tag to remain unchanged, it's worth making sure that you're staying within the 50-59 character window and that your titles fit with the content of the page". This got me thinking... Given the limited number of title tag characters now shown in the SERPs, I find it difficult to include a primary keyword, a secondary keyword and the company name. So, if you're a lesser-known brand, is it worth sacrificing your company name in the title tags of deeper pages for a secondary keyword to help with rankings, or even a special offer to grab a user's eye in the SERPs? What are people's views on this? Thanks, Anthony

    | Tone_Agency
    0

  • I was doing an audit on our site, searching for duplicate content using different terms from each of our pages, and I came across the following result: www.sswug.org/url/32639 redirects to our website. Is that normal? There are hundreds of these URLs in Google, all with exactly the same description. I thought it was odd. Any ideas, and what is the consequence of this?

    | Sika22
    0

  • Is it possible to have good SEO without links, relying only on quality content? Do you have any experience of this?

    | Alex_Moravek
    2

  • Hi guys, I have embarked on a new site creation. The site is being created from scratch and is very custom. Basically, the site allows people to review certain products and services. If each review completed by users is seen as a separate page by Google, is this considered deceptive, or likely to attract a thin-content penalty? One product may naturally accumulate hundreds of reviews over time; some may be really short and some may be longer. The reason I would like the user reviews to be seen as separate pages is that I want Google to understand that people are regularly interacting with the main content page. Any advice in this area would be really appreciated.

    | irdeto
    0

  • Hi, we have a retail site and a blog that goes along with the site. The blog is very popular, and the MD wanted a link from the blog back to the main retail site. However, as this is a site-wide link on the blog, am I right in thinking this really should be a nofollow link? The link is at the top of every page. Thanks in advance for any help.

    | Andy-Halliday
    0
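If the team above does decide to nofollow the site-wide blog link, it is a one-attribute change (URL and anchor text are hypothetical):

    <a href="http://www.retailsite.com/" rel="nofollow">Visit our shop</a>

That said, a single site-wide link from a blog back to the retail site it belongs to is generally considered a natural pattern, so the nofollow is a judgment call rather than a requirement.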

  • Hi folks, I may be shooting WAY off the mark here for it to be laughable, but I wondered if anyone else was thinking about this. I was trying to get to sleep last night, but was thinking about rankings (as you do... You DO think about rankings instead of counting sheep don't you... I'm not weird or anything am I... AM I?) and it occurred to me that maybe Google uses frequency of brand queries as a ranking signal - was wondering if anyone had done any research into this? Assuming that if more people are searching for a brand name, then there must be an outside influence on this behaviour (offline ads or editorial for example) - and this all points to a site or company being popular or interesting - maybe Google looks at the growth in brand name queries, and boosts based on this... I have done no research into this (I was just thinking about it instead of counting sheep last night... because I probably AM weird...) but was wondering what people here thought of this. Also, I don't have time (or intelligence TBH) to run an experiment on this, but maybe one of you bright sparks would? Best wishes, Amelia PS - if I'm being STOOPID please be gentle with me 😉

    | CommT
    0

  • A company has a TLD (top-level domain) which lists every single product: company.com/product/name.html. The company also has subdomains (tailored to a range of products) which list a chosen selection of the products from the TLD, sort of like a feed: subdomain.company.com/product/name.html. The content on the TLD and subdomain product pages is exactly the same and cannot be changed; the CSS and HTML are slightly different, but the content (text and images) is identical! My concern (and rightly so) is that Google will deem this to be duplicate content, therefore I'm going to have to add a rel=canonical tag into the header of all subdomain pages, pointing to the original product page on the TLD. Does this sound like the correct thing to do? Or is there a better solution? Moving on, not only are products fed onto the subdomain, there are a handful of other domains which list the products; again, the content (text and images) is exactly the same: other.com/product/name.html. Would I be best placed to add a rel=canonical tag into the header of the product pages on those other domains, pointing to the original product page on the actual TLD? Does rel=canonical work across domains? Would the product pages with a rel=canonical tag in the header still rank? Let me know if there is a better solution all round!

    | iam-sold
    0
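On the cross-domain point in the question above: rel=canonical does work across hostnames and across domains. A sketch of the tag as it would sit in the <head> of each subdomain or other-domain copy, pointing at the original (URLs from the question):

    <link rel="canonical" href="http://company.com/product/name.html">

The canonicalised copies can still be crawled, but over time the canonical target is the version that should consolidate ranking signals and appear in results.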

  • Hey All, I just can't figure this out. My site has been ranking well for years; I've never done anything suspicious with it, and since the Penguin update my rankings have dropped across the board, but only by about 4-8 places each; some terms have gone up from nowhere to page 8, etc. I don't think I've been hit with a penalty, so I don't know what the problem is or how to recover from it. Does anybody have any ideas on what could be wrong? Update: perhaps some sites that were linking to mine have been hit with a penalty? Update 2: I just found my site somehow included in a spammy link network of 600 sites that looked identical; I don't know how or why my website is in this! I disavowed all of these links 5 days ago; no change to rankings.

    | Paul_Tovey
    0

  • So I have stumbled across an interesting issue with a new SEO client. They recently launched a new website and implemented a proper 301 redirect strategy at the page level for the new domain. What is interesting is that the new website is now indexed in Google, BUT the old domain is also still indexed. I even checked the Google cache date, and it shows the new website with a cache date of today. The redirect strategy has been in place for about 30 days. Any thoughts or suggestions on how to get the old domain deindexed and all authority passed to the new website?

    | kchandler
    0
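A quick sanity check for the situation above is to confirm the old domain's pages really do return a 301 at the HTTP level (hypothetical URLs; curl -I fetches headers only):

    curl -I http://www.old-domain.com/some-page/
    # Expect something like:
    #   HTTP/1.1 301 Moved Permanently
    #   Location: http://www.new-domain.com/some-page/

If the old URLs do 301 correctly, lingering index entries usually drop out on their own as Google recrawls; 30 days is often not long enough for an entire domain to flush from the index.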

  • Google isn't a fan of aggregation, but sometimes it is a good way to fill out content when you cannot cover every news story there is. What I'm wondering is whether anyone has continued to do any form of aggregation under a category while hiding that URL from Google. Example: example.com/industry-news/ is where you'd post aggregated stories, but you block robots from crawling it. We wouldn't be doing this for search value, just value to our readers. Thoughts?

    | meistermedia
    0
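A sketch of the blocking approach described above (path taken from the question). Note the caveat: robots.txt stops crawling, but a blocked URL can still surface in the index if other sites link to it, so a noindex meta tag on the aggregated pages is the more airtight way to keep them out of search:

    User-agent: *
    # Keep crawlers out of the aggregated-news section
    Disallow: /industry-news/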

  • Hi everyone, a little background: my company launched a new website (http://www.everyaction.com). The homepage is currently hosted in an Amazon S3 bucket, while the blog and landing pages are hosted within HubSpot. My question is: is that going to end up hurting our SEO in the long run? I've seen a much slower uptick in search engine traffic than I'm used to seeing when launching new sites, and I'm wondering if that's because people are sharing the blog.everyaction.com URL on social (which then wouldn't benefit everyaction.com itself?). Anyway, a little help on what I should be considering when it comes to subdomains would be very helpful. Thanks, Devon

    | EveryActionHQ
    0

  • We are trying to figure out why the search result for the term "au pair" is not matching our designated title tag or anything on our page. If you search "au pair", please see the result for the domain interexchange.org. We do not see this problem with other search terms.

    | jrjames83
    0

  • My client is running 4 websites on the ModX CMS, using the same database for all the sites. Roger has discovered that one of the sites has 2,050 302 redirects pointing to the client's other sites. The sitemap for the site in question includes 860 pages. Google Webmaster Tools has indexed 540 pages. Roger has discovered 5,200 pages, and a site: query on Google reveals 7,200 pages. Diving into the SERP results, many of the indexed pages point to the other 3 sites. I believe there is a configuration problem with the site, because the other sites, when crawled, do not have a huge volume of redirects. My concern is: how can we remove from Google's index the 2,050 pages that are redirecting to the other sites via a 302 redirect?

    | tinbum
    0

  • I'm managing a site that has thousands of pages due to all of the dynamic parameter strings being generated. It's a real estate listing site that allows people to create listings, and it generates lots of new listings every day. The Moz crawl report is continually flagging A LOT (25k+) of the site's pages for duplicate content due to all of these parameter-string URLs. Example: sitename.com/listings & sitename.com/listings/?addr=street name. Do I really need to do anything about those pages? I have researched the topic quite a bit, but can't seem to find anything too concrete as to the best course of action. My original thinking was to add the rel=canonical tag to each of the main URLs that have parameters attached. I have also read that you can bypass that by telling Google what parameters to ignore in Webmaster Tools. We want these listings to show up in search results, though, so I don't know if either of these options is ideal, since each would cause the listing pages (pages with parameter strings) to stop being indexed, right? Which is why I'm wondering if doing nothing at all will hurt the site. I should also mention that I originally recommended the rel=canonical option to the web developer, who has pushed back, saying that "search engines ignore parameter strings." Naturally, he doesn't want the extra workload of setting up the canonical tags, which I can understand, but I want to make sure I'm giving him both the most feasible option to implement and the best option to fix the issues.

    | garrettkite
    0
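For the parameter question above, the canonical tag would sit in the <head> of every parameterised variant and point at the clean listing URL (URLs from the question; "search engines ignore parameter strings" is not a safe assumption):

    <!-- On sitename.com/listings/?addr=street%20name -->
    <link rel="canonical" href="http://sitename.com/listings/">

The listings themselves keep ranking via the canonical URL; only the parameterised duplicates consolidate into it, which is the usual intent here rather than a loss of visibility.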

  • We used an alternate domain name to send print advertising traffic to, so we could measure effectiveness. The domain does not have any content on it and only forwards to our actual domain. The issue is that neither domain shows up when people search for the alternate domain name without the .com. The question is: will putting up content under that domain, including metadata, submitting the site to Google, and running Fetch as Google make the alternate domain name show up? It's an extremely unique domain name and there's basically zero competition for it.

    | Dom441
    0

  • Hi, I have some confusion about how our blog subdomain is handled in our sitemap. We have our main website, example.com, and our blog, blog.example.com. Should we list the blog subdomain URL in our main sitemap? In other words, is listing a subdomain allowed in the root sitemap? What does the final structure look like in terms of the sitemap and robots file? Specifically:
    example.com/sitemap.xml: would I include a link to our blog subdomain (blog.example.com)?
    example.com/robots.txt: would I include a link to BOTH our main sitemap and blog sitemap?
    blog.example.com/sitemap.xml: would I include a link to our main website URL (even though it's not a subdomain)?
    blog.example.com/robots.txt: does a subdomain need its own robots file?
    I'm a technical SEO and understand the mechanics of much of on-page SEO... but for some reason I never found an answer to this specific question, and I am wondering how the pros do it. I appreciate your help with this.

    | seo.owl
    0
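A sketch of one common arrangement for the subdomain question above (hostnames from the question). Crawlers treat blog.example.com as a separate site, so each hostname gets its own robots.txt and its own sitemap; the Sitemap directive in robots.txt is standard and takes a full URL:

    # example.com/robots.txt
    User-agent: *
    Sitemap: http://example.com/sitemap.xml

    # blog.example.com/robots.txt
    User-agent: *
    Sitemap: http://blog.example.com/sitemap.xml

By default a sitemap should only list URLs on its own hostname, so the root sitemap would not list blog URLs; the subdomain is surfaced by linking to it normally, not by mixing hosts in one sitemap file.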

  • Disclaimer: I am not a developer. During a recent site migration I have seen a bit of an increase in 404 errors in WMT on pages ending in .php. Clicking the link in WMT just shows File Not Found, with no 404 page. There are about 20 in total showing in Webmaster Tools, and I want to advise the IT department what to do. What is the best way to deal with this for on-page best practice? Thanks

    | Blaze-Communication
    0

  • My company wants to set up two or three blogs (on previously unused domains), the idea being to disseminate good content that gets picked up in SERPs and acts as a lead generator, shows us to be authorities in our market, creates awareness of the brand (or of the individual employee who's doing the blogging), etc. From scratch, what are all the boxes that should be ticked to make this work from the outset? What are the must-haves? With all the ideals in place, how long could it realistically take to make this work? What are some pitfalls to look out for? Any advice in general will be appreciated. Thanks, M

    | Martin_S
    0

  • Hello, we launched an updated version of our site, mainly design changes and some functionality. 3 days after the launch we vanished from the rankings; previous page-one results were now out of the top 100. We identified some of the issues with the new site and chose to restore the old, well-ranking site. My question is: how long might it take for the rankings to come back, if at all? The drop happened on the third day and the site was restored that same day. We are now on day 6. In GWT we have used Fetch as Google and resubmitted the sitemap. Any help would be gladly received. Thanks, James

    | JamesBryant
    0

  • Hi, quick question. I've made a new installation of WordPress at sussexchef.com/dev and I'm about to start building pages; obviously I'm going to move it to sussexchef.com when it's all looking right. When I choose my page address links (permalinks), should I use new URL names that don't already exist on the old site? Or should I keep the old URL names so I don't get loads of 404s, but include the "dev/" in the URL? E.g. for the old address sussexchef.com/home, should I use sussexchef.com/dev/home or sussexchef.com/home-sussex-caterers while building the development site? I'm guessing the latter may help out in Google searches too. But if I use dev in the URL, surely I will have to go through almost 100 pages removing the dev/ and also changing all the links too? This would be days of work!
    So confused! I'd really appreciate your help here. Ben

    | SussexChef83
    0

  • Hi Mozzers, I need some advice for my website: http://www.scorepromotions.ca/ I recently changed the sitemap submitted to GWT from http://www.scorepromotions.ca/sitemap.xml to http://www.scorepromotions.ca/google-sitemap.php I deleted the previously submitted XML sitemap from GWT on Friday & submitted the PHP sitemap on the advice of our developer. On Saturday, I noticed that all our organic rankings disappeared. So, I changed the PHP sitemap back to the XML sitemap on Sunday. I am hoping to see our organic rankings recover to previous levels. Does anyone have any advice or experience to share about this issue? Ankush

    | ScorePromotions
    0

  • What is the most efficient way to import search volume information into Excel? We have 130K keywords that we need search volume information for.

    | nicole.healthline
    0
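One practical route for the question above, assuming the search volumes have already been exported from a keyword tool as CSV (the file and column names here are hypothetical); pandas handles 130K rows comfortably and writes straight to Excel:

    import pandas as pd

    # Keyword list we need volumes for (assumed column: keyword)
    keywords = pd.read_csv("keywords.csv")
    # Volume export from a keyword tool (assumed columns: keyword, search_volume)
    volumes = pd.read_csv("volume_export.csv")

    # Left-join so every keyword is kept even if no volume was returned for it
    merged = keywords.merge(volumes, on="keyword", how="left")
    merged.to_excel("keyword_volumes.xlsx", index=False)

to_excel needs the openpyxl package installed; at 130K rows this runs in seconds, which beats pasting batches into a spreadsheet by hand.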

  • Hi, we have an alternate "list view" version of every one of our search results pages. The list view has its own URL, indicated by a URL parameter. I'm concerned about wasting our crawl budget on all these list view pages, which effectively double the number of pages that need crawling. When they were first launched, I had the noindex meta tag placed on all list view pages, but I'm concerned that they are still being crawled. Should I therefore go ahead and also apply a robots.txt disallow on that parameter to ensure that no crawling occurs? Or will Googlebot/Bingbot stop crawling those pages over time? I assume that noindex still means "crawl"... Thanks 🙂

    | ntcma
    0
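On the specific worry above: yes, noindex still requires crawling (the bot has to fetch the page to see the tag), though crawl frequency on long-noindexed pages does tend to fall over time. If crawl budget is the priority, a wildcard disallow works for Googlebot and Bingbot, but note the trade-off in the comment (the parameter name here is hypothetical):

    User-agent: *
    # Assumes the list view is flagged with ?view=list; substitute the real parameter
    Disallow: /*?*view=list
    # Trade-off: once disallowed, the bot can no longer see the noindex tag on these
    # pages, so any already-indexed list-view URLs may linger in the index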

  • Hi, I am hearing that Penguin 3.0 has rolled out within the last hour or so. Can anyone confirm this, and secondly, has anyone been affected? My rankings don't seem to have changed, and neither have my main competitors', but it might just be too early to tell. Thanks, Andy

    | Andy-Halliday
    3

  • Hi, It's a duplicate content question.  We sell products (vacation rental homes) on a number of websites as well as our own. Generally, these affiliate sites have a higher domain authority and much more traffic than our site. The product content (text, images, and often availability and rates) is pulled by our affiliates into their websites daily and is exactly the same as the content on our site, not including their page structure. We receive enquiries by email and any links from their domains to ours are nofollow. For example, all of the listing text on mysite.com/listing_id is identical to my-first-affiliate-site.com/listing_id and my-second-affiliate-site.com/listing_id. Does this count as duplicate content and, if so, can anyone suggest a strategy to make the best of the situation? Thanks

    | McCaldin
    0

  • The following PDF is cached by Google: http://www.sba.gov/sites/default/files/files/REFERRAL%20LIST%20OF%20BOND%20AGENCIES_Florida.pdf However, Open Site Explorer is not listing any of the links found in it. With such an authoritative site, I would think Google would value this, right? None of the sites listed rank well, though, and Open Site Explorer's inability to see the links makes me wonder if Google gives these sites any value at all. Is there any link juice or brand-mention value here for Google?

    | TheDude
    0

  • I have a site that's 2 years old and it has completely disappeared from the search results. It wasn't a manual action, because I never got a notice in Webmaster Tools. Could the site reappear? A lot of people on oDesk say the site will reappear after a few days.

    | The_Kiwi_Man
    0

  • Hi, I'm quite inexperienced with Drupal (normally an Umbraco user!) and I'm having some difficulty with the meta tags in the CMS. I have been applying meta titles and descriptions to the individual pages; however, they only appear when I preview the page, not when the page is saved. When I go into the meta tag section located at /admin/config/search/metatags, I am given a list of settings including Global: Front Page and Node. I'm sure the reason it keeps reverting the meta tags is to do with this, but I'm not sure what to change to apply my own. Thanks in advance.

    | TheZenAgency
    1

  • Hi Everyone, I have a client who has two website platforms; one of them is mandated by the manufacturer, and the other is the one we use and is linked up to our Google Plus/Maps/etc. accounts. The manufacturer-mandated one is showing up in the Google Knowledge Graph, and this is not ideal for us. Unfortunately, we cannot get rid of that site because it is mandated. So how do we go about fixing this issue? I had a few ideas, and I'd like to know if they would work. If you can think of something that's outside the box, I'd appreciate it.
    1. Put a rel=canonical across the website.
    2. Remove all keywords that might trigger it to show up in the Knowledge Graph from the URL of the non-ideal site.
    3. Go for a .net or .us domain. Do these kinds of domains have less authority, and are they less likely to show up in a Google search?
    Thanks!

    | oomdomarketing
    0

  • I could use a second opinion about moving content from some inactive sites to my main site. Once upon a time, we had a handful of geotargeted websites set up targeting various cities that we serve. This was in addition to our main site, which was mostly targeted to our primary office and ranked great for those keywords. Our main site has plenty of authority, has been around for ages, etc. We built out these geo-targeted sites with some good landing pages and kept them active with regularly scheduled blog posts which were unique and either interesting or helpful. Although we had a little success with these, we eventually saw the light and realized that our main site was strong enough to rank for these cities as well, which made life a whole lot easier, not to mention a lot less spammy. We've got some good content on these other sites that I'd like to use on our main site, especially the blog posts. Now that I've got it through my head that there's no such thing as a duplicate content penalty, I understand that I could just start moving this content over so long as I put a 301 redirect in place where the content used to be on these old sites. Which leads me to my question. Our SEO was careful not to have these other websites pointing to our main site to avoid looking like we were trying to do something shady from a link building perspective. His concern is that these redirects would undermine that effort and having a bunch of redirects from a half dozen sites could end up hurting us somehow. Do you think that is the case? What he is suggesting we do is remove all of the content that we'd like to use and use Webmaster Tools to request that this content be removed from the index. Then, after the sites have been recrawled, we'll check for ourselves to confirm they've been removed and proceed with using the content however we'd like. Thoughts?

    | LeeAbrahamson
    0

  • My company is conducting a link detox for a client, and it seems like every tool we utilize gives us a different answer on how many links we actually have. The numbers range anywhere from 4,000 to 200,000. Does anyone have any suggestions as to which tools will give us an accurate count, and will also email the webmasters on our behalf requesting the links' removal? We are trying to make this process as automated as possible to save time on our end.

    | lightwurx
    0

  • Based on the following article, http://homebusiness.about.com/od/yourbusinesswebsite/a/google-alerts.htm, in order to check if you are included in Google News you need to run site:domain.com and click the News search tab. If you are not there, then... I ran the test on Moz and got no results, which surprised me. The next step, according to https://support.google.com/news/publisher/answer/40787?hl=en#ts=3179198, is to submit your site for inclusion. Should I? Will it help? P.S.
    This is a follow-up question to: http://moz.com/community/q/what-makes-a-site-appear-in-google-alerts-and-does-it-mean-anything

    | BeytzNet
    0

  • My colleague has owned a domain (A) for about 10 years that he does not use. The domain's content is the same as my company's website (B) content.
    Question: can I 301 redirect domain A to domain B's homepage, or is it better that he just closes down his website, since this would not be SEO best practice? Thank you.

    | khi5
    0
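If the decision above is to redirect, the simplest server-level form is a catch-all 301, for example with Apache's mod_alias (domain B's hostname here is hypothetical); this sends every request on domain A, including its homepage, to B's homepage:

    # In domain A's Apache config or .htaccess
    RedirectMatch 301 ^/.*$ http://www.domainB.com/

Since domain A apparently attracts no links or traffic of its own, the SEO stakes are low either way; the 301 mainly stops the duplicate copy from being the live version and preserves any residual value.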

  • Hi guys and girls! Just putting a new site live; we changed the URL from one thing to another, and I created a 301 file redirecting the URLs like-for-like. The developer installing it has created a different file, with rules like:
    RewriteRule ^page/ http://www.site/page [R=301,L]
    RewriteRule ^/page/ http://www.site/page [R=301,L]
    What's the difference? The page redirects, but is there a difference between the 301 redirect and this URL rewrite in terms of SEO and link value?

    | shloy23-294584
    0

  • Sorry, this has been worked out.

    | ColeLusby
    0

  • Hi All, I recently started using Google Alerts more and more, and while the sites I support never appear there (not surprising), I recently noticed a few very poor, low-quality sites that do. This site, for example, appears quite a bit in its niche. So to my questions... What makes a site appear in Google Alerts? And does it mean anything? Thanks

    | BeytzNet
    0

  • Hi, I found some excessive cross domain linking from a separate blog to the main company website. It sounds like best practice is to cut back on this, but I don't have any proof of this. I'm cautious about cutting off existing links; we removed two redundant domains that had a huge number of links pointing to the main site almost 1 year ago, but didn't see any correlated improvement in rankings or traffic per se. Hoping some people can share a success story after pruning off excessive cross linking either for their own website or for a client's. Thanks 🙂

    | ntcma
    0

Got a burning SEO question?

Subscribe to Moz Pro to gain full access to Q&A, answer questions, and ask your own.



