
Welcome to the Q&A Forum

Browse the forum for helpful insights and fresh discussions about all things SEO.

Moz Q&A is closed.

After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. While we're not completely removing the content (many posts will still be viewable), we have locked both new posts and new replies.

Category: Moz Pro

Discuss the Moz Pro tools with other users.

Subcategories

  • Chat keyword research strategy and how Keyword Explorer helps you do your best work.

  • Cover all things links and the industry-leading link data discoverable in Link Explorer.


  • Hi, this is my first question in here and I truly hope someone will be able to help. It's quite a detailed problem and I'd love to be able to fix it through your kind help. It regards .htaccess files, robots.txt files and Moz 902 errors.

    In October I created a WordPress website from what was previously a rather dated non-WordPress site. I built the new site on a subdomain I created on the existing site, so that the live site could remain live whilst I worked on the subdomain. The site I built on the subdomain is now live, but I am concerned about the old .htaccess and robots.txt files and wonder if I should just delete them, leaving only the new ones on the new site. I created new .htaccess and robots.txt files on the new site and have left the old ones in place. All the old content files still sit on the server under a folder called 'old files', so I am assuming these aren't affecting matters. I access the .htaccess and robots.txt files under 'public_html' via FTP.

    I did a Moz crawl and was astonished to get a 902 network error saying that it wasn't possible to crawl the site, but Moz later alerted me that the report was ready. It shows 641 crawl errors (449 medium priority, 192 high priority, zero low priority). Each of the errors seems to have status code 200, and they apply mainly to the images on each of the pages, e.g. domain.com/imagename. The new website is built around the 907 Theme, which has page sections and parallax sections on the home page and throughout the site. To my knowledge the content and images on the pages are not duplicated, because I have made each page as unique and original as possible, yet the report says 190 pages are duplicated. I have no clue how this can be or how to approach fixing it.

    Since the new site was launched in October, approx 50% of incoming traffic has dropped off at the home page, and that is still the case, but the site continues to get new traffic according to Google Analytics. However, Bing, Yahoo and Google show a low level of indexing and exposure, which may be indicative of the search engines having difficulty crawling the site. Google Webmaster Tools reports no crawl errors. W3TC is a WordPress caching plugin I installed just a few days ago to improve page speed; I am not querying anything here about W3TC unless someone spots that it might be a problem, but as I said, traffic has been dropping off when visitors arrive at the home page. The Yoast SEO plugin is being used. The pages on the subdomain point to the live domain, as explained to me by the person who did the site migration.

    I'd like the site to be free of pages and files that shouldn't be there, and I feel the site needs a clean-up, as well as knowing whether the robots.txt and .htaccess files from the old site should actually be there or should be deleted. OK, here goes with the information in the files. Site 1) refers to the current website, site 2) to the subdomain, and site 3) to the folder that contains all the files from the old non-WordPress file structure.

    **************** 1) .htaccess on the current site ****************

    # BEGIN W3TC Browser Cache
    <IfModule mod_deflate.c>
    <IfModule mod_headers.c>
    Header append Vary User-Agent env=!dont-vary
    </IfModule>
    <IfModule mod_filter.c>
    AddOutputFilterByType DEFLATE text/css text/x-component application/x-javascript application/javascript text/javascript text/x-js text/html text/richtext image/svg+xml text/plain text/xsd text/xsl text/xml image/x-icon application/json
    <IfModule mod_mime.c>
    # DEFLATE by extension
    AddOutputFilter DEFLATE js css htm html xml
    </IfModule>
    </IfModule>
    </IfModule>
    # END W3TC Browser Cache

    # BEGIN W3TC CDN
    <FilesMatch "\.(ttf|ttc|otf|eot|woff|font.css)$">
    <IfModule mod_headers.c>
    Header set Access-Control-Allow-Origin "*"
    </IfModule>
    </FilesMatch>
    # END W3TC CDN

    # BEGIN W3TC Page Cache core
    <IfModule mod_rewrite.c>
    RewriteEngine On
    RewriteBase /
    RewriteCond %{HTTP:Accept-Encoding} gzip
    RewriteRule .* - [E=W3TC_ENC:_gzip]
    RewriteCond %{HTTP_COOKIE} w3tc_preview [NC]
    RewriteRule .* - [E=W3TC_PREVIEW:_preview]
    RewriteCond %{REQUEST_METHOD} !=POST
    RewriteCond %{QUERY_STRING} =""
    RewriteCond %{REQUEST_URI} /$
    RewriteCond %{HTTP_COOKIE} !(comment_author|wp-postpass|w3tc_logged_out|wordpress_logged_in|wptouch_switch_toggle) [NC]
    RewriteCond "%{DOCUMENT_ROOT}/wp-content/cache/page_enhanced/%{HTTP_HOST}/%{REQUEST_URI}/_index%{ENV:W3TC_PREVIEW}.html%{ENV:W3TC_ENC}" -f
    RewriteRule .* "/wp-content/cache/page_enhanced/%{HTTP_HOST}/%{REQUEST_URI}/_index%{ENV:W3TC_PREVIEW}.html%{ENV:W3TC_ENC}" [L]
    </IfModule>
    # END W3TC Page Cache core

    # BEGIN WordPress
    <IfModule mod_rewrite.c>
    RewriteEngine On
    RewriteBase /
    RewriteRule ^index.php$ - [L]
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteRule . /index.php [L]
    </IfModule>
    # END WordPress

    (I have 7 301 redirects in place pointing old page URLs to the new page URLs.)

    # Force non-www:
    RewriteEngine on
    RewriteCond %{HTTP_HOST} ^www.domain.co.uk [NC]
    RewriteRule ^(.*)$ http://domain.co.uk/$1 [L,R=301]

    **************** 1) robots.txt on the current site ****************

    User-agent: *
    Disallow:
    Sitemap: http://domain.co.uk/sitemap_index.xml

    **************** 2) .htaccess in the subdomain folder ****************

    # Switch rewrite engine off in case this was installed under HostPay.
    RewriteEngine Off
    SetEnv DEFAULT_PHP_VERSION 53
    DirectoryIndex index.cgi index.php

    # BEGIN WordPress
    <IfModule mod_rewrite.c>
    RewriteEngine On
    RewriteBase /WPnewsiteDee/
    RewriteRule ^index.php$ - [L]
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteRule . /subdomain/index.php [L]
    </IfModule>
    # END WordPress

    **************** 2) robots.txt in the subdomain folder ****************

    This robots.txt file is empty.

    **************** 3) .htaccess in the Old Site folder ****************

    Deny from all

    **************** 3) robots.txt in the Old Site folder ****************

    User-agent: *
    Disallow: /

    I have tried to be thorough, so please excuse the length of my message. I really hope one of you great people in the Moz community can help me with a solution. I have SEO knowledge and I love SEO, but I have not come across this before and I really don't know where to start. Best regards to you all, and thank you for reading.

    | SEOguy1
    0
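For questions like the one above, a quick first step is to verify what each robots.txt actually allows before digging into the crawl report. A minimal sketch using Python's standard library; `domain.co.uk` stands in for the real domain, as in the question:

```python
from urllib.robotparser import RobotFileParser

def build_parser(robots_txt: str) -> RobotFileParser:
    """Parse a robots.txt body directly (no network fetch needed)."""
    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return rp

# Site 3 (the old-files folder) blocks everything; site 1 (the live site) blocks nothing.
old_site = build_parser("User-agent: *\nDisallow: /")
live_site = build_parser("User-agent: *\nDisallow:")

print(old_site.can_fetch("rogerbot", "http://domain.co.uk/old-files/page.html"))  # False
print(live_site.can_fetch("rogerbot", "http://domain.co.uk/anypage"))             # True
```

This confirms the old folder's `Disallow: /` keeps crawlers out of the archived files while the live site's empty `Disallow:` leaves everything crawlable.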

  • Hi,
    I'd like to offer free website health checks (basic audits) and am wondering what tools other people use for this? It would be good to use something that presents the data well. Moz is great but it gets expensive if I want to offer these to many businesses in the hope of taking on just a few as clients and doing a full manual audit for them. So far I've tried seositecheckup.com (just checks a single page though), metaforensics.io and mysiteauditor. Thanks!

    | CamperConnect14
    0

  • SEOmoz says their Domain / Page Authority is logarithmic, meaning that lower rankings are easier to get, higher rankings harder to get. Makes sense. But does anyone know what logarithmic equation they use? I'm using the domain and page authority as one metric in amongst other metrics in my keyword analysis. I can't have some metrics linear, others exponential and the SEOmoz one logarithmic.

    | eatyourveggies
    0
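Moz has not published the exact equation behind Domain/Page Authority, only that the scale is logarithmic. If you need to mix it with linear metrics, one common workaround is to exponentiate the score under an assumed base; the base and divisor below are purely illustrative assumptions, not Moz's actual formula:

```python
def log_to_linear(score: float, base: float = 10.0) -> float:
    """Map a logarithmic 0-100 score onto an (assumed) linear scale.

    Assumption for illustration only: every 10 points of score equals
    one order of magnitude on the underlying scale.
    """
    return base ** (score / 10.0)

# Under this assumption, DA 50 is 10x "bigger" than DA 40, not 25% bigger:
print(log_to_linear(40))  # 10000.0
print(log_to_linear(50))  # 100000.0
```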

  • Hello, how can I see which YouTube videos that have my domain's URL in their description drive traffic to my domain? I can see in GA how many visitors come from YouTube to my domain, but I can't see which YouTube video pages have driven the traffic. Any help?

    | xeonet32
    0

  • So the ratio is MozTrust to MozRank, but what is this good for?  What can I deduce from this and what can I use it for?

    | MarloSchneider
    2

  • Hello Moz! I just subscribed to your Moz Pro program. Amazing stuff! In Open Site Explorer I found a number of links to my site from a page with very high page authority and high domain authority, but also a high spam score (8 or 9, one with a 10). I say multiple spam scores because, strangely, there are what appear to be variations of the same URL, and each one is counted as a link. For instance, there's an abc.linkstomysite.com and xyz.linktomysite.com, and 123.linktomysite.com... there are about 15 of these (all with the spam scores mentioned above)! This must be some old SEO work I paid for back in the prehistoric SEO days. However, my fear is the following: removing these links and then losing some potentially strong link juice. I don't have many high-DA or high-PA links to my site, and these are some major ones. The domain in question, "linktomysite.com", when entered into OSE, has a spam score of only 4, a domain authority of 45 and a page authority of 37. My site has a spam score of 2 and no messages from Google regarding a penalty, but an overall reduction in Google traffic over the years (it just keeps slowly dropping, as if a weight is pulling me down). What do you think, should I leave them or remove them? The linkstomysite page is just a LONG page full of links with short descriptions, nothing of value, but with a relatively old domain age. Most important for me is keeping at least some ranking/visibility while I personally work on building quality links and helpful content. Thanks!

    | DavidC.
    0

  • Hi, what is the best tool out there for doing a full brand audit for a company? They've created at least 15 websites and possibly own many domain names, which have been ordered by different people over many years. I need to find the quickest and best tool to do this with. Any suggestions would be great.

    | Cocoonfxmedia
    0

  • Currently I am using the Moz Pro tool; under Moz Analytics >> Moz Competitive Link Metrics >> History there is a graph called "Linking C-Blocks". Please help me understand Linking C-Blocks: what they are, how to build them, and how they are defined.

    | shankar333
    4
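For context on the question above: a "C-block" is the first three octets of an IPv4 address (a /24 network). "Linking C-Blocks" counts how many distinct /24 networks contain sites linking to you, as a rough proxy for how independent your linking sites are, since sites on the same block are often on the same host. A minimal sketch:

```python
def c_block(ip: str) -> str:
    """Return the 'C-block' of an IPv4 address: its first three octets (/24)."""
    return ".".join(ip.split(".")[:3])

print(c_block("203.0.113.45"))   # 203.0.113
print(c_block("203.0.113.200"))  # 203.0.113  (same C-block: counts once)
print(c_block("198.51.100.7"))   # 198.51.100 (a different C-block)
```

You "build" linking C-blocks simply by earning links from sites hosted on different networks rather than many links from one host.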

  • Hey everybody, I'm going through all my sites and disavowing crap links. However, I'm having trouble distinguishing which high DA sites to disavow. What would you do? For example:
    https://moz.com/researchtools/ose/spam-analysis?site=busca.starmedia.com&target=domain&source=subdomain&page=1&sort=spam_score and https://moz.com/researchtools/ose/spam-analysis?site=cc879fe.activerain.com&target=domain&source=subdomain&page=1&sort=spam_score They both have tons of backlinks - both good and crap. The first has a DA of 72 and a Moz spam score of 4/17 and the second has a DA of 86 and a Moz spam score of 9/17

    | MEllsworth
    1

  • Can you add a user to your Moz account, or to a single campaign?

    | cschwartzel
    1

  • Hi there. Another obvious question to some, I hope. I ran my first report using the Moz crawler and I have a bunch of pages with temporary redirects showing up as a medium-level issue. Trouble is, the pages don't exist, so they are being redirected to my custom 404 page. For example, the report calls up a URL from lord only knows where: www.domain.com/pdf/home.aspx. This doesn't exist; I have only one home.aspx page and it's in the root directory. But it gives a temporary redirect to my 404 page, as I would expect, and that then leads to a Moz error as outlined. So basically you could make up any random URL and it would give this error, and I am trying to work out how to deal with it before Google starts to notice, or before a competitor starts throwing all kinds of made-up URLs at my site to generate these errors. Any steering on this would be much appreciated!

    | Raptor-crew
    0
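A situation like the one above is usually settled by checking what status code a made-up URL actually returns: a hard 404 is fine, while a 302 to a custom error page is exactly what triggers "temporary redirect" warnings. A rough diagnostic sketch using only the standard library (`domain.com` is a placeholder; the classification thresholds are the common convention, not a Moz-documented rule):

```python
import urllib.request, urllib.error, uuid

class NoRedirect(urllib.request.HTTPRedirectHandler):
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None  # surface the raw 3xx status instead of following it

def raw_status(url: str) -> int:
    """Fetch a URL without following redirects and return its status code."""
    opener = urllib.request.build_opener(NoRedirect)
    try:
        return opener.open(url, timeout=10).status
    except urllib.error.HTTPError as e:
        return e.code

def verdict(status: int) -> str:
    """Classify what a crawler sees for a URL that should not exist."""
    if status == 404:
        return "good: hard 404"
    if status in (301, 302, 303, 307, 308):
        return "problem: redirect to an error page instead of a 404"
    return f"unexpected status {status}"

# Usage (network): probe a URL that cannot exist and classify the result, e.g.
# print(verdict(raw_status("http://www.domain.com/" + uuid.uuid4().hex + ".aspx")))
```

The usual fix is to configure the server to serve the custom error page with a 404 status directly, rather than redirecting to it.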

  • What happens if you have a site in your Moz Pro campaign that has more than 50,000 pages? Would it be better to choose a sub-folder of the site to get a thorough look at that sub-folder? I have a few different large government websites that I'm tracking to see how they are faring in rankings and SEO. They are not my own websites; I want to see how these agencies are doing compared to what the public searches for on the technical topics and social issues the agencies manage. I'm an academic looking at science communication. I am in the process of re-setting up my campaigns to get better data than I have been getting. I am a newbie to SEO, and the campaigns I slapped together a few months ago need to be set up better: all on the same day, making sure I've set whether www or non-www counts for rankings, refining my keywords, and so on. I am stumped on what to do about the agency websites being really huge, and what the options are for getting good data in light of the 50,000-page crawl limit. Here is an example of what I mean. To see how the EPA is doing in searches related to air quality, ideally I'd track all of the EPA's web presence. www.epa.gov has 560,000 pages; if I put www.epa.gov into a campaign, what happens given the site has so many more pages than the 50,000 crawl limit? What do I miss out on? Can I "trust" what I get? www.epa.gov/air has only 1,450 pages, so if I track that in a campaign, the crawl will cover the sub-folder completely and I get a complete picture of this air-focused section. But (1) I'll miss air-related pages in other sub-folders of www.epa.gov, and (2) it seems like I have much of the 50,000-page crawl limit that I'm not using and could be using. (However, maybe that's not quite true: I'd also be tracking other sites as competitors, e.g. non-profits that advocate on air quality and industry air-quality sites, and maybe those competitors count towards the 50,000-page crawl limit and would get me up to it? How do the competitors you choose figure into the crawl limit?) Any opinions on which I should do in general in this kind of situation? The small sub-folder vs. the full humongous site, or is there some other way to go here that I'm not thinking of?

    | scienceisrad
    0

  • Hello! Moz reports that my site has around 380 duplicate content pages. Most of them come from dynamically generated URLs with specific parameters. I have sorted this out for Google in Webmaster Tools (the new Google Search Console) by blocking the pages with these parameters. However, Moz is still reporting the same number of duplicate content pages and, to stop it, I know I must use robots.txt. The trick is that I don't want to block every page, just the pages with specific parameters, because among these 380 pages there are other pages with no parameters (or different parameters) that I need to take care of. Basically, I need to clean this list to be able to use the feature properly in the future. I have read through the Moz forums and found a few related topics, but there is no clear answer on how to block only pages with specific URLs. Therefore, I have done my research and come up with these lines for robots.txt:

    User-agent: dotbot
    Disallow: /*numberOfStars=0

    User-agent: rogerbot
    Disallow: /*numberOfStars=0

    My questions: 1. Are the above lines correct, and would they block Moz (dotbot and rogerbot) from crawling only pages that have the numberOfStars=0 parameter in their URLs, leaving other pages intact? 2. Do I need an empty line between the two groups (between "Disallow: /*numberOfStars=0" and "User-agent: rogerbot"), or does it even matter? I think this would help many people, as there is no clear answer on how to block crawling of only pages with specific URLs. Moreover, this should be valid for any robot out there. Thank you for your help!

    | Blacktie
    0
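One caveat for rules like those in the question above: the `*` wildcard is a crawler convention (Googlebot and most commercial bots honour it) rather than part of the original robots.txt standard, so it's worth testing how a pattern actually matches. A small sketch of the usual wildcard semantics, using the question's hypothetical `numberOfStars` URLs:

```python
import re

def blocked_by(pattern: str, path: str) -> bool:
    """Does a robots.txt Disallow pattern (with * wildcards) match a URL path?

    Mimics the common crawler convention: * matches any run of characters,
    and the pattern is anchored at the start of the path.
    """
    regex = "^" + re.escape(pattern).replace(r"\*", ".*")
    return re.match(regex, path) is not None

rule = "/*numberOfStars=0"
print(blocked_by(rule, "/hotels?numberOfStars=0"))  # True  -> disallowed
print(blocked_by(rule, "/hotels?numberOfStars=3"))  # False -> still crawlable
print(blocked_by(rule, "/hotels"))                  # False -> still crawlable
```

So yes, a rule of that shape blocks only URLs containing `numberOfStars=0`. On the second question: separating user-agent groups with a blank line is the conventional, safest formatting.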

  • I have a website that features more than 9,000 pages. I'm trying to figure out which ones have missing or incorrect title tags. Should I start with Screaming Frog?

    | SapphireCo
    0
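Screaming Frog does list missing, empty and duplicate titles out of the box, so it's a reasonable place to start. For those who would rather script the check, the per-page test is small; the 60-character threshold below is a rule-of-thumb assumption, not a fixed limit:

```python
import re
from typing import Optional

def title_issues(html: str) -> Optional[str]:
    """Return a problem description for a page's <title>, or None if it looks OK."""
    m = re.search(r"<title[^>]*>(.*?)</title>", html, re.IGNORECASE | re.DOTALL)
    if m is None:
        return "missing title tag"
    title = m.group(1).strip()
    if not title:
        return "empty title tag"
    if len(title) > 60:
        return "title longer than ~60 characters (may be truncated in SERPs)"
    return None

print(title_issues("<html><head></head><body>no title</body></html>"))  # missing title tag
print(title_issues("<html><head><title>  </title></head></html>"))      # empty title tag
print(title_issues("<html><head><title>Our Products</title></head></html>"))  # None
```

Run against a URL list (e.g. from the sitemap) this flags every page needing attention.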

  • I sell decorative nutcrackers of various sizes. At this time, I use the term "inches". I was looking at my competitor the other day, and when I did a search on his site, using the term "inches" did not return a result; I was forced to search for 10" nutcracker instead of "10 inch nutcracker". Is there a preferable usage for SEO purposes? Thanks! My site: http://www.nutcrackerballetgifts.com/category/5/10-Inch-Nutcrackers.html His site: http://www.kurtadler.com/Search?search=10"+nutcracker

    | NutcrackerBalletGifts
    0

  • Hello guys and gals, this is a very odd one. I have a client's website, and most of the crawlers I'm using are giving me weird/wrong results. For now let's focus on Screaming Frog: when I crawl the site it lists some meta titles as missing (not all of them, though), yet on the site itself the titles are not missing, and Google seems to be indexing the site fine. The robots.txt is not affecting the site (I've also tried changing the user agent). The other odd thing is that SF gives a 200 code but reports "connection refused" as the status, even though it's returning data. I'm unable to share the client's site. Has anyone else seen this very odd issue, and found a solution for it? Many thanks in advance for any help.

    | GPainter
    0

  • How could I begin to research why we have fewer total links and a lower domain authority? Our DA has gone down two points and our total links dropped by a few thousand. I'd like to find out why that happened, but I'm just not sure where to start. Thanks!

    | Sika22
    0

  • I'm frustrated, so I want to ask a stupid question. My site, www.seadwellers.com, outranks my biggest competitor, www.rainbowreef.us, in most Moz categories EXCEPT Facebook likes (he has a ton). And yet rainbowreef.us outranks me for most keywords on Google?! I know it's not simple, but can anyone take a quick peek and give me any insight as to why? Example: for the keyword "Dive Key Largo" he is #1 and I am #5, and that's typical for the most important keywords!

    | sdwellers
    0

  • Hey Moz'ers! I just added a new site to my Moz Pro account, and when I got the crawl report back there were a ton of 404 errors. I realize the best way to fix these is to manually go through every single error and see what the issue is. I just don't have time right now, and I don't have a team member who can jump on this either, but I realize this will be a huge boost for this client if/when I get them resolved. So my question is: is there a quicker way to get these resolved? Is there an outsourcing company that can fix my client's errors correctly? Thanks for the help in advance :)

    | 2Spurs
    0

  • Technical help required, please!
    In our Duplicate Content Pages Report I see a lot of duplicate pages created by one URL plus several versions of the same page with dynamic content, for example:
    http://www.georgebrown.ca/immigranteducation/programs
    http://www.georgebrown.ca/school-program.aspx?id=1909&Sortid=Study
    http://www.georgebrown.ca/school-program.aspx?id=1909&Sortid=Term
    http://www.georgebrown.ca/school-program.aspx?id=1909&Sortid=Certification
    http://www.georgebrown.ca/school-program.aspx?id=1909&Sortid=Title
    How do we solve it?

    | GBCweb
    0
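The usual fix for sort-parameter duplicates like those above is a rel=canonical tag on every sorted variant pointing at the base URL. The normalization itself is simple; a sketch, assuming `Sortid` is the only parameter to drop:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def canonical_url(url: str, drop_params=("Sortid",)) -> str:
    """Strip sort/filter parameters so every variant maps to one canonical URL."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k not in drop_params]
    return urlunsplit((parts.scheme, parts.netloc, parts.path, urlencode(kept), ""))

variants = [
    "http://www.georgebrown.ca/school-program.aspx?id=1909&Sortid=Study",
    "http://www.georgebrown.ca/school-program.aspx?id=1909&Sortid=Term",
]
for v in variants:
    print(canonical_url(v))  # both -> http://www.georgebrown.ca/school-program.aspx?id=1909
```

Emitting that canonical URL in a `<link rel="canonical">` tag on each variant tells crawlers the sorted pages are views of one page, not duplicates.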

  • Hey guys, I have been managing a few websites and have input them into Moz for crawl reports, etc. For a while I noticed we were getting a huge number of errors for missing meta tags, numbering in the 200s. The sites were in place before I got here, and on a lot of the older posts no one had even attempted to include tags, page links or anything else. As they are all WordPress sites and already had the Yoast SEO plugin installed, I went through each post and media file one at a time and updated their meta tags via the plugin. I did this personally, so I know each one was added and saved; however, the Moz crawl reports continue to show that we are missing roughly 200 meta tags. I've seen a huge drop-off in 404 errors and the like since I double-checked everything on the sites, but the meta tag errors persist. Is it the case that Moz is not recognizing the tags when it crawls because I used the Yoast plugin? Or would you say the plugin is the issue, and I should find another way to add meta tags to the pages and posts? My main concern is that if Moz is having issues crawling the sites, is Google also seeing the same thing? The URLs include:
    sundancevacationsblog.com
    sundancevacationsnews.com
    sundancevacationscharities.com
    Any help would be appreciated!

    | MOZ.info
    0

  • Good afternoon everyone. I wanted to pose a question to the group about keywords and the on-page optimization "grade a page" tool. I have a list of keywords that I am trying to rank for. Some of them are not ranked in the top 50, so the keyword ranking tool shows the 51+ message in the rank column. For the keywords that are ranked, I can try to improve them by entering the URL and keyword into the grade-a-page tool, which gives me a score and suggestions on how to improve it. With that said, is there an easy way to find out which pages I should be optimizing for the keywords that rank at 51+, besides typing the keywords into Google and seeing which URL it associates with each keyword? I hope the question above is clear.

    | trumpfinc
    0

  • My website is in WordPress and I was using Premium SEO Pack, and from what I've read it is not as good as I thought it would be. So I would like to hear from someone who has used that plugin or the Yoast plugin: which is the best option for improving my website's ranking?

    | dawgroup
    1

  • Hello, first post here. I am an IT infrastructure engineer who mostly does application and deployment work, but I'm new to SEO. That said, I am a firm believer in the Deming wheel and defined processes/flow charts. I have a website up, built to best practice; it's a Magento CE open-source ecommerce store. After some initial research, it seems I really need to define my keywords before I start branding my landing pages and such, so I am going to have to go back, implement the keywords and evaluate. As I read around, it seems most SEO people try their keywords in Google AdWords first, then implement? I am hoping to find step-by-step process or flow sheets on how successful keyword research is done, with explanations of each step: how many keywords to start with per landing page, how many to get into the H1 headers, paragraphs and URL, and how long to leave keywords in place to test. Also, I am a start-up competing with the big boys in my market space. I know I can't compete for the big keywords in my market, so what is my best strategy for getting any kind of ranking as a small business in a global market? SEO is a mysterious, intriguing thing to me! It very much reminds me of one of my favourite whimsical quotes, which I will leave you with now. Thanks for reading. -Alex "Invention, my dear friends, is 93% perspiration, 6% electricity, 4% evaporation, and 2% butterscotch ripple." - Willy Wonka

    | iamgreenminded
    0

  • Over the past couple of months I have seen a couple of my campaigns have key phrases that swing from week to week between the top 3 and >51. I worked pretty hard on the sites to get them mostly #1 for their preferred key phrases, and haven't been working that hard lately, but I'm not sure what to make of it. It looks as though the key phrases that dropped to >51 this week will probably be back in the top 3. It makes it difficult when you are trying to prepare a report for your client. Any suggestions are welcomed, and Happy New Year!

    | chill986
    1

  • For the last week we have hosted Live Chat on www.freedomltd.com for desktop and tablet. The latest Moz ranking report shows 4 increased, 86 decreased and 170 unchanged keywords. Could Live Chat have anything to do with this?

    | henandstag
    0

  • A search term in MOZ shows the monthly search volume to be 49K. In Google, the same term shows the search volume at only 1300 monthly searches. Which do I trust? Thanks, Don

    | rcman
    0

  • I used the Internet Officer tool and it shows a 302 redirect, but I checked the redirects in cPanel and there are none, and there are none in the .htaccess either. I don't know where else to look 😞 The URL is http://servicioshosting.com. Can you guys help me? I can't set up a campaign because Google can't crawl the website, and I can't set up the Facebook OpenGraph tags because of the redirect.

    | vanessacolina
    0
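When neither cPanel nor .htaccess shows a redirect, it can be coming from the application (e.g. a CMS) or the hosting layer, so tracing the chain hop by hop helps locate it. A plain-HTTP sketch using only the standard library:

```python
import http.client
from typing import List, Optional, Tuple
from urllib.parse import urlsplit

REDIRECT_CODES = {301, 302, 303, 307, 308}

def next_hop(status: int, location: Optional[str]) -> Optional[str]:
    """Pure core: given one response, where does the chain go next, if anywhere?"""
    return location if status in REDIRECT_CODES and location else None

def redirect_chain(url: str, max_hops: int = 10) -> List[Tuple[int, str]]:
    """Follow a URL hop by hop so a hidden 302 shows itself (plain HTTP only)."""
    hops = []
    for _ in range(max_hops):
        parts = urlsplit(url)
        conn = http.client.HTTPConnection(parts.netloc, timeout=10)
        conn.request("HEAD", parts.path or "/")
        resp = conn.getresponse()
        hops.append((resp.status, url))
        target = next_hop(resp.status, resp.getheader("Location"))
        conn.close()
        if target is None:
            break
        url = target
    return hops

# Usage (network): print(redirect_chain("http://servicioshosting.com"))
```

Each `(status, url)` pair in the output shows where the 302 is issued and where it points, which narrows down which layer to investigate.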

  • I'd like to crawl our ecommerce site to see how deep (clicks from home page) pages are. I want to verify that every category, sub-category, and product detail page is within three clicks of the home page for googlebot. Suggestions? Thanks!

    | Garmentory
    0
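Screaming Frog reports crawl depth directly, but the check above is also easy to script: a breadth-first crawl from the home page gives the fewest-clicks distance of every internal URL. A minimal sketch (no robots.txt handling or politeness delays, which a real crawl should add):

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin, urlsplit
from urllib.request import urlopen

class LinkParser(HTMLParser):
    """Collect the href of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl_depths(home: str, max_depth: int = 3):
    """Breadth-first crawl: depth[url] = fewest clicks from the home page."""
    site = urlsplit(home).netloc
    depth = {home: 0}
    queue = deque([home])
    while queue:
        url = queue.popleft()
        if depth[url] >= max_depth:
            continue  # don't expand beyond the depth we care about
        try:
            html = urlopen(url, timeout=10).read().decode("utf-8", "replace")
        except OSError:
            continue
        parser = LinkParser()
        parser.feed(html)
        for href in parser.links:
            nxt = urljoin(url, href).split("#")[0]
            if urlsplit(nxt).netloc == site and nxt not in depth:
                depth[nxt] = depth[url] + 1
                queue.append(nxt)
    return depth

# Usage (network): depths = crawl_depths("https://www.example-store.com/")
# Any known category/product URL missing from `depths` is more than 3 clicks deep.
```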

  • I am using the Magento shopping cart, and 99% of my duplicate content errors come from the login page.  The URL looks like: http://www.site.com/customer/account/login/referer/aHR0cDovL3d3dy5tbW1zcGVjaW9zYS5jb20vcmV2aWV3L3Byb2R1Y3QvbGlzdC9pZC8xOTYvY2F0ZWdvcnkvNC8jcmV2aWV3LWZvcm0%2C/ Or, the same url but with the long string different from the one above.  This link is available at the top of every page in my site, but I have made sure to add "rel=nofollow" as an attribute to the link in every case (it is done easily by modifying the header links template). Is there something else I should be doing?  Do I need to try to add canonical to the login page?  If so, does anyone know how to do it using XML?

    | kdl0
    1

  • I used the crawl tool and it returned 404 errors for several pages that I no longer have published in WordPress. They must still be on the server somewhere? Do you know how to remove them? I think they are not files on the server, like HTML files would be, since WordPress uses a database. I figure that getting rid of the 404 errors will improve SEO; is this correct? Thanks, David

    | DJDavid
    0

  • I am getting a list of crawl errors in Moz because I use a 302 redirect when people add an item to the cart via quick view, e.g. http://copyfaxes.com/cart/quickadd?partno=4061 redirects them to the view-shopping-cart page. Is this wrong? Should it be a 301 redirect? There is no link juice to pass. Thanks

    | copyfaxes1
    0

  • I recently designed a new website to replace an old site. We managed to hold 90% of rankings and traffic by keeping the same URLs and content. Now that that's complete, we are updating the whole site. It is an ecommerce website. Some of the items we sell we are now getting from a new vendor. If I move these products from one vendor to another, will this affect SEO? Here is an example. I have a product called "Green Zipper Sweater". This product is anchored via manufacturer and category, and the URL and title tag are green-zipper-sweater. Say the sweater was made by "Nike Green Sweaters" and our new supplier is "Gap Yellow and Green Sweaters". If I change the manufacturer and now put it under "Gap Green and Yellow Sweaters", will this affect my ranking? We are continuing to stock products from the original supplier "Nike Green Sweaters", we have an aggressive SEO plan, and we are ranking very well for "Nike Green Sweaters". We also have good product rankings, so "Green Zipper Sweater" brings us a lot of traffic. I want to be sure I do not lose ranking for the product page "Green Zipper Sweater" or the brand "Nike Green Sweaters". Any advice would be appreciated.

    | robbieire
    0

  • My company website is built on WordPress. It receives very few crawl errors, but it does regularly receive a few (typically 1-2 per crawl) "429 : Received HTTP status 429" errors through Moz. Based on my research, my understanding is that my server is essentially telling Moz to cool it with the requests. That means it could be doing the same for search engines' bots and even visitors, right? This creates two questions, which I would greatly appreciate your help with: 1. Are "429 : Received HTTP status 429" errors harmful for my SEO? I imagine the answer is "yes", because Moz flags them as high-priority issues in my crawl report. 2. What can I do to eliminate these errors? Any insight you can offer is greatly appreciated! Thanks,
    Ryan

    | ryanjcormier
    0
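For background on what a 429 asks of a client: well-behaved crawlers are expected to slow down, honouring a `Retry-After` header when the server sends one and otherwise backing off exponentially. A sketch of that policy (the base delay and jitter range are arbitrary illustrative choices):

```python
import random

def backoff_delay(attempt: int, retry_after=None, base: float = 1.0) -> float:
    """Seconds a polite client should wait before retrying after a 429.

    Honour the server's Retry-After header (seconds) when present;
    otherwise use exponential backoff with a little jitter.
    """
    if retry_after is not None and str(retry_after).isdigit():
        return float(retry_after)
    return base * (2 ** attempt) + random.uniform(0, 0.5)

print(backoff_delay(0, retry_after="30"))  # 30.0
```

From the site-owner side, the fix is usually the opposite: raising the rate limit, or whitelisting known crawler user agents in the hosting/security layer, so they never hit the 429 in the first place.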

  • What is the relation between domain age and domain authority? Does an old registered domain help achieve higher domain authority or not? If so, I am still confused: http://www.green-lotus-trekking.com/ is a very old domain, but its authority is only 33?

    | agsln
    1

  • I got a crawl issue saying that 82% of site pages have missing title tags.
    All these pages are ashx files (4,400 pages).
    Would it be better to remove all these files from Google?

    | thlonius
    0

  • Hello. I have a client in Iran and I want to track Persian keywords, but the Moz keyword tracker doesn't support that country. Please advise me: how can I track keywords in Iran?

    | allyunit
    1

  • When setting up a campaign for keywords and selected search engines: I am targeting a UK market, so obviously Google.co.uk is my main target. However, when I set up the campaign and select Google as the search engine, it asks me whether I want United Kingdom or Great Britain. I selected United Kingdom and it has been returning 0 results for rankings, even though I can see them in the SERPs. Does anyone know exactly what the difference is between the UK and GB options? I assume I should have specified GB, but I'm not sure. Any help would be appreciated. Thanks

    | hanv
    0

  • Hi. Approximately how long does Google take to pass authority via a 301 from an old page to its new replacement page? And does Moz Page Authority reflect this in its score once Google has passed it? All the best,
    Dan

    | Dan-Lawrence
    3

  • Hi all, can anyone recommend a tool that will allow me to put in a list of about 200 domains that are then checked for a link back to a specific domain? I know I can do various link searches and use the Google site: command on a site-by-site basis, but it would be much quicker if there were a tool that could take the list of domains I am expecting a link on and find whether that link exists, and if so on what page, etc. Hope this makes sense; otherwise I have to spend a day doing it by hand, which is not fun! Thanks,
    Charles

    | MrFrisbee
    0
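Absent a dedicated tool, a script can loop over the 200 domains and check each home page for a link to your domain. A rough sketch; it fetches only one page per domain and only checks `href` attributes, so a real run would want to cover more pages per site (`yourdomain.co.uk` is a placeholder):

```python
import re
from urllib.request import Request, urlopen

def contains_link(html: str, target_domain: str) -> bool:
    """Pure check: does the HTML contain an href pointing at the target domain?"""
    pattern = r'href=["\'][^"\']*' + re.escape(target_domain)
    return re.search(pattern, html, re.IGNORECASE) is not None

def links_to(page_url: str, target_domain: str) -> bool:
    """Fetch one page and report whether it links to the target domain."""
    req = Request(page_url, headers={"User-Agent": "link-check/0.1"})
    try:
        html = urlopen(req, timeout=10).read().decode("utf-8", "replace")
    except OSError:
        return False
    return contains_link(html, target_domain)

# Usage (network):
# for d in domains:
#     print(d, links_to("http://" + d + "/", "yourdomain.co.uk"))
```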

  • Hello Mozzers! Say there is a website with 100 pages and a domain authority of 25. If the number of pages on this website increases to 10,000, can that decrease its domain authority or affect it in any way?

    | MozAddict
    0

  • G'day everyone, I need help understanding how special characters impact SEO, e.g. é, ë, ô in words. Does anyone have good insights or reference material regarding the treatment of special characters by Google's search engine? Specifically: how are page titles and meta descriptions with special characters indexed and crawled? What are the best practices for URLs (use of Unicode vs. HTML entity references, and when and where)? Are there any disadvantages to using special characters? Do special characters in URLs have any impact on SEO performance and the user's search experience? Thanks heaps, Amy

    | LabeliumUSA
    0
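On the URL side specifically, non-ASCII characters are conventionally percent-encoded as UTF-8 bytes, which Python's standard library can illustrate (a small sketch of the encoding mechanics, not a statement about how Google ranks such URLs):

```python
from urllib.parse import quote, unquote

# Non-ASCII characters in a URL path are percent-encoded as UTF-8 bytes:
# 'é' is the two bytes 0xC3 0xA9, so it becomes %C3%A9.
encoded = quote("café")
print(encoded)           # caf%C3%A9

# Decoding reverses the transformation, so both forms name the same resource.
print(unquote(encoded))  # café
```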

  • I have the credentials in the URL correct, but it continues to fail authentication. I will not post them, obviously, but is there a problem with the API currently? I tried creating new credentials. Also, I have used this before, so I am sure it is not a problem with the credentials. I somehow managed to get Chrome to show the data. Firefox will not, and the code I have written also returns authentication failed. This is a bug on your end. Please fix it ASAP.

    | ColumK
    0

  • Good afternoon Mozzers, One of our clients is a real estate agent, and on that site there is a search field that allows a person to search by filtered categories. Currently, the URL structure creates a new URL for each filter option, and in my Moz reports I get the warning that there is missing meta data. However, the page is the same; only the filter options are different, so I am at a loss as to how to properly tag our site to optimize those URLs. Can I rel=canonical the URLs, or use rel=alternate? I have been looking for a solution for a few days now and, like I said, I am at a loss as to how to properly resolve these warning messages, or whether I should even be concerned with the warning messages from Moz (obviously I should be concerned; they are warning messages for a reason). Thank you for your assistance in advance!

    | Highline_Ideas
    0
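One common pattern for filtered listing URLs is to point every filter variant at the base listing page with a canonical link in the `<head>` (a sketch; the paths and parameters here are hypothetical, and canonicalizing is only appropriate when the filtered pages are not meant to rank on their own):

```html
<!-- Served on /search?bedrooms=3&price=200000 and every other filter variant -->
<link rel="canonical" href="https://www.example.com/search" />
```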

  • I used 301 redirects to point several versions of the homepage to www.site.com. I was just rereading Moz's Beginner's Guide to SEO, and it uses that scenario as an example for rel=canonical, not 301 redirects. Which is better? My understanding is that 301s remove all doubt of getting links to the wrong version and diluting link equity.

    | kimmiedawn
    0
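For reference, the 301 approach for consolidating the non-www homepage onto the www version is typically done with mod_rewrite in .htaccess (a sketch, assuming Apache; example.com is a placeholder):

```apache
RewriteEngine On
# Redirect every request on the bare domain to the www host, preserving the path.
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ https://www.example.com/$1 [R=301,L]
```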

  • I am new to Moz (as a member), so I am not sure if Moz has a tool that I need. I don't want this post to be about self-promotion, so I will keep it short. Our business helps increase conversions and sales for online businesses. Our ideal prospects belong to some key categories of businesses like ecommerce, SaaS etc. However, I would like to know the estimated volume of traffic for a website before approaching them and introducing our service. So if there was a tool I could use to estimate the volume of visitors a specific website receives on average per day or month, it would be hugely beneficial. Obviously, these are prospective clients, so we do not have access to their systems or their analytics. I just want to get an estimate. So, for example, if I entered the domain abc.com into the system, I would hope it could tell me that abc.com gets an average of 900 unique visitors a day. I don't need too much detail like geographic locations etc., but it would be a bonus to have that additional information. I also don't mind paying for a tool that's quality, so it doesn't have to be free.

    | RyanShahed
    0

  • We are launching a new site within the next 48 hours. We have already purchased the 30-day trial, and we will continue to use this tool once the new site is launched. Just looking for some tips and/or best practices so we can compare the old data vs. the new data moving forward. Thank you in advance for your response(s). PB3

    | Issuer_Direct
    0

  • I just pulled a search term report for all of 2013 from my PPC account. What I got was 673,000 rows of terms that garnered at least 1 impression in 2013. This is exactly what I was looking for. My issue is that the vast majority of terms are geo-modified to include the city, the city and state, or the zip code. I am trying to remove the geographic information to get to a list of root words people are interested in, based on their search query patterns. Does anyone know how to remove all city, state and zip codes quickly without having to do a find-and-replace for each geo-modifier in Excel? For example, if I could get a list of all city and state combinations in the US and a list of all zip codes, put that list on a separate tab, and then have a macro find and remove from the original tab any instances of anything from the second tab, that would probably do the trick. Then I could remove duplicates and have my list of root words.

    | dsinger
    0
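The tab-and-macro idea described above can be sketched in Python instead of an Excel macro (the geo list here is a tiny placeholder; a real run would load a complete city/state file into GEO_TERMS):

```python
import re

# Placeholder geo terms; a real run would load a full US city/state list.
GEO_TERMS = {"new york", "ny", "los angeles", "ca", "chicago", "il"}

# Match 5-digit zip codes, with or without the ZIP+4 suffix.
ZIP_RE = re.compile(r"\b\d{5}(?:-\d{4})?\b")

# Build one alternation, longest phrases first so "new york" wins over "ny".
GEO_RE = re.compile(
    r"\b(?:" + "|".join(
        re.escape(g) for g in sorted(GEO_TERMS, key=len, reverse=True)
    ) + r")\b"
)

def strip_geo(term):
    """Remove zip codes and known city/state tokens from a search term."""
    term = ZIP_RE.sub(" ", term.lower())
    term = GEO_RE.sub(" ", term)
    return " ".join(term.split())  # collapse leftover whitespace

def root_terms(terms):
    """Deduplicated geo-stripped terms, first-seen order preserved."""
    seen, roots = set(), []
    for term in terms:
        root = strip_geo(term)
        if root and root not in seen:
            seen.add(root)
            roots.append(root)
    return roots
```

The word boundaries in both regexes keep short state codes like "ca" from eating the middle of words like "car repair".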

  • What exactly does "linking root domains" mean? And how does it affect your ranking for certain keywords? Thanks

    | Caseman
    57

  • We are getting a "Multiple meta descriptions found!" error when testing meta in the Chrome MozBar extension. We are using the WordPress All in One SEO plugin. We're thinking there may be a conflict between the default meta description, which is blank and needs removing, and the meta generated by All in One SEO. Curious if anyone has come across this; any info on eradicating the issue would be greatly appreciated. Most likely a newbie question on my part. Thanks!

    | departikan
    0
