    The Moz Q&A Forum


    Moz Q&A is closed.

After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content - many posts will still be viewable - we have locked both new posts and new replies.

How to prevent 404s from a job board?

    Intermediate & Advanced SEO
jlane9 last edited by

      I have a new client with a job listing board on their site.

      I am getting a bunch of 404 errors as they delete the filled jobs.

      Question:

Should we leave the job pages up for extra content and entry points to the site, and add a notice like "This job has been filled - please search our other job listings"?

Or should I noindex/nofollow these pages?

Or any other suggestions - it is an employment agency site.

Overall, what would be the best practice going forward? We are looking at roughly 20 jobs/pages per month.
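One way to combine the two options above into a single policy (a sketch only, not established Moz guidance - the 30-day grace period and the choice of 410 are assumptions) is to keep a filled job live with a notice for a while, then serve 410 Gone, which tells search engines the URL is permanently removed and tends to drop it from the index faster than a 404:

```python
from datetime import date, timedelta

def job_page_status(filled_on, today, grace_days=30):
    """Return the HTTP status a job URL should serve.

    Policy sketch (an assumption, not Moz guidance): keep the page live
    with a "position filled" notice for a grace period, so existing links
    and visitors still land somewhere useful, then serve 410 Gone so
    search engines drop the URL instead of retrying a 404.
    """
    if filled_on is None:
        return 200  # job still open: serve normally
    if today - filled_on <= timedelta(days=grace_days):
        return 200  # grace period: show "filled" notice, link to live jobs
    return 410      # permanently gone: a stronger removal signal than 404
```

The view handler would then render the normal listing, the "filled" notice, or the 410 page depending on the returned status.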

jlane9 @bradkrussell last edited by

        Thank you Brad,

There are a few options, and before I choose I wanted to get some feedback from SEO pros like yourself - you never know when I may learn something new or find a better solution to the things I do.

I definitely do not want all those 404s on the site.

bradkrussell last edited by

From my understanding, it's pretty rare to keep old jobs on an employment agency site.

Can't you just 301 redirect each URL to a similar job, job category, or other relevant section of the site? Better than getting a heap of 404s!
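For the redirect approach, a minimal .htaccess sketch (the /jobs/ paths here are hypothetical placeholders, not the poster's actual URLs):

```apache
# Hypothetical paths, for illustration only.
# Redirect one filled job to its category page:
Redirect 301 /jobs/senior-accountant-1234 /jobs/accounting/

# Or send any no-longer-existing job URL in a category to the category page:
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^jobs/accounting/.+$ /jobs/accounting/ [R=301,L]
```

The `.+` in the pattern keeps the category page itself from matching, which avoids a redirect loop.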




          Related Questions

          • rickyporco

After hack and remediation, thousands of URLs still appearing as 'Valid' in Google Search Console. How to remedy?

I'm working on a site that was hacked in March 2019; in the process, nearly 900,000 spam links were generated and indexed. After remediation of the hack in April 2019, the spammy URLs began dropping out of the index until last week, when Search Console showed around 8,000 as "Indexed, not submitted in sitemap" but listed as "Valid" in the coverage report. Many of them are still hack-related URLs that are listed as being indexed in March 2019, despite the fact that clicking on them leads to a 404.

As of this Saturday, the number jumped up to 18,000, but I have no way of finding out from the Search Console reports why the jump happened or which new URLs were added; the only sort mechanism is last crawled, and they don't show up there. How long can I expect it to take for these remaining URLs to also be removed from the index? Is there any way to expedite the process? I've submitted a 'new' sitemap several times, which (so far) has not helped. Is there any way to see inside the new GSC view why/how the number of valid URLs in the index doubled over one weekend?

Intermediate & Advanced SEO | rickyporco
          • rastellop

Does Google ignore content styled with 'display:none'?

Do you know if an H1 within a div that has a 'display: none' style applied will still be crawled and evaluated by Google? We have that situation on this page on line 136: view-source:https://www.junk-king.com/services/items-we-take/foreclosure-cleanouts Of course we also have an H1 up at the top of the page and are concerned that the second one will cause interference with our SEO efforts. I've seen conflicting and inconclusive information online - not sure. Thanks for any help.

Intermediate & Advanced SEO | rastellop
          • McTaggart

            Why do people put xml sitemaps in subfolders? Why not just the root? What's the best solution?

Just read this: "The location of a Sitemap file determines the set of URLs that can be included in that Sitemap. A Sitemap file located at http://example.com/catalog/sitemap.xml can include any URLs starting with http://example.com/catalog/ but can not include URLs starting with http://example.com/images/." here: http://www.sitemaps.org/protocol.html#location

Yet surely it's better to put the sitemaps at the root so you have:

(a) http://example.com/sitemap.xml
http://example.com/sitemap-chocolatecakes.xml
http://example.com/sitemap-spongecakes.xml
and so on...

OR this kind of approach:

(b) http://example.com/sitemap.xml
http://example.com/sitemap/chocolatecakes.xml and
http://example.com/sitemap/spongecakes.xml

I would tend towards (a) rather than (b) - which is the best option? Also, can I keep the structure the same for sitemaps that are subcategories of other sitemaps? For example, for a subcategory of http://example.com/sitemap-chocolatecakes.xml I might create http://example.com/sitemap-chocolatecakes-cherryicing.xml - or should I add a subfolder to turn it into http://example.com/sitemap-chocolatecakes/cherryicing.xml?

Look forward to reading your comments - Luke
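For reference, option (a) is usually wired together with a sitemap index file at the root that lists the child sitemaps; a sketch using the filenames from the question:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical index served at http://example.com/sitemap.xml -->
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap>
    <loc>http://example.com/sitemap-chocolatecakes.xml</loc>
  </sitemap>
  <sitemap>
    <loc>http://example.com/sitemap-spongecakes.xml</loc>
  </sitemap>
</sitemapindex>
```

Because every child file sits at the root, each one can include URLs from anywhere on http://example.com/, which sidesteps the location restriction quoted above.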

Intermediate & Advanced SEO | McTaggart
          • andyheath

Will disallowing URLs in the robots.txt file stop those URLs being indexed by Google?

I found a lot of duplicate title tags showing in Google Webmaster Tools. When I visited the URLs that these duplicates belonged to, I found that they were just images from a gallery that we didn't particularly want Google to index. There is no benefit to the end user in these image pages being indexed in Google. Our developer has told us that these URLs are created by a module and are not "real" pages in the CMS. They would like to add the following to our robots.txt file: Disallow: /catalog/product/gallery/ QUESTION: If these pages are already indexed by Google, will this adjustment to the robots.txt file help to remove the pages from the index? We don't want these pages to be found.
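One caveat worth noting here: a robots.txt Disallow stops future crawling, but it does not by itself remove URLs that are already indexed - Google simply can no longer re-fetch them. A sketch of the rule in question, with that caveat spelled out:

```
# robots.txt sketch for the gallery URLs described above.
# Disallow blocks crawling, but already-indexed pages can remain listed
# (shown without a snippet). Serving a noindex directive (meta robots tag
# or X-Robots-Tag header) while the URLs are still crawlable is what
# actually removes them; the Disallow can then be added afterwards.
User-agent: *
Disallow: /catalog/product/gallery/
```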

Intermediate & Advanced SEO | andyheath
          • MyPetWarehouse

Duplicate content through 'gclid'

Hello, We've had the known problem of duplicate content through the gclid parameter caused by Google AdWords. As per Google's recommendation, we added the canonical tag to every page on our site so when the bot came to each page it would go "Ah-ha, this is the original page". We also added the parameter to the URL parameters in Google Webmaster Tools. However, it now seems as though a canonical is automatically being given to these newly created gclid pages; see below: https://www.google.com.au/search?espv=2&q=site%3Awww.mypetwarehouse.com.au+inurl%3Agclid&oq=site%3A&gs_l=serp.3.0.35i39l2j0i67l4j0i10j0i67j0j0i131.58677.61871.0.63823.11.8.3.0.0.0.208.930.0j3j2.5.0....0...1c.1.64.serp..8.3.419.nUJod6dYZmI Therefore these new pages are now being indexed, causing duplicate content. Does anyone have any idea about what to do in this situation? Thanks, Stephen.
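For reference, the self-referencing canonical described above looks like this (the path and gclid value are placeholders, not the poster's actual URLs):

```html
<!-- On https://www.mypetwarehouse.com.au/some-page, including when it is
     requested as /some-page?gclid=abc123 (placeholder values), the tag
     should point at the clean URL without the parameter: -->
<link rel="canonical" href="https://www.mypetwarehouse.com.au/some-page" />
```

If the parameterised versions are being indexed despite this, it is worth checking that the tag on those versions really does strip the gclid rather than echoing the requested URL back.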

Intermediate & Advanced SEO | MyPetWarehouse
          • mark_baird

Do Q&As work for SEO?

If I create a good community in my particular field on my SEO site and have a quality Q&A section like this (ripping off Moz's idea here, sorry - I hope that's OK), will the long-term returns be worth the effort of creating and managing it? Is the user-created content of as much use as I think it will be?

Intermediate & Advanced SEO | mark_baird
          • esiow2013

May I know the meaning of these parameters in .htaccess?

# Begin HackRepair.com Blacklist
RewriteEngine on
# Abuse Agent Blocking
RewriteCond %{HTTP_USER_AGENT} ^BlackWidow [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^Bolt\ 0 [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^Bot\ mailto:craftbot@yahoo.com [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} CazoodleBot [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^ChinaClaw [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^Custo [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^Default\ Browser\ 0 [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^DIIbot [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^DISCo [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} discobot [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^Download\ Demon [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^eCatch [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ecxi [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^EirGrabber [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^EmailCollector [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^EmailSiphon [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^EmailWolf [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^Express\ WebPictures [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^ExtractorPro [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^EyeNetIE [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^FlashGet [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^GetRight [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^GetWeb! [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^Go!Zilla [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^Go-Ahead-Got-It [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^GrabNet [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^Grafula [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} GT::WWW [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} heritrix [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^HMView [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} HTTP::Lite [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} HTTrack [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ia_archiver [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} IDBot [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} id-search [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} id-search.org [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^Image\ Stripper [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^Image\ Sucker [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} Indy\ Library [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^InterGET [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^Internet\ Ninja [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^InternetSeer.com [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} IRLbot [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ISC\ Systems\ iRc\ Search\ 2.1 [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^Java [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^JetCar [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^JOC\ Web\ Spider [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^larbin [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^LeechFTP [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} libwww [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} libwww-perl [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^Link [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} LinksManager.com_bot [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} linkwalker [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} lwp-trivial [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^Mass\ Downloader [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^Maxthon$ [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} MFC_Tear_Sample [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^microsoft.url [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} Microsoft\ URL\ Control [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^MIDown\ tool [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^Mister\ PiX [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} Missigua\ Locator [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^Mozilla.*Indy [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^Mozilla.NEWT [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^MSFrontPage [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^Navroad [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^NearSite [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^NetAnts [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^NetSpider [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^Net\ Vampire [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^NetZIP [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^Nutch [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^Octopus [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^Offline\ Explorer [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^Offline\ Navigator [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^PageGrabber [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} panscient.com [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^Papa\ Foto [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^pavuk [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} PECL::HTTP [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^PeoplePal [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^pcBrowser [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} PHPCrawl [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} PleaseCrawl [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^psbot [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^RealDownload [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^ReGet [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^Rippers\ 0 [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} SBIder [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^SeaMonkey$ [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^sitecheck.internetseer.com [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^SiteSnagger [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^SmartDownload [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} Snoopy [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} Steeler [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^SuperBot [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^SuperHTTP [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^Surfbot [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^tAkeOut [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^Teleport\ Pro [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^Toata\ dragostea\ mea\ pentru\ diavola [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} URI::Fetch [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} urllib [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} User-Agent [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^VoidEYE [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^Web\ Image\ Collector [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^Web\ Sucker [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} Web\ Sucker [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} webalta [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^WebAuto [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^[Ww]eb[Bb]andit [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} WebCollage [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^WebCopier [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^WebFetch [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^WebGo\ IS [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^WebLeacher [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^WebReaper [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^WebSauger [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^Website\ eXtractor [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^Website\ Quester [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^WebStripper [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^WebWhacker [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^WebZIP [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} Wells\ Search\ II [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} WEP\ Search [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^Wget [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^Widow [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^WWW-Mechanize [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^WWWOFFLE [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^Xaldon\ WebSpider [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} zermelo [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^Zeus [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ^(.)Zeus.Webster [NC,OR]
            RewriteCond %{HTTP_USER_AGENT} ZyBorg [NC]
RewriteRule ^. - [F,L]
# Abuse bot blocking rule end
# End HackRepair.com Blacklist

Intermediate & Advanced SEO | esiow2013
          • rayvensoft

Do links to PDFs on my site pass "link juice"?

Hi, I have recently started a project on one of my sites, working with a branch of the U.S. government, where I will be hosting and publishing some of their PDF documents for free for people to use. The great SEO side of this is that they link to my site. The thing is, they are linking directly to the PDF files themselves, not the page with the link to the PDF files. So my question is: does that give me any SEO benefit? While the PDF is hosted on my site, there are no links in it that would allow a spider to start from the PDF and crawl the rest of my site. So do I get any benefit from these great links? If not, does anybody have any suggestions on how I could get credit for them? Keep in mind that editing the PDFs is not allowed by the government. Thanks.

Intermediate & Advanced SEO | rayvensoft

          © 2021 - 2025 SEOMoz, Inc., a Ziff Davis company. All rights reserved. Moz is a registered trademark of SEOMoz, Inc.