Should I index or noindex a contact page?
-
I'm doing SEO for a website and wondering whether I should noindex its contact page. Would noindexing the contact page help or hurt the site's SEO?
-
Actually, no-indexing your contact page is usually a very bad idea, especially if you are depending on any aspect of local SEO for the site in question.
Your contact page is the ideal spot to include all your geo-location information, your properly formatted NAP (Name, Address, Phone) matching your local citations, and a Google Map of your location.
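As a sketch (the business details below are invented placeholders), that NAP information can also be marked up on the contact page with schema.org LocalBusiness JSON-LD, so search engines can parse the name, address, and phone unambiguously:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "LocalBusiness",
  "name": "Example Plumbing Co.",
  "telephone": "+1-555-555-0100",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "123 Main St",
    "addressLocality": "Springfield",
    "addressRegion": "IL",
    "postalCode": "62701"
  }
}
</script>
```

The markup should mirror the NAP exactly as it appears in your local citations; if the page were noindexed, Google would never see it.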
As the others have said, even if you're not depending on local SEO, there's no value in noindexing this single page.
Paul
-
Thanks for the help.
-
Assuming you are not cannibalizing any of your other pages with keywords or title tags and there is no duplicate content on the contact page, I would index it.
-
If you noindex a contact page, people can't find it through Google; they have to navigate to it from within your website instead, which isn't very user-friendly.
The noindex directive is really for pages you don't want people to find through Google, like thank-you pages.
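For reference, noindex is just a robots meta tag in the page's head, so it's easy to check which of your pages carry it:

```html
<!-- Keep a page out of Google's index (e.g. a thank-you page): -->
<meta name="robots" content="noindex">

<!-- A contact page normally needs no robots tag at all;
     indexing is the default behavior. -->
```

The tag must be on a page Google is allowed to crawl; if robots.txt blocks the URL, Google never sees the noindex.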
Related Questions
-
Gradual Drop in GWT Indexed Pages for large website
Hey all, I am working on SEO for a massive sports website. The information provided will be limited but I will give you as much context as possible. I just started digging into it and have found several on-page SEO issues which I will fix when I get to the meat of it, but this seems like something else could be going on. I have attached an image below. It doesn't seem like it's a GWT bug as reported at one point either, as it's been gradually dropping over the past year. Also, there is about a 20% drop in traffic in Google Analytics over this time as well.
This website has hundreds of thousands of pages of player profiles, sports team information and more, all marked up with JSON-LD. Some of the on-page stuff that needs to be fixed are the h1 and h2, title tags and meta description. Also, some of the descriptions are pulled from Wikipedia and linked to a "view more" area. Anchor text has "sign up" language as well.
Not looking for a magic bullet but to be pointed in the right direction. Where should I start checking off to ensure I cover my bases besides the on-page stuff above? There aren't any serious errors and I don't see any manual penalties. There are 4,300 404's but I have seen plenty of sites with that many 404's, all of which still got traffic. It doesn't look like a sitemap was submitted to GWT, and when I try submitting sitemap.xml, I get a 504 error (network unreachable). Thanks for reading. I am just getting started on this project but would like to spend as much time sharpening the axe before getting to work.
Technical SEO | ArashG
-
Why Are Some Pages On A New Domain Not Being Indexed?
Background: A company I am working with recently consolidated content from several existing domains into one new domain. Each of the old domains focused on a vertical and each had a number of product pages and a number of blog pages; these are now in directories on the new domain. For example, what was www.verticaldomainone.com/products/productname is now www.newdomain.com/verticalone/products/productname, and the blog posts have moved from www.verticaldomaintwo.com/blog/blogpost to www.newdomain.com/verticaltwo/blog/blogpost. Many of those pages used to rank in the SERPs but they now do not.
Investigation so far: Looking at Search Console's crawl stats, most of the product pages and blog posts do not appear to be indexed. This is confirmed by using the site: search modifier, which only returns a couple of products and a couple of blog posts in each vertical. Those pages are not the same as the pages with backlinks pointing directly at them. I've investigated the obvious points without success so far:
There are a couple of issues with 301s that I am working with them to rectify, but I have checked all pages on the old site and most redirects are in place and working.
There is currently no HTML or XML sitemap for the new site (this will be put in place soon), but I don't think this is an issue since a few products are being indexed and appearing in SERPs.
Search Console is returning no crawl errors, manual penalties, or anything else adverse.
Every product page is linked to from the /course page for the relevant vertical through a followed link.
None of the pages have a noindex tag on them and the robots.txt allows all crawlers to access all pages.
One thing to note is that the site is built using react.js, so all content is within app.js. However, this does not appear to affect pages higher up the navigation tree, like the /vertical/products pages or the home page.
So the question is: "Why might product and blog pages not be indexed on the new domain when they were previously and what can I do about it?"
Technical SEO | BenjaminMorel
-
3,511 Pages Indexed and 3,331 Pages Blocked by Robots
Morning, So I checked our site's index status on WMT, and I'm being told that Google is indexing 3,511 pages and the robots are blocking 3,331. This seems slightly odd as we're only disallowing 24 pages in the robots.txt file. In light of this, I have the following queries: Do these figures mean that Google is indexing 3,511 pages and blocking 3,331 other pages? Or does it mean that it's blocking 3,331 pages of the 3,511 indexed? As there are only 24 URLs being disallowed in robots.txt, why are 3,331 pages being blocked? Will these be variations of the URLs we've submitted? Currently, we don't have a sitemap. I know, I know, it's pretty unforgivable, but the old one didn't really work and the developers are working on the new one. Once submitted, will this help? I think I know the answer to this, but is there any way to ascertain which pages are being blocked? Thanks in advance! Lewis
Technical SEO | PeaSoupDigital
-
Should I noindex my blog's tag, category, and author pages
Hi there, Is it a good idea to noindex tag, category, and author pages on blogs? The tag pages sometimes have duplicate content, and the category and author pages aren't really optimized for any search term. Just curious what others think. Thanks!
Technical SEO | Rignite
-
What is the best practice to re-index the de-indexed pages due to a bad migration
Dear Mozers, We have a Drupal site with more than 200K indexed URLs. Six months ago, a bad website migration happened without proper SEO guidelines. All the high-authority URLs got rewritten by the client; most of them have returned 404 or 302 for the last 6 months. Due to this, site traffic dropped more than 80%. I found today that around 40K old URLs with good PR and authority are de-indexed from Google (most of them are 404 and 302). I need to pass all the value from old URLs to new URLs. Example URL structure:
Before migration (old): http://www.domain.com/2536987 (Page Authority: 65, HTTP status: 404, de-indexed from Google)
After migration (current): http://www.domain.com/new-indexed-and-live-url-version
Does creating mass 301 redirects help here without re-indexing the old URLs? Please share your thoughts. Riyas
Technical SEO | riyas_
-
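For what it's worth, a mass page-by-page 301 in Apache is just one mapping per old URL (a sketch using the example URLs from the question):

```apache
# .htaccess on www.domain.com - one rule per old URL
Redirect 301 /2536987 http://www.domain.com/new-indexed-and-live-url-version
# ...repeat for each old URL, or generate these lines from a URL mapping file
```

Unlike the current 404s and 302s, a 301 tells Google the move is permanent and passes the old URL's equity to the new one; the old URLs don't need to be re-indexed first, just re-crawled.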
Pages not being indexed
Hi Moz community! We have a client for whom some of their pages are not ranking at all, although they do seem to be indexed by Google. They are in the real estate sector and this is an example of one: http://www.myhome.ie/residential/brochure/102-iveagh-gardens-crumlin-dublin-12/2289087 In the example above if you search for "102 iveagh gardens crumlin" on Google then they do not rank for that exact URL above - it's a similar one. And this page has been live for quite some time. Anyone got any thoughts on what might be at play here? Kind regards. Gavin
Technical SEO | IrishTimes
-
Duplicate page errors from pages that don't even exist
Hi, I am having this issue within SEOmoz's Crawl Diagnosis report. There are a lot of crawl errors happening with pages that don't even exist. My website has around 40-50 pages, but the SEO report shows that 375 pages have been crawled. My guess is that the errors have something to do with my recent htaccess configuration. I recently configured my htaccess to add a trailing slash at the end of URLs. There is no internal linking issue such as an infinite loop when navigating the website, but the looping is reported in SEOmoz's report. Here is an example of a reported link: http://www.mywebsite.com/Door/Doors/GlassNow-Services/GlassNow-Services/Glass-Compliance-Audit/GlassNow-Services/GlassNow-Services/Glass-Compliance-Audit/ By the way, there is no such crawl error in my Google Webmaster Tools. Any help appreciated.
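For comparison, a common non-looping trailing-slash rule looks like this (a sketch to check your own .htaccess against, not a drop-in fix; adjust to your setup):

```apache
RewriteEngine On
# Only rewrite requests that are not existing files
RewriteCond %{REQUEST_FILENAME} !-f
# ...and that don't already end in a slash (prevents repeated appending)
RewriteRule ^(.*[^/])$ /$1/ [L,R=301]
```

If the rule matches URLs that already end in a slash, or rewrites internally without `R=301` and `L`, each crawl step can keep appending path segments, which would produce exactly the kind of stacked URL shown above.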
Technical SEO | mmoezzi
-
How to verify a page-by-page level 301 redirect was done correctly?
Hello, I told some tech guys to do a page-by-page relevant 301 redirect (as talked about in Matt Cutts' video https://www.youtube.com/watch?v=r1lVPrYoBkA) when a company wanted to move to a new domain while their site was getting redesigned. I found out they did a 302 redirect by accident and had to fix that, so now I don't trust that they did the page-by-page relevant redirect. I have a feeling they just redirected all of the pages on the old domain to the homepage of the new domain. How could I confirm this suspicion? I run the old domain through Screaming Frog and it only shows 1 URL - the homepage. Does that mean they took all of the pages on the old domain offline? Thanks!
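One way to test the suspicion (a minimal sketch; the domain names are placeholders): collect the redirect target of each old URL, then check whether every target is just the new site's homepage rather than a page-level equivalent:

```python
from urllib.parse import urlparse

def is_blanket_redirect(location: str, new_homepage: str) -> bool:
    """Return True if a redirect target is just the new site's homepage
    rather than a page-level equivalent of the old URL."""
    target = urlparse(location)
    home = urlparse(new_homepage)
    # A redirect landing on the bare homepage path ("/" or empty) of the
    # new domain is a blanket redirect, not a page-by-page one.
    return (target.netloc == home.netloc
            and target.path.rstrip("/") in ("", home.path.rstrip("/")))

# An old product page redirected to the new homepage -> blanket
print(is_blanket_redirect("https://newdomain.com/",
                          "https://newdomain.com/"))          # True

# The same page redirected to a page-level equivalent -> page-by-page
print(is_blanket_redirect("https://newdomain.com/products/widget",
                          "https://newdomain.com/"))          # False
```

You can gather each old URL's target with `curl -s -o /dev/null -w '%{redirect_url}\n' <old-url>` (or export Screaming Frog's redirect report in list mode, fed the old sitemap); if this check returns True for nearly every page, the redirect was blanket, not page-by-page.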
Technical SEO | EvolveCreative