Blocking in Robots.txt and the re-indexing - DA effects?
-
I have two high-DA sites: one targets the US (.com) and one the UK (.co.uk). The .com ranks well but is commercially dormant; the .co.uk is the commercial focus and gets great traffic.
The issue is that the .com ranks for our brand in the UK, where I want the .co.uk to rank instead.
I can't 301 the .com, as it will be used again in the near future. Instead, I want to block the .com in robots.txt, with a view to unblocking it again when I need it.
I don't think DA would be affected, since the links remain and the site stays live (just not indexed), so it should be fine when I unblock it. HOWEVER, my concern is that signals Google records, such as organic CTR data, won't contribute to the site's value while it is blocked.
Has anyone ever blocked and unblocked a site, and what were the effects?
All answers gratefully received - cheers, GB
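For reference, the full-site block described above is a two-line robots.txt at the domain root (a sketch):

```
User-agent: *
Disallow: /
```

Worth noting that robots.txt blocks crawling, not indexing: URLs already in the index can linger for a while, often as bare, snippet-less results, until Google drops them.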
-
Blocking in robots.txt doesn't affect your website's DA. In fact, you can use it to help your site's rankings:
* Deal with duplicate content - hide a specific page from search engines when it would otherwise cause duplicate-content issues.
* Hide theme templates and other pages of your site that you don't want listed in search results.
-
@Bush_JSM It depends on what you allow - when you update, simply resubmit the sitemap in Google Search Console.
-
I don't think it affects your website's DA or PA.
Robots.txt lets you block crawling of the posts and pages that you don't want Google to visit.
It only controls crawler access to your own site's URLs; it has no effect on the external links pointing to your site.
So in my view, it doesn't affect your DA or PA.
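As an aside, robots.txt rules can be sanity-checked locally before deploying them, using Python's standard-library `urllib.robotparser` (a minimal sketch - the rules and URLs below are illustrative, not from either site):

```python
from urllib import robotparser

# Illustrative rules: block one folder for all crawlers.
rules = """
User-agent: *
Disallow: /print-versions/
""".splitlines()

rp = robotparser.RobotFileParser()
rp.parse(rules)

# URLs under the disallowed folder are blocked; everything else is crawlable.
print(rp.can_fetch("*", "https://example.com/print-versions/page.html"))  # False
print(rp.can_fetch("*", "https://example.com/blog/post.html"))            # True
```

Swapping `"*"` for a specific user-agent string lets you test rules aimed at a single crawler.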
Related Questions
-
What Should We Do to Fix Crawled but Not Indexed Pages for Multi-location Service Pages?
Hey guys! I work as a content creator for Zavza Seal, a contractor out of New York, and we're targeting 36+ cities in the Brooklyn and Queens areas with several home-improvement services. We got about 340 pages into our multi-location strategy, targeting each of our cities with each service we offer, when we noticed that 200+ of our pages were "Crawled but not indexed" in Google Search Console.
Here's what I think we may have done wrong. Let me know what you think:
* We used the same page template for all pages. (We changed the content, sections, formatting, targeted keywords, and the entire page strategy for areas with unique problems, trying to keep the user experience as unique as possible to avoid duplicate content or looking like we didn't care about our visitors.)
* We used the same featured image for all pages. (I know this is bad and wouldn't have done it myself, but hey, I'm not the publisher.)
* We didn't use rel canonicals to tell search engines that these pages were made specifically for the areas.
* We didn't use alt tags until about halfway through.
* A lot of the URLs don't use the target keyword exactly.
* The NAP info and Google Maps embed are in the footer, so we didn't use them on the pages.
* We didn't use any content about the history of the city or anything like that. (On some pages we did use content about historic buildings, low water tables, flood-prone areas, etc., if the area was known for that.)
We were thinking of redoing the pages from scratch and building a unique experience around each city, with testimonials, case studies, and content about problems that are common for property owners in the area. But I think they may be fixable with a rel canonical, city-specific content added, and unique featured images on each page. What do you think is causing the problem? What would be the easiest way to fix it?
I knew the pages had to be unique, so I switched up the page strategy every 5-10 pages out of fear that duplicate content would start happening, because you can only say so much about, for example, "basement crack repair". Please let me know your thoughts. Here is one of the pages that is indexed, as an example: https://zavzaseal.com/cp-v1/premier-spray-foam-insulation-contractors-in-jamaica-ny/ Here is one like it that is crawled but not indexed: https://zavzaseal.com/cp-v1/premier-spray-foam-insulation-contractors-in-jamaica-ny/ I appreciate your time and concern. Have a great weekend!
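On the rel canonical point above: a self-referencing canonical on each city page is a one-line tag in the page's head. A sketch using the Jamaica, NY URL from the question:

```
<link rel="canonical" href="https://zavzaseal.com/cp-v1/premier-spray-foam-insulation-contractors-in-jamaica-ny/" />
```

Each page would point at its own URL; pointing them all at one page would instead ask Google to index only that one page.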
Local SEO | everysecond
-
Google Index Issue
Two months ago, I registered a domain named www.nextheadphone.com, planning to learn SEO and create an affiliate blog. My website has three types of content:
* Informative articles
* Headphone review articles
* Product comparison review articles
The problem is that Google does not index my informative articles, and I don't know why. For example:
https://www.nextheadphone.com/benefits-of-noise-cancelling-headphones/
https://www.nextheadphone.com/noise-cancelling-headphones-protect-hearing/
Is there anyone who can take a look and find out why Google is not indexing my articles? I will be waiting for your reply.
Content Development | NextHeadphone
-
Unsolved: Moz crawler not crawling my site
Hi all, I'm facing an issue where the Moz crawler is unable to crawl my site. The following error keeps showing: "Our crawler was banned by a page on your site, either through your robots.txt, the X-Robots-Tag HTTP header, or the meta robots tag." This is my robots.txt file: https://www.wearefutureheads.com/robots.txt - I'm not sure what else I'm missing. Can anyone help?
Product Support | teikh
-
Unsolved: What would the exact text be for robots.txt to stop Moz crawling a subdomain?
I need Moz to stop crawling a subdomain of my site, and am just checking what the exact text in the file should be to do this. I assume it would be:
User-agent: Moz
Disallow: /
But I'm just checking so I can tell the agency who will apply it, to avoid paying for their time with incorrect text! Many thanks.
Getting Started | Simon-Plan
-
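One caveat on the text quoted above: Moz's crawler identifies itself as rogerbot rather than "Moz", so the subdomain's robots.txt would more likely need to be:

```
User-agent: rogerbot
Disallow: /
```

Worth confirming the user-agent string against Moz's current help docs before the agency applies it.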
How to know which pages are indexed by Google?
So apparently we have some sites that are just duplicates of our main site, aimed at different markets/cities. They have completely different URLs but the same content as our main site, with the market/city changed. How do I know for sure which ones are indexed? I enter the URL into Google and it's not there, even if I put quotes around it. Is there another way to query Google for my site? Is there a website that will tell you which ones are indexed? This is probably a dumb question.
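A few query patterns that are usually worth trying before concluding a page isn't indexed (example.com stands in for the real domain):

```
site:example.com
site:example.com/some-page/
"a distinctive sentence copied verbatim from the page"
```

site: results can be incomplete, so for a site you own the definitive check is the URL Inspection tool in Google Search Console.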
Technical SEO | greenhornet77
-
Problem with redirection
Please help me solve a problem with redirection. I redid the site and am moving it to a new domain page-by-page, as recommended, using a WordPress redirect plugin. I did everything as written 10 days ago but don't see the redirection happening. For example: old page http://aurora17.com/?page_id=2485 - new page http://njcruise.org/alaska-cruise-tour/. Where is the problem? How do I solve it?
Technical SEO | NadiaFL
-
Webmaster woes - should I re-direct or re-structure?
Hey guys, I'll get straight to the point: a small (growing) website I'm working on has a number of links pointing to it from totally irrelevant sites (66, to be precise). These were built by an SEO company prior to me working on the site, and led to an over-optimisation penalty for one keyword. That number doesn't sound large, but proportionally (relative to all other links) it is. It didn't use to be, but a lot of the links coming in have now 'died', and the domains they came from are now just parked.
Anyway, I have managed to contact pretty much all the webmasters, and 27 of these links have been removed. Unfortunately - as I'm sure many people know all too well - a good handful of the contacted webmasters haven't replied, and the bad links still remain on their websites (either in-content or on links pages).
I have decided to 'refresh' the website with some new (and better) content, providing much more information and a valuable resource. My question is: what should I do? Should I just replace the content on the existing pages (slightly altering the URL structure to match the topic better) and 301 the old URLs to the new ones? Or should I delete the pages and create new ones, thus making sure this particular section of the site isn't affected by any bad inbound links?
I'm more inclined to opt for the latter option and 'start fresh' with the pages, so I know I've got total control over them, but I wanted to get the opinion of the community before making a decision. Thanks in advance for your responses! Nick
Technical SEO | Danapollo
-
Robots.txt versus sitemap
Hi everyone, let's say we have a robots.txt that disallows specific folders on our website, but a sitemap submitted in Google Webmaster Tools lists content in those folders. Which wins? Will the sitemap content get indexed even if it's blocked by robots.txt? I know content that is blocked by robots.txt can still get indexed and display as a bare URL if Google discovers it via a link, so I'm wondering if that would happen in this scenario too. Thanks!
Technical SEO | anthematic
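For what it's worth, robots.txt can also declare the sitemap via a Sitemap directive, but that doesn't resolve the conflict described above: disallowed folders simply won't be crawled, sitemap or not, and their URLs can still be indexed bare if linked from elsewhere. A sketch (the domain and folder are illustrative):

```
User-agent: *
Disallow: /private/

Sitemap: https://example.com/sitemap.xml
```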