Un-Indexing a Page without robots.txt or access to HEAD
-
I am in a situation where a page was pushed live (it went live for an hour and then was taken down) before it was supposed to. Normally I would use robots.txt or the meta robots tag in the `<head>`, but I do not have access to either, and putting in a request will not suffice as it is against protocol with the CMS. So basically I am left with just the `<body>`, and I cannot seem to find a nice way to get the search engines to un-index this page. I know for this instance I could go to GWT and request removal, but for clients that do not have GWT, and for all the other search engines, how could I do this?
Here is the big question: what if I have a promotional page that I don't want indexed and am met with these same limitations? Is there anything to do here?
-
No, unfortunately there is no way to prevent search engine indexation from within the `<body>` tags of your web page. As you mentioned earlier in your question, you can only use the meta robots exclusion tag or the robots.txt file.
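For anyone reading along who *does* have access, the two standard exclusion methods look like this (the file path and contents below are just placeholders, not your actual site):

```html
<!-- Option 1: in the <head> of the page you want kept out of the index -->
<meta name="robots" content="noindex, nofollow">
```

```
# Option 2: in robots.txt at the site root
# (note: this blocks crawling, but an already-discovered URL can
# still appear in results without a snippet)
User-agent: *
Disallow: /secret-promo.html
```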
If you are REALLY intent on blocking indexation of your promotional page and can only use the `<body>` section, perhaps you can consider using an `<iframe>`? For example, create a totally new page with your promotional copy, block it with robots.txt, and ensure you have NO links pointing to it. Then, on your promotional page, use an `<iframe>` to pull in the content from the robots.txt-blocked copy. Honestly, I'm not sure if it'll prevent indexation since I've never tried it before, but it's an idea. Good luck, and tell us how it goes if you do! =]
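A minimal sketch of that iframe idea, assuming a hypothetical `/promo-copy.html` that is blocked in robots.txt and linked from nowhere else:

```html
<!-- Public promo page: only generic shell content in its own markup.
     The actual promotional copy lives at /promo-copy.html, which is
     disallowed in robots.txt, so crawlers shouldn't fetch it. -->
<iframe src="/promo-copy.html"
        width="100%" height="600"
        style="border:none;"
        title="Promotion"></iframe>
```

As noted above, this is untested as an indexation blocker; the framing page's own URL can still be indexed even if the framed content isn't crawled.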
-
Yeah, the page was definitely indexed, and that is how I found it. The issue is pretty much over at this point: this was supposed to be a surprise announcement later this week, but people found it, posted it to forums, and...so much for that. It was a client-side error, so I am not worried.
Now what I want to figure out is this: if I am running a promotional page for specific traffic during a promo period, do not want the page indexed, and am limited to altering only the `<body>`, how do I make sure it doesn't get indexed? Is this possible?
-
Great answer - "bingahoo" - love that.
-
I know this may sound obvious, but I thought I would ask anyway: are you sure your page was indexed?
To check if this is the case, go to Google or Bingahoo and type in **site:websiteURL**. If your page in question does NOT show up, then you don't have a problem.
However, if it does, then I would urge you to quickly register your client's website with GWT and request a URL removal. Also, if you want the page to get de-indexed faster, I would recommend taking the page down altogether and implementing a 301 Permanent Redirect to a relevant page. If you don't have a relevant page, then serve up a 404 Not Found header response.
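On an Apache server, for example, either response can be a one-liner in `.htaccess` (the paths below are hypothetical):

```apache
# 301 the leaked promo URL to a relevant live page
Redirect 301 /secret-promo.html /products/

# ...or, if there is no relevant page, return a 404 instead:
# Redirect 404 /secret-promo.html
```

Other servers (nginx, IIS) have equivalent directives; the point is just that the old URL should answer with a 301 or 404 so the engines drop it faster.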
Of course, if that is too technical and you don't have development resources then you can just delete all the content on the page (or insert a "coming soon" image) and no one would be the wiser. =]
I hope that helps!