Un-Indexing a Page Without Access to robots.txt or the <head>
-
I am in a situation where a page was pushed live before it was supposed to go live (it went live for an hour and was then taken down). Normally I would utilize the robots.txt file or the meta robots tag in the <head>, but I do not have access to either, and putting in a request will not suffice, as it is against protocol with the CMS. So basically I am left with only the <body> of the page, and I cannot seem to find a good way to get the search engines to un-index it. I know that for this instance I could go to Google Webmaster Tools (GWT) and request removal, but how could I do this for clients that do not have GWT, and for all the other search engines?
Here is the bigger question: what if I have a promotional page that I don't want indexed and am met with these same limitations? Is there anything to do here?
-
No, unfortunately there is no way to prevent search engine indexation from within the <body> of your web page. As you mentioned in your question, you can either utilize the meta robots exclusion tag or the robots.txt file.
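For reference, a minimal sketch of those two standard options (the /promo-page/ path is just a placeholder, not an actual URL from your site):
In the <head> of the page you want kept out of the index:
<meta name="robots" content="noindex">
Or block crawling in robots.txt at the site root (note this stops crawling rather than removing a URL that is already indexed):
User-agent: *
Disallow: /promo-page/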
If you are REALLY intent on blocking indexation of your promotional page and can only work within the <body> section, perhaps you could consider using an <iframe>? For example, create a completely separate page with your promotional copy, block it with robots.txt, and make sure NO links point to it. Then, on your promotional page, use an <iframe> to pull in the content from that robots.txt-blocked copy. Honestly, I'm not sure whether it will prevent indexation since I've never tried it, but it's an idea. Good luck, and tell us how it goes if you try it! =]
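A rough sketch of that idea, with all URLs as hypothetical placeholders: block the copy-only page in robots.txt, then frame it into the public page.
In robots.txt:
User-agent: *
Disallow: /promo-copy/
Inside the <body> of the public promotional page:
<iframe src="/promo-copy/offer.html" width="100%" height="600" title="Promotion"></iframe>
Keep in mind that a robots.txt block only stops crawling; if anything ever links to the framed URL, search engines can still index that URL itself, which is why having no links pointing to it matters.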
-
Yeah, the page was definitely indexed, and that is how I found it. The issue is pretty much over at this point: this was supposed to be a surprise announcement later this week, but people found the page, posted it to forums, and... so much for that. It was an error on the client's side, so I am not worried.
Now what I want to figure out is this: if I am running a promotional page for specific traffic during a promo period, do not want the page indexed, and am limited to altering only the <body>, how do I make sure it doesn't get indexed? Is this possible?
-
Great answer - "bingahoo" - love that.
-
I know this may sound obvious, but I thought I would ask anyway: are you sure your page was indexed?
To check if this is the case, go to Google or Bingahoo and type in **site:websiteURL**. If the page in question does NOT show up, then you don't have a problem.
However, if it does show up, then I would urge you to quickly register your client's website with GWT and request a URL removal. Also, if you want the page to get de-indexed "faster", I would recommend taking down the page altogether and implementing a 301 Permanent Redirect to a relevant page. If you don't have a relevant page, then serve up a header response of 404 Not Found.
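If the site runs on Apache, a minimal .htaccess sketch of those two options might look like the following (the paths here are placeholders, not real URLs, and other servers have their own equivalents):
# Permanently redirect the taken-down page to a relevant page
Redirect 301 /leaked-page/ /relevant-page/
# Or, if there is no relevant page, return 404 Not Found for it
Redirect 404 /leaked-page/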
Of course, if that is too technical and you don't have development resources then you can just delete all the content on the page (or insert a "coming soon" image) and no one would be the wiser. =]
I hope that helps!
Related Questions
-
Blocking pages from Moz and Alexa robots
Hello, we want to block all pages in this directory from the Moz and Alexa robots: /slabinventory/search/ Here is an example page: https://www.msisurfaces.com/slabinventory/search/granite/giallo-fiesta/los-angeles-slabs/msi/ Let me know if this is a valid disallow for what I'm trying to do:
User-agent: ia_archiver
Disallow: /slabinventory/search/*
User-agent: rogerbot
Disallow: /slabinventory/search/*
Thanks.
Technical SEO | Pushm
-
Get List Of All Indexed Google Pages
I know how to run site:domain.com but I am looking for software that will put these results into a list and return server status (200, 404, etc). Anyone have any tips?
Technical SEO | InfinityTechnologySolutions
-
Utilising Wordpress Attachment Pages Without Getting Duplicate Content Warnings.
I have a WordPress site that relies heavily on images and their usefulness. Each post links to larger sizes of the images, with links back to the post and the "gallery" of all images uploaded to the post. Unfortunately this goes against the "rules", and our attachment pages show as duplicate content in Google (even though the image titles are different). There must be a way to utilise attachment pages and make the most of them without getting duplicate content warnings?
Technical SEO | DotP
-
Post Site Migration - thousands of indexed pages, 4 months after
Hi all, Believe me, I think I've already tried and googled every possible question that I have. This one is very frustrating. I have the following old domain, fancydiamonds dot net. We built a new site, Leibish dot com, and did everything by the book: individual 301 redirects for all the pages, a change of address via GWT, and maintaining and improving the old optimization and hierarchy. Four months after the site migration, we still have to gain back more than 50% of our original organic traffic (17,000 vs. 35,500-50,000). The thing that strikes me the most is that you can still find 2,400 indexed pages on Google (they all have 301 redirects). And more than this: if you search for the old domain name on Google, fancydiamonds dot net, you'll find the old domain! Something is not right here, but I have no explanation why these pages still exist. Any help will be highly appreciated. Thanks!
Technical SEO | skifr
-
Robots.txt
I have a client who, after a designer added a robots.txt file, has experienced continual growth in URLs blocked by robots.txt, but the number of blocked URLs (approx. 1,700) has now surpassed the number indexed (1,000). Surely that would mean all current URLs are blocked (plus some extra mysterious ones)? However, pages are still listed in Google and traffic is being generated from organic search, so it doesn't look like this is the case, apart from the rather alarming Webmaster Tools report. Any ideas what's going on here? Cheers, Dan
Technical SEO | Dan-Lawrence
-
Blocked URLs by robots.txt
Google Webmaster Tools shows me 10,936 URLs blocked by robots.txt, which is very strange when you go to the "Index Status" section, which shows that robots.txt has been blocking many URLs since April 2012 (you can see this more precisely in the attached WMT chart). I cannot explain why I have blocked URLs, because I have almost nothing in robots.txt. My robots.txt is like this:
User-agent: *
I thought I was penalized by Penguin in April 2012 because I am constantly losing visitors, now down over 40%. Could it be a different penalty? Any help is welcome because I'm already quite overwhelmed. Mera
Technical SEO | meralucian37
-
Why is my office page not being indexed?
Good morning from a partly cloudy, 24 degrees C Wetherby, UK 🙂 This page is not being indexed by Google: http://www.sandersonweatherall.co.uk/office-to-let-leeds/
First question: I've checked the robots.txt file and there are no problems, and I'm in the midst of updating the XML sitemap (it had the old one in place). The page only has one link pointing to it, from http://www.sandersonweatherall.co.uk/Site-Map/ So is the reason it's not being indexed simply a lack of SEO juice from inbound links, and does the remedy lie in routing more inbound links to the offending page?
Second question: is the quickest way to diagnose whether a web address is not being indexed to paste the URL into the Google search box and, if it doesn't return the page, conclude there's a problem? Thanks in advance, David
Technical SEO | Nightwing
-
Existing Pages in Google Index and Changing URLs
Hi!! I am launching a newly recoded site this week and had another noobie question. The URL structure has changed slightly, and I have installed 301 redirects to take care of that. I am wondering how Google will handle my "old" pages. Will they just fall out of the index, or does the 301 redirect tell Google to rewrite the URLs in the index? I am just concerned I may see an "old" page and a "new" page with the same content in the index. Just want to make sure I have covered all my bases. Thanks!! Lynn
Technical SEO | hiphound