Moz Q&A is closed.
After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content - many posts will still be viewable - we have locked both new posts and new replies.
Dynamically Inserting Noindex With JavaScript
-
Hello,
I have a broken plugin creating hundreds of wp-content directory pages that are being indexed by Google. I cannot access the source code of these pages to add a noindex tag to them. The page URLs all have the plugin name within them. To resolve the issue, I wrote a JavaScript solution that dynamically adds a noindex tag to any URL containing the plugin name. Would this noindex be respected by Google, and is there a way to immediately check that it is respected?
Currently, I cannot delete the plugin due to issues with its PHP.
If you would like to view the code: https://codepen.io/trodrick/pen/Gwwaej?editors=0010
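For context, here is a minimal sketch of the kind of script described - this is not the actual CodePen code, and the plugin name below is just a placeholder:

```js
// Minimal sketch: if the current URL contains the plugin's name,
// inject a robots noindex tag into the <head>.
(function () {
  var pluginName = 'broken-plugin-name'; // placeholder, not the real plugin name
  if (window.location.href.indexOf(pluginName) !== -1) {
    var meta = document.createElement('meta');
    meta.name = 'robots';
    meta.content = 'noindex, nofollow';
    document.head.appendChild(meta);
  }
})();
```

Because the tag is only added client-side, it takes effect once Google renders the page; checking the rendered HTML with the URL Inspection tool in Search Console is one way to confirm the tag is being picked up.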
Thanks!
-
Perfect! Happy to help!
-
It seemed to work. Hopefully the noindex is respected, thank you!
-
You can! Via https://support.google.com/webmasters/answer/1663419?hl=en
- If you choose to hide a directory, then any file or directory starting with the prefix you supply will be blocked. So if you enter /folder, then /folder/somefile, /foldername/somefile, and /folder.html will all be blocked.
- To hide an entire site, leave the path empty.
-
It looks like it is active. Thanks, John! Can you no-index an entire directory in GSC? I thought it was only per URL.
-
Hey there! Do you have a link to a page where it's implemented live? As long as you have the noindex, nofollow in there, you should be okay. Other things you can do:
1. Use robots.txt to hide the directory of pages that the plugin is outputting (see the sketch below)
2. Sign into Google Search Console and request removal of the same directory
I would do 1 and 2 to help speed things up once the noindex, nofollow tag is in place.
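For point 1, a robots.txt rule along these lines would block the directory - the path below is a placeholder, not the plugin's actual output location:

```
User-agent: *
Disallow: /wp-content/plugins/broken-plugin-name/
```

One caveat: once crawling of the directory is blocked, Google will no longer fetch those pages and so won't see a noindex tag on them, which is why some people add the robots.txt block only after the pages have dropped out of the index.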
Related Questions
-
Unsolved: Using NoIndex Tag instead of 410 Gone Code on Discontinued products?
Hello everyone, I am very new to SEO and I wanted to get some input & second opinions on a workaround I am planning to implement on our Shopify store. Any suggestions, thoughts, or insight you have are welcome & appreciated! For those who aren't aware, Shopify as a platform doesn't allow us to send a 410 Gone code under any circumstance. When you delete or archive a product/page, it becomes unavailable on the storefront. Unfortunately, the only thing Shopify natively allows me to do is set up a 301 redirect, so when we are forced to discontinue a product, customers currently get a 404 error when trying to go to that old URL. My planned workaround is to automatically detect when a product has been discontinued and add the NoIndex meta tag to the product page. The product page will stay up but be unavailable for purchase. I am also adjusting the LD+JSON to list the product's availability as Discontinued instead of InStock/OutOfStock.
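As a rough illustration of those two changes (the product name, price, and currency below are placeholders, and Shopify template specifics are left out), a discontinued product's page would end up carrying something like:

```html
<!-- placeholders only, not real store data -->
<meta name="robots" content="noindex">
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example discontinued product",
  "offers": {
    "@type": "Offer",
    "priceCurrency": "USD",
    "price": "0.00",
    "availability": "https://schema.org/Discontinued"
  }
}
</script>
```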
Then I let the page sit for a few months so that crawlers have a chance to recrawl and remove the page from their indexes. I think that is how that works?
Once 3 or 6 months have passed, I plan on archiving the product, followed by setting up a 301 redirect pointing to our internal search results page. The redirect will send them to search with a query aimed at similar products. That should prevent people with open tabs, bookmarks, and direct links to that page from receiving a 404 error. I do have Google Search Console set up and integrated with our site, but manually telling Google to remove a page obviously only impacts their index. Will this work the way I think it will?
Will search engines remove the page from their indexes if I add the NoIndex meta tag after they have already been indexed?
Is there a better way I should implement this? P.S. For those wondering why I am not disallowing the page URL in robots.txt: Shopify won't allow me to call collection or product data from within the template that assembles the robots.txt, so I can't automatically add product URLs to the list.
Technical SEO | BakeryTech
-
Duplicate content, although page has "noindex"
Hello, I had an issue with some pages being listed as duplicate content in my weekly Moz report. I've since discussed it with my web dev team and we decided to stop the pages from being crawled. The web dev team added this code to the pages: <meta name='robots' content='max-image-preview:large, noindex dofollow' />, but the Moz report is still reporting the pages as duplicate content. Note from the developer: "So as far as I can see we've added robots to prevent the issue, but maybe there is some subtle change that's needed here. You could check in Google Search Console to see how it's seeing this content, or you could ask Moz why they are still reporting this and see if we've missed something?" Any help much appreciated!
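For reference, directives in that attribute are normally comma-separated, and "dofollow" isn't a standard robots directive ("follow" is the default behaviour); a more conventional version of that tag would look something like:

```html
<meta name="robots" content="max-image-preview:large, noindex, follow">
```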
Technical SEO | rj_dale
-
Can I still monitor noindex, nofollow pages with Google Analytics?
I have a private/login site where all pages are noindex, nofollow. Can I still monitor external site links with Google Analytics?
Technical SEO | jasmine.silver
-
Images, CSS and JavaScript on subdomain or external website
Hi guys, I came across webshops that put images, CSS, and JavaScript on different websites or subdomains. Does this boost SEO results? On our WordPress webshop all the source code is placed under our own domain name:
www.ourdomainname.com/wp-includes/js/jquery/jquery.js?ver=1.11.3
www.ourdomainname.com/wp-content/uploads/2015/09/example.jpg
Examples of other websites:
Website 1: https://www.zalando.nl/heren-home/
Source code:
https://secure-i3.ztat.net//camp/03/d5/1a0168ac81f2ffb010803d108221.jpg
https://secure-media.ztat.net/media/cms/adproduct/ad-product.min.css?_=1447764579000
Website 2: https://www.bol.com/nl/index.html
Source code:
https://s.s-bol.com/nl/static/css/main/webselfservice.1358897755.css
//s.s-bol.com/nl/upload/images/logos/bol-logo-500500.jpg
Website 3: http://www.wehkamp.nl/
Source code:
https://static.wehkamp.nl/assets/styles/themes/wehkamp.color.min.css?v=f47bf1
http://assets.wehkamp.com/i/wehkamp/350-450-layer-SDD-wk51-v3.jpg
Technical SEO | Happy-SEO
-
Lost with canonical, nofollow, noindex. Not sure how to use them on a dynamic PHP site with multiple region select options
I have a site with multiple regions. The main page after a region is selected is login.php, but the regions are defined by ?rid=11, 12, etc. These are being picked up as duplicate content, but they are all different regions. As I hired external PHP coders to develop most of the site, I am scared to start meddling with any of the raw code and would like some advice on how to stop these showing as duplicate content. Should I use noindex, nofollow, or canonical? If canonical, how do I set it up on the main login.php page? P.S. I am an extreme newbie to SEO.
Technical SEO | moby123
-
Can dynamically translated pages hurt a site?
Hi all, looking for some insight please. I have a site we have worked very hard on to get ranked well, and it is doing well in search. The site has about 1,000 pages and climbing, and about 50 of those are translated pages - static pages with unique URLs. I have had no problems with duplicate content or that sort of thing, and all pages were manually translated, so no translation issues. We have been looking at software that can dynamically translate the complete site into a handful of languages - let's say about 5. My problem is that these pages get produced dynamically, and I have concerns that Google will take issue with this, as well as with the huge sudden influx of new URLs, as we could now be looking at an increase of 5,000 new URLs (which usually triggers an alarm). My feeling is that it could risk the stability of the site that we have worked so hard for, and maybe we should just stick with the already translated static pages. I am sure the process could be fine, but I fear a manual inspection and a slap on the wrist for having dynamically created content, as well as the risk of a review trigger period. These days it is hard to know what could get you in "trouble", and my gut says keep it simple, leave it as is, and don't shake it up. Am I being overly concerned? Would love to hear from others who have tried similar changes, and also from those who have not due to a similar "fear". Thanks
Technical SEO | nomad-202323
-
How valuable is content "hidden" behind a JavaScript dropdown really?
I've come across a method implemented by some SEO agencies to fill up pages with somewhat relevant text and hide it behind a JavaScript dropdown. Does Google fall for such cheap tricks? You can see this method used on these pages, for example (just scroll down to the bottom) - it's all in German, but you get the idea, I guess: http://www.insider-boersenbrief.de/ http://www.deko-und-kerzenshop.de/ What is your experience with this way of adding content to a site? Do you think it is valuable, or will it get penalised?
Technical SEO | jfkorn
-
Thoughts about stub pages - 200 & noindex ok, or 404?
With large, database/template-driven websites it is often possible to end up with a lot of pages with no content on them. What are the current thoughts regarding these pages with no content? Options:
1. Return a 200 header code with a noindex meta tag
2. Return a 404 page & header code
3. Something else?
Thanks
Technical SEO | slingshot