Kubernetes: use NFS persistent volumes with a root user in a pod












OK, I have been banging my head against the wall for several days now...



My use case:
I am on my own bare-metal cloud: Ubuntu machines, with Kubernetes set up on four of them, one master and three workers. I have also created a private registry, cert-manager, and so on.



The NFS shares are also hosted on the worker nodes.



I have a piece of software that has to run as root inside a pod, and I want this root user to store data on a persistent volume backed by an NFS share.



root_squash is biting me in the butt...



I have created volumes and claims, and everything works fine as long as I am not root inside the pod. When I am root, the files on the NFS share are squashed to nobody:nogroup, and the root user inside the pod can no longer use them...



What to do?



1) Export the NFS share with the no_root_squash option. This seems like a very bad idea given the security implications; I am not sure whether it can be mitigated by firewall rules alone (see the export sketch after this list).



2) I tried all kinds of securityContext options for fsGroup, plus uid and gid mount options. They all work fine as long as you are not root in the pod, but I am not sure I fully understand this.
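To make option 1 concrete, here is a minimal sketch of such an export, assuming the share lives under /data/k8s on the worker node and the cluster sits in a hypothetical 212.114.120.0/24 subnet. Restricting the export to known client addresses narrows who can reach the share as root, but any root process on those hosts still writes as root, so this mitigates the exposure rather than removing it:

# /etc/exports on the NFS server (a worker node); the subnet is hypothetical
/data/k8s 212.114.120.0/24(rw,sync,no_subtree_check,no_root_squash)

# apply the changed exports without restarting the server:
# exportfs -ra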



My PV YAML:



apiVersion: v1
kind: PersistentVolume
metadata:
  name: s03-pv0004
  annotations:
    pv.beta.kubernetes.io/gid: "1023"
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: slow
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /data/k8s/pv0004
    server: 212.114.120.61


As you can see, I created a dedicated NFS user with uid 1023 and use it to make the pods store data as that user... This works fine as long as I am not root inside the pods...
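For reference, a claim that would bind to this PV could look like the sketch below (the claim name is made up):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: s03-pvc0004           # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: slow      # must match the PV for the claim to bind
  resources:
    requests:
      storage: 10Gi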



The pods I am running are MarkLogic pods in a StatefulSet, like so:



apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: marklogic
  namespace: default
spec:
  selector:
    matchLabels:
      app: marklogic
  serviceName: "ml-service"
  replicas: 3
  template:
    metadata:
      labels:
        app: marklogic
    spec:
      securityContext:
        fsGroup: 1023
      ... more


runAsUser: 1023 works, but again not if I want to be root inside the pod...
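Spelled out, the pod-level securityContext that works in the non-root case is sketched below; switching to runAsUser: 0 is exactly what triggers the squash, because every write then reaches the NFS server as root:

securityContext:
  runAsUser: 1023   # container processes run as the dedicated NFS user
  fsGroup: 1023     # volume files are group-owned by this gid
  # with runAsUser: 0 the pod writes as root and the server maps the
  # ownership to nobody:nogroup, regardless of fsGroup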



My question: can this be done? That is, run a pod as root and still use NFS as a persistent volume with a secure NFS share (one that does not use no_root_squash)?



Or do I need to drop the idea of NFS and move to an alternative like GlusterFS?










kubernetes root nfs persistent-volumes






asked Nov 23 '18 at 19:14









Hugo Koopmans

  • I have a similar setup, and personally use no_root_squash, but maybe give a try to the dynamic NFS client provisioner: github.com/helm/charts/tree/master/stable/… It seems it creates the files with 0777 permissions - not sure on how it would reflect on your NFS drive though: github.com/kubernetes-incubator/external-storage/blob/master/… Additionally, you can specify the client IPs for the NFS drive in /etc/exports file, so imo no_root_squash shouldn't be a big security issue.

    – Utku Özdemir
    Nov 25 '18 at 2:10
1 Answer
I have moved from NFS storage to the local storage option in Kubernetes. This can be annotated so that a pod that needs the PV lands on the same node each time it is recreated...






        answered Nov 30 '18 at 10:42









Hugo Koopmans
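A minimal sketch of what that local-storage PV can look like (node name, path, and size are hypothetical). In current Kubernetes the pinning is expressed as nodeAffinity on the PV rather than an annotation, and the scheduler then places any pod using the claim on that node:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: ml-local-pv0001             # hypothetical name
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /data/k8s/ml0001          # directory on the chosen worker
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker-01         # hypothetical node name

This is usually paired with a StorageClass that sets volumeBindingMode: WaitForFirstConsumer. Because the volume is a plain directory on the node's own filesystem, root inside the pod keeps root ownership of its files; there is no squashing layer in between.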
