Kubernetes - A Collection of YAML Configuration Templates

2023-02-17 16:03:31    [Original]


I. Deployment

  

apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-nginx-test
  namespace: default
spec:
  strategy:         # update strategy
    type: Recreate  # Recreate deletes all old Pods before creating new ones (single-batch update); the alternative is RollingUpdate
  replicas: 2
  selector:
    matchLabels:
      app: nginx-deployment

  template:
    metadata:
      labels:
        app: nginx-deployment
    spec:
      containers:
      - name: nginx
        image: nginx:1.16
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
          containerPort: 80

replicas: the number of replicas the Deployment should keep running.
selector.matchLabels: the label selector that determines which Pods this Deployment manages.
template: the Pod template; every Pod the Deployment creates is based on it.
containers: the list of containers and their configuration, including image, ports, etc.
volumeMounts: mount points inside a container, referencing storage defined under volumes (not used in this example).
volumes: the storage volumes available to the Pod, e.g. an emptyDir (not used in this example).
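The strategy comment above names RollingUpdate as the alternative; a minimal sketch of that variant (the maxSurge/maxUnavailable values are illustrative, not from the original):

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1         # at most one extra Pod above replicas during the update
      maxUnavailable: 0   # never go below the desired replica count
```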
  

II. StatefulSet

1. Mounting NFS via a StorageClass

-- Recommended approach; the PV and PVC are created automatically


# Headless Service
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
# Create the StatefulSet
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx # must match spec.template.metadata.labels
  serviceName: "nginx"
  replicas: 3 # default is 1
  minReadySeconds: 10 # default is 0; note: not supported in older versions
  template:
    metadata:
      labels:
        app: nginx # must match spec.selector.matchLabels; the StatefulSet uses these labels to identify the Pods it manages
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: Swr.cn-north-4.myhuaweicloud.com/library/nginx 
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www         # must match the name in volumeClaimTemplates
          mountPath: /tmp   # path where the volume is mounted inside the container
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]  # or ReadWriteOncePod
      storageClassName: "nfs-class"  # must match the StorageClass's metadata.name
      resources:
        requests:
          storage: 100Mi  # note: the 'm' suffix means milli-units (100m = 0.1 byte); Mi is the mebibyte suffix
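volumeClaimTemplates creates one PVC per replica, named {claim-name}-{pod-name}. For the manifest above (claim www, StatefulSet web, 3 replicas) the generated names can be enumerated; a small sketch of the naming rule:

```shell
# PVCs generated by volumeClaimTemplates follow the pattern <claim>-<statefulset>-<ordinal>
claim="www"; sts="web"; replicas=3
i=0
while [ "$i" -lt "$replicas" ]; do
  echo "${claim}-${sts}-${i}"
  i=$((i + 1))
done
# → www-web-0, www-web-1, www-web-2
```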

2. Mounting NFS directly


---
# Create the StatefulSet
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx # must match spec.template.metadata.labels
  serviceName: "nginx"
  replicas: 3 # default is 1
  minReadySeconds: 10 # default is 0
  template:
    metadata:
      labels:
        app: nginx # must match spec.selector.matchLabels; the StatefulSet uses these labels to identify the Pods it manages
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: Swr.cn-north-4.myhuaweicloud.com/library/nginx 
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www         # must match the volume name under volumes
          mountPath: /tmp   # path where the volume is mounted inside the container
      volumes:
      - name: www
        nfs:
          server: "192.168.1.131"    # NFS server IP
          path: "/root/nfs_root"     # NFS server export directory
          readOnly: true

3. Mounting NFS via a PVC


---
# Create the StatefulSet
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx # must match spec.template.metadata.labels
  serviceName: "nginx"
  replicas: 3 # default is 1
  minReadySeconds: 10 # default is 0
  template:
    metadata:
      labels:
        app: nginx # must match spec.selector.matchLabels; the StatefulSet uses these labels to identify the Pods it manages
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: Swr.cn-north-4.myhuaweicloud.com/library/nginx 
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www         # must match the volume name under volumes
          mountPath: /tmp   # path where the volume is mounted inside the container
      volumes:
      - name: www
        persistentVolumeClaim:
          claimName: nfs-pvc1  # name of the PVC

4. Mounting host storage


Same as method 3, except the PV is created as a local PV; a PVC is then created against the local PV and bound to the Pod.

III. DaemonSet


apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      tolerations:
      # These tolerations let the DaemonSet run on control-plane nodes;
      # remove them if you don't want Pods on your control-plane nodes.
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      containers:
      - name: fluentd-elasticsearch
        image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
      # You may need to set a higher priority class so DaemonSet Pods can preempt running Pods
      # priorityClassName: important
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
          

IV. Service

Headless Service access format: {pod-name}.{service-name}.{namespace}.svc.cluster.local
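For the StatefulSet above (serviceName "nginx", Pods web-0 through web-2), the address of a given Pod can be assembled from this pattern:

```shell
# Assemble the stable DNS name of a StatefulSet Pod behind a headless Service
pod="web-0"      # {pod-name}
svc="nginx"      # {service-name}
ns="default"     # {namespace}
echo "${pod}.${svc}.${ns}.svc.cluster.local"
# → web-0.nginx.default.svc.cluster.local
```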

Example 1:


apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"nginx1"},"name":"nginx-svc","namespace":"default"},"spec":{"ports":[{"nodePort":30442,"port":80,"protocol":"TCP","targetPort":80}],"selector":{"app":"nginx1"},"type":"NodePort"}}
  creationTimestamp: "2024-12-22T01:25:55Z"
  labels:
    app: nginx1
  name: nginx-svc
  namespace: default
  resourceVersion: "10176"
  selfLink: /api/v1/namespaces/default/services/nginx-svc
  uid: efe09419-ee2d-4f2d-b4c7-c4448b69cb91
spec:
  clusterIP: 10.1.77.83
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 30442
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx1
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}

Example 2:


apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx
  name: nginx-svc
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  sessionAffinity: ClientIP # session stickiness based on client IP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 3600    # maximum session sticky time
  selector:                 # spec.selector corresponds to spec.selector.matchLabels in the nginx.yaml above
    app: nginx
  type: NodePort

selector: matches the labels of the backend Pods so the Service routes traffic to the right Pods.
ports: the Service's port mapping, including the protocol, the externally exposed port, and the container port the Service forwards to.
type: the Service type (ClusterIP, NodePort, LoadBalancer, or ExternalName), which determines how the Service can be accessed.

V. Ingress


# Excerpt of the changes made to the stock ingress-nginx controller manifest
apiVersion: apps/v1
kind: DaemonSet   # changed from Deployment to DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  #replicas: 1   # commented out (a DaemonSet does not use replicas)
  template:
    spec:
      nodeSelector:
        kubernetes.io/hostname: k8s-master   # changed: pin to the master node
      # The following lines are new: allow running on the master node
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: nginx-ingress-controller
        # ...unchanged container fields omitted...
        ports:
        - name: http
          containerPort: 80
          hostPort: 80    # added: the Pod is reachable on this host port
          protocol: TCP
        - name: https
          containerPort: 443
          hostPort: 443   # added: the Pod is reachable on this host port
          protocol: TCP
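The snippet above only deploys the controller. An Ingress resource that routes traffic through it could look like the following sketch (the hostname is illustrative; nginx-svc refers to the NodePort Service defined earlier):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  namespace: default
spec:
  ingressClassName: nginx        # must match the controller's ingress class
  rules:
  - host: www.example.com        # illustrative hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-svc      # backend Service
            port:
              number: 80
```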

VI. PV/PVC


# NFS PV
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv1
  # PersistentVolumes are cluster-scoped; a namespace field is ignored
  labels:
    pv: nfs-pv1
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  # Retain: deleting the PVC does not delete the PV | Delete: the PV is deleted along with the PVC | Recycle: scrub the volume for reuse (deprecated)
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: /data/nfstest/share/pv1
    server: 10.20.1.20
    readOnly: false

---
# PV bound to a StorageClass
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv1
  labels:
    pv: nfs-pv1
spec:
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs-class
  #mountOptions:       # not all volume types support this
  #  - xx
  #  - xx
  nfs:
    path: /root/nfs_root
    server: 192.168.1.131
    readOnly: false

---
# local PV
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 100Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete  #Recycle | Retain
  #storageClassName: local-storage      # optional; enable this to manage the PV via a StorageClass
  local:
    path: /mnt/disks/ssd1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - example-node

---
# PVC manifest. The PV is found via a selector; in Kubernetes, resources are located by matching selectors against labels.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc1
  namespace: default
  labels:
    pv: nfs-pvc1
spec:
  resources:
    requests:
      storage: 100Mi
  accessModes:
    - ReadWriteOnce
  selector:
    matchLabels:
      pv: nfs-pv1

---
# A Pod mounting the PVC; for testing, the service is exposed directly via a hostPort on the node
apiVersion: v1
kind: Pod
metadata:
  name: webapp
  namespace: dev1
  labels:
    app: webapp
spec:
  containers:
    - name: webapp
      image: Swr.cn-north-4.myhuaweicloud.com/library/nginx 
      imagePullPolicy: IfNotPresent
      ports:
        - containerPort: 80
          hostPort: 8081
      volumeMounts:
        - name: workdir
          mountPath: /usr/share/nginx/html
  volumes:
    - name: workdir
      persistentVolumeClaim:
        claimName: nfs-pvc1

VII. StorageClass

Role: defines a set of access rules for Kubernetes resources at the namespace level.
RoleBinding: binds subjects (e.g. users or ServiceAccounts) to a Role.
ClusterRole: defines a set of access rules at the cluster level (covering all namespaces).
ClusterRoleBinding: binds subjects to a ClusterRole.


# Command to mark a StorageClass as the default (substitute your StorageClass name)
kubectl patch storageclass <storageclass-name> -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

# Create the StorageClass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-class
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: nfs-client-provisioner       # must match the provisioner's PROVISIONER_NAME (see the Deployment below)
volumeBindingMode: WaitForFirstConsumer   # the alternative, Immediate, binds the PV to the PVC as soon as it is created
allowVolumeExpansion: true                # allow volumes to be expanded
reclaimPolicy: Retain
#parameters:                              # optional; parameters differ per storage backend
mountOptions:                             # optional; not all PVs support this; may enable UNMAP/TRIM at the block-storage layer
  - discard

---
# Create the ServiceAccount
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
  namespace: default # or a namespace of your choosing

---
# RBAC configuration so the provisioner has permission to manage storage resources
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-provisioner-runner  # ClusterRoles are cluster-scoped, so no namespace is needed
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services"]
    verbs: ["get"]
  - apiGroups: ["extensions"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["nfs-provisioner"]
    verbs: ["use"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]

---
# Create the ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: nfs-provisioner-binding  # ClusterRoleBindings are cluster-scoped, so no namespace is needed
subjects:
- kind: ServiceAccount
  name: nfs-provisioner
  namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io

---
# Create the NFS client provisioner
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  namespace: default   # same namespace as the ServiceAccount
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-provisioner    # must match the ServiceAccount's name
      containers:
      - name: nfs-client-provisioner
        #image: registry.cn-hangzhou.aliyuncs.com/open-ali/nfs-client-provisioner:latest 
        image: registry.cn-hangzhou.aliyuncs.com/k8s-image-mirrors/nfs-subdir-external-provisioner:v4.0.2
        env:
        - name: PROVISIONER_NAME
          value: "nfs-client-provisioner"       # provisioner name; must match the StorageClass's provisioner field
        - name: NFS_SERVER
          value: "192.168.1.131"                # NFS server IP
        - name: NFS_PATH
          value: "/root/nfs_root"               # NFS server export directory
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes         # must be set to /persistentvolumes
      volumes:
      - name: nfs-client-root
        nfs:
          server: "192.168.1.131"  # NFS server IP
          path: "/root/nfs_root"   # NFS server export directory

---
# Create the PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "nfs-class"   # must match the StorageClass's metadata.name (this annotation is deprecated; spec.storageClassName is preferred)
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 50Mi

---
# Create a test Pod
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: Swr.cn-north-4.myhuaweicloud.com/library/nginx 
    command:
      - "/bin/sh"
    args:
      - "-c"
      - "touch /mnt/SUCCESS && sleep 3600"   # create a SUCCESS file on the volume, then sleep for an hour
    volumeMounts:
      - name: nfs-pvc
        mountPath: "/mnt"   # must contain the path the command writes to
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim  # must match the PVC's name

VIII. HostPath


apiVersion: v1
kind: Pod
metadata:
  name: test-webserver
spec:
  os: { name: linux }
  nodeSelector:
    kubernetes.io/os: linux
  containers:
  - name: test-webserver
    image: registry.k8s.io/test-webserver:latest
    volumeMounts:
    - mountPath: /var/local/aaa
      name: mydir
    - mountPath: /var/local/aaa/1.txt
      name: myfile
  volumes:
  - name: mydir
    hostPath:
      # Ensure the directory is created if it does not already exist.
      path: /var/local/aaa
      type: DirectoryOrCreate
  - name: myfile
    hostPath:
      path: /var/local/aaa/1.txt
      type: FileOrCreate
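Besides DirectoryOrCreate and FileOrCreate used above, hostPath supports a few more type values; a quick reference based on the upstream hostPath documentation:

```yaml
# hostPath "type" values:
#   ""                  # default: no check is performed before mounting
#   DirectoryOrCreate   # create the directory (mode 0755) if it does not exist
#   Directory           # the directory must already exist
#   FileOrCreate        # create an empty file (mode 0644) if it does not exist
#   File                # the file must already exist
#   Socket              # a UNIX socket must exist at the path
#   CharDevice          # a character device must exist at the path
#   BlockDevice         # a block device must exist at the path
```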

IX. NFS


apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: registry.k8s.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /my-nfs-data
      name: www
  volumes:
  - name: www
    nfs:
      server: 192.168.1.185      # NFS server IP address
      path: /root/nfs_root       # NFS server export path
      readOnly: true

X. Pod


apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2024-12-22T00:17:11Z"
  generateName: nginx1-df447b444-
  labels:
    app: nginx1
    pod-template-hash: df447b444
  name: nginx1-df447b444-lggcd
  namespace: default
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: nginx1-df447b444
    uid: 90dc4f84-0c91-49a2-8944-b638c366e343
  resourceVersion: "11165"
  selfLink: /api/v1/namespaces/default/pods/nginx1-df447b444-lggcd
  uid: 67e818ce-46b5-4c0f-8e64-ee5a93425b86
spec:
  containers:
  - image: Swr.cn-north-4.myhuaweicloud.com/library/nginx
    imagePullPolicy: Always
    name: nginx
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-6hqqq
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: node1
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: default-token-6hqqq
    secret:
      defaultMode: 420
      secretName: default-token-6hqqq
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2024-12-22T00:17:11Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2024-12-22T23:16:06Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2024-12-22T23:16:06Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2024-12-22T00:17:11Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://81ef93bf0a326801a640f6fcc323e9311c36e4ab01df4c0cd31f7e79ec0aee47
    image: Swr.cn-north-4.myhuaweicloud.com/library/nginx:latest
    imageID: docker-pullable://Swr.cn-north-4.myhuaweicloud.com/library/nginx@sha256:066edc156bcada86155fd80ae03667cf3811c499df73815a2b76e43755ebbc76
    lastState:
      terminated:
        containerID: docker://b82b85d271f4b02a82bf9059d0cad0233cfb215a85ed916dce2ea6d5b6e52a33
        exitCode: 0
        finishedAt: "2024-12-22T01:38:26Z"
        reason: Completed
        startedAt: "2024-12-22T00:17:12Z"
    name: nginx
    ready: true
    restartCount: 1
    state:
      running:
        startedAt: "2024-12-22T23:16:05Z"
  hostIP: 192.168.1.132
  phase: Running
  podIP: 10.244.1.14
  qosClass: BestEffort
  startTime: "2024-12-22T00:17:11Z"

XI. ConfigMap


apiVersion: v1
data:
  ClusterConfiguration: |
    apiServer:
      extraArgs:
        authorization-mode: Node,RBAC
      timeoutForControlPlane: 4m0s
    apiVersion: kubeadm.k8s.io/v1beta2
    certificatesDir: /etc/kubernetes/pki
    clusterName: kubernetes
    controllerManager: {}
    dns:
      type: CoreDNS
    etcd:
      local:
        dataDir: /var/lib/etcd
    imageRepository: registry.aliyuncs.com/google_containers
    kind: ClusterConfiguration
    kubernetesVersion: v1.15.0
    networking:
      dnsDomain: cluster.local
      podSubnet: 10.244.0.0/16
      serviceSubnet: 10.1.0.0/16
    scheduler: {}
  ClusterStatus: |
    apiEndpoints:
      master:
        advertiseAddress: 192.168.1.9
        bindPort: 6443
    apiVersion: kubeadm.k8s.io/v1beta2
    kind: ClusterStatus
kind: ConfigMap
metadata:
  creationTimestamp: "2024-12-21T23:49:39Z"
  name: kubeadm-config
  namespace: kube-system
  resourceVersion: "156"
  selfLink: /api/v1/namespaces/kube-system/configmaps/kubeadm-config
  uid: fadc3cb1-268e-4e8f-8200-aa42d59593de

1. Using a ConfigMap as environment variables and as a volume

A Pod example that both mounts a ConfigMap as a volume and uses it for environment variables:


#ConfigMap 
apiVersion: v1
kind: ConfigMap
metadata:
  name: game-demo
data:
  # property-like keys; each key maps to a simple value
  player_initial_lives: "3"
  ui_properties_file_name: "user-interface.properties"

  # file-like keys
  game.properties: |
    enemy.types=aliens,monsters
    player.maximum-lives=5    
  user-interface.properties: |
    color.good=purple
    color.bad=yellow
    allow.textmode=true

#pod    
apiVersion: v1
kind: Pod
metadata:
  name: configmap-demo-pod
spec:
  containers:
    - name: demo
      image: alpine
      command: ["sleep", "3600"]
      env:
        # Define environment variables
        - name: PLAYER_INITIAL_LIVES # note that the name here differs from the key name in the ConfigMap
          valueFrom:
            configMapKeyRef:
              name: game-demo           # the ConfigMap the value comes from
              key: player_initial_lives # the key to read
        - name: UI_PROPERTIES_FILE_NAME
          valueFrom:
            configMapKeyRef:
              name: game-demo
              key: ui_properties_file_name
      volumeMounts:
      - name: config
        mountPath: "/config"
        readOnly: true
  volumes:
  # You can set volumes at the Pod level, then mount them into containers inside the Pod
  - name: config
    configMap:
      # the name of the ConfigMap you want to mount
      name: game-demo
      # a set of keys from the ConfigMap; each is created as a file
      items:
      - key: "game.properties"
        path: "game.properties"
      - key: "user-interface.properties"
        path: "user-interface.properties"

2. Mounting a ConfigMap as a volume


apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: mypod
    image: redis
    volumeMounts:
    - name: foo
      mountPath: "/etc/foo"
      readOnly: true
  volumes:
  - name: foo
    configMap:
      name: myconfigmap

3. Using a ConfigMap as environment variables

To set environment variables from a ConfigMap in a Pod:
  1. For each container in your Pod spec, add an environment variable via the env[].valueFrom.configMapKeyRef field for every ConfigMap key you want to use.
  2. Modify your image and/or command line so the program reads the values from those environment variables.
Below is an example that defines a ConfigMap as Pod environment variables.
The following ConfigMap (myconfigmap.yaml) stores two properties, username and access_level:

apiVersion: v1
kind: ConfigMap
metadata:
  name: myconfigmap
data:
  username: k8s-admin
  access_level: "1"

The following command creates the ConfigMap object:


kubectl apply -f myconfigmap.yaml

The following Pod uses the ConfigMap's contents as environment variables (configmap/env-configmap.yaml):


apiVersion: v1
kind: Pod
metadata:
  name: env-configmap
spec:
  containers:
    - name: app
      command: ["/bin/sh", "-c", "printenv"]
      image: busybox:latest
      envFrom:
        - configMapRef:
            name: myconfigmap

The envFrom field tells Kubernetes to create environment variables from the sources nested within it; the inner configMapRef references a ConfigMap by name and selects all of its key-value pairs. Add the Pod to your cluster, then retrieve its logs to see the output of the printenv command. This confirms that the two key-value pairs from the ConfigMap have been set as environment variables:


kubectl apply -f env-configmap.yaml
kubectl logs pod/env-configmap

The output is similar to:


...
username=k8s-admin
access_level=1
...

Sometimes a Pod does not need access to all the values in a ConfigMap. For example, another Pod might use only the username value. In that case, use the env.valueFrom syntax instead, which lets you select a single key from the ConfigMap. The environment variable's name may also differ from the key in the ConfigMap. For example:


apiVersion: v1
kind: Pod
metadata:
  name: env-configmap
spec:
  containers:
  - name: envars-test-container
    image: nginx
    env:
    - name: CONFIGMAP_USERNAME
      valueFrom:
        configMapKeyRef:
          name: myconfigmap
          key: username

In a Pod created from this manifest, the environment variable CONFIGMAP_USERNAME is set to the value of username in the ConfigMap; other keys in the ConfigMap's data are not copied into the environment. Note that the range of characters allowed in environment variable names is limited: if a name violates those rules, your container cannot access that variable, even though the Pod itself may start.


4. Immutable ConfigMaps


Kubernetes v1.21 [stable]
The Kubernetes Immutable Secrets and ConfigMaps feature offers an option to mark individual Secrets and ConfigMaps as immutable. For clusters that use ConfigMaps extensively (at least tens of thousands of distinct ConfigMaps mounted by Pods), preventing changes to their data has the following advantages:
protects applications from accidental (unwanted) updates that could cause outages;
improves cluster performance by significantly reducing the load on kube-apiserver, since the system closes watches on ConfigMaps marked immutable.
You can create an immutable ConfigMap by setting the immutable field to true. For example:
apiVersion: v1
kind: ConfigMap
metadata:
  ...
data:
  ...
immutable: true

XII. Secret

1. Creating a Secret and mounting it into a Pod


apiVersion: v1
kind: Secret
metadata:
  name: dotfile-secret
data:
  .secret-file: dmFsdWUtMg0KDQo=
  .yixiao: eWl4aWFvCg==
---
apiVersion: v1
kind: Pod
metadata:
  name: secret-dotfiles-pod
spec:
  volumes:
    - name: secret-volume
      secret:
        secretName: dotfile-secret
        optional: true  # the Secret is optional
  containers:
    - name: dotfile-test-container
      image: registry.k8s.io/busybox
      command:
        - ls
        - "-l"
        - "/etc/secret-volume"
      volumeMounts:
        - name: secret-volume
          readOnly: true
          mountPath: "/etc/secret-volume"
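The values under a Secret's data field are base64-encoded. They can be produced and checked with the base64 tool; here the .yixiao value from the Secret above is round-tripped:

```shell
# Encode a plain value for a Secret's data field
printf 'yixiao\n' | base64
# → eWl4aWFvCg==

# Decode a value taken from the Secret above
echo 'eWl4aWFvCg==' | base64 -d
# → yixiao
```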

2. Marking a Secret as immutable

Preventing changes to an existing Secret's data has the following benefits:
1: protects against accidental (or unwanted) updates that could break applications;
2: for clusters that use Secrets extensively (at least tens of thousands of distinct Secrets mounted by Pods), marking them immutable greatly reduces the load on kube-apiserver and improves cluster performance, since the kubelet does not need to watch Secrets marked immutable.
Note:
Once a Secret or ConfigMap is marked immutable, it is no longer possible to revert that change or to modify the contents of the data field; you can only delete and recreate the Secret. Existing Pods keep their mount points to the deleted Secret, so it is recommended to recreate those Pods as well.

You can create an immutable Secret by setting its immutable field to true. For example:

apiVersion: v1
kind: Secret
metadata:
  ...
data:
  ...
immutable: true

3. Creating a Secret containing SSH keys


kubectl create secret generic ssh-key-secret --from-file=ssh-privatekey=/path/to/.ssh/id_rsa --from-file=ssh-publickey=/path/to/.ssh/id_rsa.pub

4. Using Secret data to establish an SSH connection

You can now create a Pod that accesses the Secret containing the SSH keys and consumes it through a volume:


apiVersion: v1
kind: Pod
metadata:
  name: secret-test-pod
  labels:
    name: secret-test
spec:
  volumes:
  - name: secret-volume
    secret:
      secretName: ssh-key-secret
  containers:
  - name: ssh-test-container
    image: mySshImage
    volumeMounts:
    - name: secret-volume
      readOnly: true
      mountPath: "/etc/secret-volume"

When the container's command runs, the key data is available at:
/etc/secret-volume/ssh-publickey
/etc/secret-volume/ssh-privatekey

The container is then free to use the Secret data to establish an SSH connection.

XIII. mountOptions


  mountOptions:
    - actimeo=0              # disable NFS attribute caching
    - lookupcache=none,sync  # disable directory-entry lookup caching; ",sync" additionally passes the sync option
    - sync                   # synchronous writes