📚 K8s Installation - kubeadm
Last modified by xy20118 on 2024-08-29 09:26:33

Preface

kubeadm is a tool that provides `kubeadm init` and `kubeadm join` for deploying a K8s cluster quickly; it is comparatively simple to use.

kubeadm: an installation tool that makes setting up a K8s system fast and convenient.

1. What do you need before installing? Three machines, all running Ubuntu 22.04.4

| IP | Spec | Node |
| ------ | ------ | ------ |
| 192.168.64.100 | 4C4G | master-1 |
| 192.168.64.102 | 4C4G | node-1 |
| 192.168.64.103 | 4C4G | node-2 |

Initialize the system environment

Run these steps on all three machines.

## Set the hostname (run the matching command on its own node)
    hostnamectl hostname master-1
    hostnamectl hostname node-1
    hostnamectl hostname node-2

## Set up hosts resolution (append to /etc/hosts)
192.168.64.100 master-1
192.168.64.102 node-1
192.168.64.103 node-2
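The three entries above can be appended in one step with a heredoc (a small sketch; run it on all three machines):

```shell
# Append the cluster name resolution to /etc/hosts
cat >> /etc/hosts <<'EOF'
192.168.64.100 master-1
192.168.64.102 node-1
192.168.64.103 node-2
EOF
grep master-1 /etc/hosts   # confirm the entries landed
```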

## Disable the firewall
ufw disable


## Disable the swap partition (Kubernetes requires swap to be off)
swapoff -a
sed -i '/swap.img/s/^/#/' /etc/fstab  # disable permanently
free  # check that swap is off
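Beyond eyeballing `free`, the swap state can also be checked programmatically (a small sketch reading /proc/meminfo):

```shell
# Prints "swap disabled" only when the kernel reports zero total swap
if [ "$(awk '/^SwapTotal:/ {print $2}' /proc/meminfo)" = "0" ]; then
  echo "swap disabled"
else
  echo "swap still active"
fi
```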


## Switch apt to the Aliyun mirror
Pick the entries matching your release from: https://developer.aliyun.com/mirror/ubuntu?spm=a2c6h.13651102.0.0.6d071b11wMRAR5

vim /etc/apt/sources.list
# replace the stock entries with the mirror URLs

## Refresh the package index
apt update

Install a container engine

Background: through v1.23 (inclusive), Kubernetes used Docker as its container engine; from v1.24 onward, containerd replaces Docker. containerd is an open-source container runtime that provides the core functionality containers need. OCI is an open standards body whose main goal is to drive open standardization of container technology, promoting the growth and interoperability of the container ecosystem. The relationship between Docker and containerd: containerd replaced the Docker engine in order to improve efficiency.

Install containerd

    apt install containerd[=version] -y  # installs the latest version by default
    /lib/systemd/system/containerd.service  # (reference this unit file for binary installs)
    mkdir /etc/containerd/   # holds containerd's configuration file
    # Generate the default configuration
    containerd config default > /etc/containerd/config.toml
    # Change the pause image address (the line numbers below may differ between containerd versions):
    line 65: sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"  # Aliyun mirror address
        #sandbox_image = "harbor.hiuiu.com/kubernetes/google_containers/pause:3.9" # local Harbor mirror address
    # Configure a registry mirror for image pulls
        168       [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
        169   [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
        170      endpoint = ["https://0lbqdwuy.mirror.aliyuncs.com"]

    # Change to true (use the systemd cgroup driver, matching the kubelet)
        137             SystemdCgroup =  true
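Since those line numbers shift between containerd versions, the two key edits can also be applied non-interactively with sed (a sketch; the patterns assume the stock `containerd config default` output):

```shell
# Point the pause (sandbox) image at the Aliyun mirror
sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"#' /etc/containerd/config.toml
# Switch to the systemd cgroup driver so containerd matches the kubelet
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
# Restart and enable containerd so the changes take effect
systemctl restart containerd && systemctl enable containerd
```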

Install the crictl tool

Optional; nerdctl is generally used instead.

# Download the crictl package
wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.1/crictl-v1.26.1-linux-amd64.tar.gz

## Or upload v1.29.0 locally (the steps below use the v1.29.0 tarball)
mkdir /usr/local/bin/crictl
tar xvf crictl-v1.29.0-linux-amd64.tar.gz -C /usr/local/bin/crictl
vim /etc/profile
export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/usr/local/bin/crictl
source /etc/profile
crictl -v
# Configure crictl
    cat > /etc/crictl.yaml <<EOF
    runtime-endpoint: "unix:///run/containerd/containerd.sock"
    image-endpoint: "unix:///run/containerd/containerd.sock"
    timeout: 10
    debug: false
    EOF

Install the nerdctl tool

# Download (note the commands below install a locally uploaded 1.7.6 tarball; adjust the version to match what you download)
wget https://github.com/containerd/nerdctl/releases/download/v1.3.0/nerdctl-1.3.0-linux-amd64.tar.gz
    # Install from the local upload
    tar xvf nerdctl-1.7.6-linux-amd64.tar.gz -C /usr/local/bin/
    nerdctl version
    mkdir /etc/nerdctl
    cat > /etc/nerdctl/nerdctl.toml <<EOF
    namespace = "k8s.io"
    debug = false
    debug_full = false
    insecure_registry = true
    EOF

Install the CNI plugins

What is CNI?
    It provides the bridge network for containers; without CNI, containers only get host network mode.
# Install CNI:
wget https://github.com/containernetworking/plugins/releases/download/v1.5.1/cni-plugins-linux-amd64-v1.5.1.tgz
# or upload locally
mkdir -p /opt/cni/bin/
tar xvf cni-plugins-linux-amd64-v1.5.1.tgz -C /opt/cni/bin/
# Test (options must come before the image name)
nerdctl run -it --rm --name nginx_test -p 8000:80 nginx bash
# Being able to start and enter the container is enough; the nginx image can be uploaded locally and loaded with `nerdctl load -i <file>`

Initialize the K8s environment

1. Install the basic packages
apt install  ipvsadm tree ipset  chrony -y
# ipvsadm: administers IPVS load balancing
# ipset: manages the IP sets used by IPVS and netfilter rules
# chrony: time synchronization service
2. Stop the firewall again (SELinux is not enabled on Ubuntu, so there is nothing to disable there)
systemctl stop ufw
3. Configure a time server
Why configure a time server? (uniform time across servers simplifies management and keeps services consistent)
sed -i 's/pool ntp.ubuntu.com/pool 192.168.64.100/' /etc/chrony/chrony.conf
systemctl restart chrony  # on Ubuntu the unit is named chrony
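Note the sed above only points the *nodes* at master-1; for master-1 itself to serve time to the cluster, its own /etc/chrony/chrony.conf needs additions along these lines (a sketch; the subnet is taken from the address plan above):

```
# /etc/chrony/chrony.conf on master-1 (additions)
allow 192.168.64.0/24    # permit the cluster nodes to query this server
local stratum 10         # keep serving time even if upstream NTP is unreachable
```

Restart chrony on master-1 after editing.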
    -----------------
## Alternative: set up the time server with ntp
timedatectl # check the local time
timedatectl set-timezone Asia/Shanghai     # set the time zone
timedatectl set-time "YYYY-MM-DD HH:MM:SS"   # set the time manually, optional
apt install ntp -y
systemctl start ntp  # start the NTP service
systemctl enable ntp.service # enable NTP at boot
vim /etc/ntp.conf # edit the configuration file
server ntp1.aliyun.com # use the Aliyun NTP server
pool 192.168.64.100   # using the master node as the time server also works
systemctl restart ntp # restart the time service
ntpq -p  # check synchronization; output here means it is working
# In production, a cron job can resync the time every night to keep the servers consistent
# Resync automatically at 1 a.m. every night (on Ubuntu, root's crontab lives under /var/spool/cron/crontabs/)
echo "0 1 * * * ntpdate ntp1.aliyun.com" >> /var/spool/cron/crontabs/root
crontab -l

4. Load kernel modules
# Kubernetes Services support two proxy modes, one based on iptables and one based on ipvs; ipvs performs better. To use the ipvs mode, the ipvs modules must be loaded manually.

modprobe br_netfilter && lsmod |grep br_netfilter
modprobe ip_conntrack && lsmod | grep conntrack
cat >/etc/modules-load.d/modules.conf<<EOF
ip_vs
ip_vs_lc
ip_vs_lblc
ip_vs_lblcr
ip_vs_rr
ip_vs_wrr
ip_vs_sh
ip_vs_dh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
ip_tables
ip_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
xt_set
br_netfilter
nf_conntrack
overlay
EOF
systemctl restart systemd-modules-load.service
lsmod | grep -e ip_vs -e nf_conntrack

5. Adjust kernel parameters
vim /etc/sysctl.conf  # append the following lines
net.ipv4.ip_forward=1
vm.max_map_count=262144
kernel.pid_max=4194303
fs.file-max=1000000
net.ipv4.tcp_max_tw_buckets=6000
net.netfilter.nf_conntrack_max=2097152
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness=0

sysctl -p  # apply the changes
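Instead of hand-editing /etc/sysctl.conf, the same parameters can be written to a dedicated drop-in file in one step (a sketch; the file name is an arbitrary choice):

```shell
# Write the kernel parameters to a sysctl drop-in and apply them all
cat > /etc/sysctl.d/99-kubernetes.conf <<'EOF'
net.ipv4.ip_forward=1
vm.max_map_count=262144
kernel.pid_max=4194303
fs.file-max=1000000
net.ipv4.tcp_max_tw_buckets=6000
net.netfilter.nf_conntrack_max=2097152
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness=0
EOF
sysctl --system   # reloads every drop-in, including this one
```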

6. Regenerate the machine-id
cat /etc/machine-id
rm -f /etc/machine-id
systemd-machine-id-setup  # regenerate a unique ID (needed when the machines were cloned from one image)

Install K8s proper, the kubeadm way

1. Configuration: see the notes above

2. Install kubeadm, kubelet and kubectl

    apt upgrade  # bring installed packages up to date
    apt install apt-transport-https ca-certificates curl gpg
    mkdir -p -m 755 /etc/apt/keyrings
    curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
    # the gpg key lets apt verify the repository signatures
    ## Aliyun mirror
    echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/deb/ /" |  tee /etc/apt/sources.list.d/kubernetes.list
## Install
apt-get update && apt-cache madison kubeadm
apt-get install -y kubelet=1.30.3-1.1 kubeadm=1.30.3-1.1 kubectl=1.30.3-1.1


## Offline local installation (requires a local Harbor registry)
apt-get update
apt-get upgrade
apt-get install socat ebtables conntrack -y
dpkg -i kubernetes-cni_1.4.0-1.1_amd64.deb
dpkg -i cri-tools_1.30.1-1.1_amd64.deb
dpkg -i kubeadm_1.30.3-1.1_amd64.deb
dpkg -i kubectl_1.30.3-1.1_amd64.deb
dpkg -i kubelet_1.30.3-1.1_amd64.deb
3. Pull the images: offline, run `bash imagesFromlocal.sh`; with the Aliyun mirror, run `bash images_download.sh`
nerdctl pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.30.3
nerdctl pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.30.3
nerdctl pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.30.3
nerdctl pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.30.3
nerdctl pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9
nerdctl pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.12-0
nerdctl pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.11.1

Create the master

kubeadm init --apiserver-advertise-address=192.168.64.100 --apiserver-bind-port=6443 --kubernetes-version=v1.30.3 --pod-network-cidr=10.200.0.0/16 --service-cidr=10.96.0.0/16 --service-dns-domain=cluster.local --image-repository=registry.aliyuncs.com/google_containers --ignore-preflight-errors=swap

For local images: --image-repository=harbor.hiuiu.com/kubeadm_v1.30.3

Then follow the printed instructions:
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    # or, as root:
    export KUBECONFIG=/etc/kubernetes/admin.conf

On the worker nodes

5. Join the nodes to the cluster
kubeadm join 192.168.64.100:6443 --token tjg2ya.qxghlqugffx33rg6 \
    --discovery-token-ca-cert-hash sha256:d763b517f993d0992d168db4a2650cb2fd2cca48900b99334241be86ecb825e4

Check from the master node

# View the cluster nodes (NotReady is expected until the network plugin is installed)
kubectl get nodes
NAME       STATUS     ROLES           AGE     VERSION
master-1   NotReady   control-plane   9m50s   v1.30.3
node-1     NotReady   <none>          21s     v1.30.3
node-2     NotReady   <none>          12s     v1.30.3

Install the Calico network plugin

Calico is a networking and network-security solution, deployed as a K8s component. Note: Kubernetes has many CNI plugins, such as Calico, Flannel, Canal, Cilium and kube-router. Flannel suits small clusters; Calico works for both small and large clusters and supports fine-grained network policy, so it is the usual choice in production. This deployment uses v3.28.0.

Project: https://github.com/projectcalico/calico/

Docs: https://docs.tigera.io/calico/latest/about/

Upload calico-release-v3.28.0.tgz locally

tar -zxvf calico-release-v3.28.0.tgz
cd release-v3.28.0/
cd manifests
# Edit the calico.yaml manifest
vim calico.yaml
# search for CALICO_IPV4POOL_CIDR
- name: CALICO_IPV4POOL_CIDR
  value: "10.200.0.0/16"
# Use the same pod CIDR (matching --pod-network-cidr) on all three machines, and keep the YAML indentation aligned
# Import the images on the master and both nodes
cd /release-v3.28.0/images
nerdctl load -i calico-cni.tar
nerdctl load -i calico-node.tar
nerdctl load -i calico-kube-controllers.tar
nerdctl images

# Run on the master node
kubectl apply -f calico.yaml

Verify

# Check status; every pod should be Running
kubectl get po -n kube-system

# Show the details of a given pod
kubectl describe pod <name> -n kube-system

Make kubectl usable on every node

# 1. Copy /etc/kubernetes/admin.conf from the master into the /etc/kubernetes directory of each machine that should run kubectl

scp /etc/kubernetes/admin.conf  IP:/etc/kubernetes

# 2. Set the environment variable on that machine
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
source ~/.bash_profile

Install a graphical UI: Dashboard

Common graphical UIs include Dashboard, Rancher, Kuboard, KubeSphere, etc.

Prepare two images obtained locally: dashboard.v2.7.0.tar and metrics-scraper.v1.0.8.tar,
plus three YAML files: dashboard-v2.7.0_online.yaml, admin-user.yaml and admin-secret.yaml.

dashboard-v2.7.0_online.yaml

# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30000
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.7.0
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.8
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
          - mountPath: /tmp
            name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}

Load the images locally

nerdctl load -i  dashboard.v2.7.0.tar
nerdctl load -i  metrics-scraper.v1.0.8.tar
# After loading, make sure the image references match those in the YAML files
kubectl apply -f dashboard-v2.7.0_online.yaml -f admin-user.yaml -f admin-secret.yaml
# Order matters: the dashboard YAML must be applied first
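For reference, admin-user.yaml conventionally contains a ServiceAccount plus a cluster-admin binding along these lines (a sketch modeled on the standard Dashboard setup; the account name `admin-user` is an assumption and must stay consistent with what admin-secret.yaml and the token lookup below reference):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user            # assumed name; keep consistent across the three YAML files
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin         # grants full cluster access to the dashboard login
subjects:
  - kind: ServiceAccount
    name: admin-user
    namespace: kubernetes-dashboard
```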

# Verify
https://IP:30000

# Get the login token
kubectl describe secret -n kubernetes-dashboard dashboard-admin-user

# Paste the token on the web login page to sign in