Installation (run on every machine unless noted otherwise)



Download

  • 11.1.2. Verify the kernel version
    [root@localhost ~]# uname -r
    5.4.273-1.el7.elrepo.x86_64
    [root@localhost ~]# cat /proc/version
    Linux version 5.4.273-1.el7.elrepo.x86_64 (mockbuild@Build64R7) (gcc version 9.3.1 20200408 (Red Hat 9.3.1-2) (GCC)) #1 SMP Wed Mar 27 15:58:08 EDT 2024
    [root@localhost ~]#

    2. Install keepalived

  • 14. ntp: if ntp is already installed, check its version; if the version is wrong, it can be uninstalled first

    Check for an installed ntp package:

    rpm -qa | grep ntp

    Uninstall:

    rpm -e --nodeps ntp-xxxx

    5.3. Install Kubernetes

    • 14.6.1. Disable the swap partition
      # Temporarily disable swap
      swapoff -a
      # Permanently disable swap
      sed -ri 's/.*swap.*/#&/' /etc/fstab
      # Verify
      grep swap /etc/fstab
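      A quick way to confirm swap is actually off (a sanity check; free is part of procps and available on CentOS 7):

      free -h    # the Swap line should show 0B total after swapoff -a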

      9. Generate the configuration file

    • 10.3. Change the hostname
    • 2.2. Install docker registry and do the related configuration
      • 14.5.1. Download NTP

        Download links:
        https://pkgs.org/download/ntp
        https://pkgs.org/download/ntpdate
        https://pkgs.org/download/libopts.so.25()(64bit)

        5.2. Check the operating system and kernel version

        Check the kernel:

        [root@localhost ~]# uname -r
        3.10.0-1160.71.1.el7.x86_64
        [root@localhost ~]#
        [root@localhost ~]# cat /proc/version
        Linux version 3.10.0-1160.71.1.el7.x86_64 (mockbuild@kbuilder.bsys.centos.org) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-44) (GCC)) #1 SMP Tue Jun 28 15:37:28 UTC 2022

        Check the operating system:

        [root@localhost ~]# cat /etc/*release
        CentOS Linux release 7.9.2009 (Core)
        NAME="CentOS Linux"
        VERSION="7 (Core)"
        ID="centos"
        ID_LIKE="rhel fedora"
        VERSION_ID="7"
        PRETTY_NAME="CentOS Linux 7 (Core)"
        ANSI_COLOR="0;31"
        CPE_NAME="cpe:/o:centos:centos:7"
        HOME_URL="https://www.centos.org/"
        BUG_REPORT_URL="https://bugs.centos.org/"
        CENTOS_MANTISBT_PROJECT="CentOS-7"
        CENTOS_MANTISBT_PROJECT_VERSION="7"
        REDHAT_SUPPORT_PRODUCT="centos"
        REDHAT_SUPPORT_PRODUCT_VERSION="7"
        CentOS Linux release 7.9.2009 (Core)
        CentOS Linux release 7.9.2009 (Core)
        [root@localhost ~]#

        1.2. Change the hostname / hosts file

        • 2.1. Install keepalived
          Run the installation on all three master nodes; upload the keepalived package to each of them.
          # Extract
          tar -zvxf keepalived-2.2.8.tar.gz
          cd keepalived-2.2.8
          ./configure --prefix=/opt/software/keepalived --sysconf=/etc
          make && make install
          Create the health-check script:
          vi /etc/keepalived/check_apiserver.sh
          # Add the following content
          #!/bin/bash
          # Check whether nginx is running; if not, restart it
          if [ "$(ps -ef | grep "nginx: master process" | grep -v grep)" == "" ]; then
              # Restart nginx
              docker restart nginx
              sleep 5
              # If nginx still is not up, stop keepalived so the VIP fails over
              if [ "$(ps -ef | grep "nginx: master process" | grep -v grep)" == "" ]; then
                  systemctl stop keepalived
              fi
          fi

          # Make the script executable
          chmod +x /etc/keepalived/check_apiserver.sh
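          Before wiring it into keepalived, the script can be exercised by hand (a quick sanity check, assuming the nginx container used later in this setup is already running):

          bash /etc/keepalived/check_apiserver.sh; echo $?    # 0 means nginx was found running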
          Edit the configuration on each of the three machines (192.168.115.10 is the VIP):
          cd /etc/keepalived
          cp keepalived.conf.sample keepalived.conf
          # Edit keepalived.conf on each node

          192.168.115.11:

          ! Configuration File for keepalived
          global_defs {
             notification_email {
               acassen@firewall.loc
               failover@firewall.loc
               sysadmin@firewall.loc
             }
             notification_email_from Alexandre.Cassen@firewall.loc
             smtp_server 192.168.200.1
             smtp_connect_timeout 30
             router_id LVS_DEVEL
             vrrp_skip_check_adv_addr
             vrrp_strict
             vrrp_garp_interval 0
             vrrp_gna_interval 0
          }
          vrrp_script chk_apiserver {
              script "/etc/keepalived/check_apiserver.sh"    # health-check script
              interval 5                                     # check interval (seconds)
              weight -5                                      # priority adjustment
              fall 2
              rise 1
          }
          vrrp_instance VI_1 {
              state MASTER                      # MASTER here; the standby nodes use BACKUP
              interface ens33                   # NIC this instance binds to
              mcast_src_ip 192.168.115.11       # advertisement source address (k8s-master01: .11, k8s-master02: .12, k8s-master03: .13)
              virtual_router_id 51              # must be identical within the same instance
              priority 100                      # the highest priority is elected MASTER
              advert_int 2
              authentication {                  # authentication settings
                  auth_type PASS                # PASS and AH are supported
                  auth_pass K8SHA_KA_AUTH       # password
              }
              virtual_ipaddress {               # VIP(s); more than one may be listed
                  192.168.115.10
              }
              track_script {                    # track the health-check script
                  chk_apiserver
              }
          }

          192.168.115.12:

          ! Configuration File for keepalived
          global_defs {
             notification_email {
               acassen@firewall.loc
               failover@firewall.loc
               sysadmin@firewall.loc
             }
             notification_email_from Alexandre.Cassen@firewall.loc
             smtp_server 192.168.200.1
             smtp_connect_timeout 30
             router_id LVS_DEVEL
             vrrp_skip_check_adv_addr
             vrrp_strict
             vrrp_garp_interval 0
             vrrp_gna_interval 0
          }
          vrrp_script chk_apiserver {
              script "/etc/keepalived/check_apiserver.sh"    # health-check script
              interval 5                                     # check interval (seconds)
              weight -5                                      # priority adjustment
              fall 2
              rise 1
          }
          vrrp_instance VI_1 {
              state BACKUP                      # BACKUP on the standby nodes
              interface ens33                   # NIC this instance binds to
              mcast_src_ip 192.168.115.12       # advertisement source address (k8s-master01: .11, k8s-master02: .12, k8s-master03: .13)
              virtual_router_id 51              # must be identical within the same instance
              priority 100                      # the highest priority is elected MASTER
              advert_int 2
              authentication {                  # authentication settings
                  auth_type PASS                # PASS and AH are supported
                  auth_pass K8SHA_KA_AUTH       # password
              }
              virtual_ipaddress {               # VIP(s); more than one may be listed
                  192.168.115.10
              }
              track_script {                    # track the health-check script
                  chk_apiserver
              }
          }

          192.168.115.13:

          ! Configuration File for keepalived
          global_defs {
             notification_email {
               acassen@firewall.loc
               failover@firewall.loc
               sysadmin@firewall.loc
             }
             notification_email_from Alexandre.Cassen@firewall.loc
             smtp_server 192.168.200.1
             smtp_connect_timeout 30
             router_id LVS_DEVEL
             vrrp_skip_check_adv_addr
             vrrp_strict
             vrrp_garp_interval 0
             vrrp_gna_interval 0
          }
          vrrp_script chk_apiserver {
              script "/etc/keepalived/check_apiserver.sh"    # health-check script
              interval 5                                     # check interval (seconds)
              weight -5                                      # priority adjustment
              fall 2
              rise 1
          }
          vrrp_instance VI_1 {
              state BACKUP                      # BACKUP on the standby nodes
              interface ens33                   # NIC this instance binds to
              mcast_src_ip 192.168.115.13       # advertisement source address (k8s-master01: .11, k8s-master02: .12, k8s-master03: .13)
              virtual_router_id 51              # must be identical within the same instance
              priority 100                      # the highest priority is elected MASTER
              advert_int 2
              authentication {                  # authentication settings
                  auth_type PASS                # PASS and AH are supported
                  auth_pass K8SHA_KA_AUTH       # password
              }
              virtual_ipaddress {               # VIP(s); more than one may be listed
                  192.168.115.10
              }
              track_script {                    # track the health-check script
                  chk_apiserver
              }
          }

          Start the service on all three machines:

          # Reload unit files
          systemctl daemon-reload
          # Enable at boot and start immediately
          systemctl enable --now keepalived

          On the master node (.11), run `ip a show`; an extra VIP appears on the interface.
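          A narrower check for the VIP on the bound interface (ens33, per the configs above):

          ip a show ens33 | grep 192.168.115.10    # prints the VIP only on the node that currently holds it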


          # Run on any node
          [root@k8s-master01 keepalived]# curl 192.168.115.10
          192.168.115.11

          # Stop keepalived on the master node to simulate a failure:
          #   systemctl stop keepalived
          # then run `ip a show` on the other two masters; the VIP will have floated to one of them.

          # Run on any node again
          [root@k8s-master01 keepalived]# curl 192.168.115.10
          192.168.115.12
          # The response switched from .11 to .12, so keepalived failover works.

          14. Install


  • Preparation

    IP                Purpose
    192.168.115.11    k8s-master01
    192.168.115.12    k8s-master02
    192.168.115.13    k8s-master03
    192.168.115.101   k8s-node01
    192.168.115.102   k8s-node02
    192.168.115.10    VIP

    1. Install docker-registry

    Upload the docker-registry image archive to one machine; k8s-master01 is used here.
    # Load the image
    docker load -i docker-registry.tar
    # Run docker-registry
    mkdir -p /opt/software/registry-data
    docker run -d --name registry --restart=always -v /opt/software/registry-data:/var/lib/registry -p 81:5000 docker.io/registry

    # Check that it is running
    [root@k8s-master01 docker-registry]# docker ps
    CONTAINER ID   IMAGE      COMMAND                  CREATED          STATUS          PORTS                                   NAMES
    72b1ee0dd35d   registry   "/entrypoint.sh /etc…"   17 seconds ago   Up 15 seconds   0.0.0.0:81->5000/tcp, :::81->5000/tcp   registry
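    The registry can also be probed over its standard HTTP API; /v2/_catalog lists the repositories it holds (empty right after startup):

    curl http://192.168.115.11:81/v2/_catalog    # expect {"repositories":[]} on a fresh registry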


    11.2.1. Download the offline kernel upgrade package

    Download link: https://elrepo.org/linux/kernel/el7/x86_64/RPMS/

    1.3. Install the calico network component

    14.7.1. Download

    Download page: https://github.com/Mirantis/cri-dockerd/releases — pick the build matching your architecture and version; this guide uses https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.8/cri-dockerd-0.3.8-3.el7.x86_64.rpm
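    The install command itself is not shown at this point in the text; a minimal sketch, assuming the rpm above has been uploaded to every machine:

    rpm -ivh cri-dockerd-0.3.8-3.el7.x86_64.rpm
    systemctl daemon-reload
    systemctl enable --now cri-docker    # the rpm ships /usr/lib/systemd/system/cri-docker.service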

    11.2.2. Install

    Install on the three master nodes.
  • 14.5.3. Install nginx
  • # Run on all three machines
    mkdir -p /opt/software/nginx/{conf,html,cert,logs}
    # On each machine, write that machine's own IP into the index page:
    echo '192.168.115.11' > /opt/software/nginx/html/index.html
    echo '192.168.115.12' > /opt/software/nginx/html/index.html
    echo '192.168.115.13' > /opt/software/nginx/html/index.html

    Write the nginx configuration; in the upstream block, point the server entries at the three master node IPs:
    vi /opt/software/nginx/conf/nginx.conf
    # Add the following content:
    user  nginx;
    worker_processes  auto;

    error_log  /var/log/nginx/error.log notice;
    pid        /var/run/nginx.pid;

    events {
        worker_connections  1024;
    }

    stream {
        log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
        access_log  /var/log/nginx/k8s-access.log  main;
        upstream k8s-apiserver {
            server 192.168.115.11:6443;
            server 192.168.115.12:6443;
            server 192.168.115.13:6443;
        }
        server {
            listen 16443;
            proxy_pass k8s-apiserver;
        }
    }

    http {
        include       /etc/nginx/mime.types;
        default_type  application/octet-stream;
        log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                          '$status $body_bytes_sent "$http_referer" '
                          '"$http_user_agent" "$http_x_forwarded_for"';
        access_log  /var/log/nginx/access.log  main;
        sendfile        on;
        #tcp_nopush     on;
        keepalive_timeout  65;
        #gzip  on;
        include /etc/nginx/conf.d/*.conf;
    }
    Upload nginx.tar to the three master nodes and load the image:
    docker load -i nginx.tar
    Install with docker-compose (on all three master nodes).
    On 192.168.115.11:
    # Create the directory
    mkdir -p /opt/software/nginx/docker-compose
    cd /opt/software/nginx/docker-compose

    Create docker-compose.yml:

    vi docker-compose.yml
    # Add the following content:
    version: '3'
    services:
      nginx:
        image: nginx:latest
        restart: always
        hostname: nginx
        container_name: nginx
        privileged: true
        ports:
          - 80:80
          - 443:443
          - 16443:16443
        volumes:
          - /usr/share/zoneinfo/Asia/Shanghai:/etc/localtime:ro
          - /opt/software/nginx/conf/nginx.conf:/etc/nginx/nginx.conf   # main configuration file, mounted at /etc/nginx/nginx.conf
          - /opt/software/nginx/html/:/usr/share/nginx/html/            # default index page
          #- /home/admin/software/docker/nginx/cert/:/etc/nginx/cert
          - /opt/software/nginx/logs/:/var/log/nginx/                   # log files

    On 192.168.115.12:

    # Create the directory
    mkdir -p /opt/software/nginx/docker-compose
    cd /opt/software/nginx/docker-compose

    # Create docker-compose.yml
    vi docker-compose.yml
    # Add the following content:
    version: '3'
    services:
      nginx:
        image: nginx:latest
        restart: always
        hostname: nginx
        container_name: nginx
        privileged: true
        ports:
          - 80:80
          - 443:443
          - 16443:16443
        volumes:
          - /usr/share/zoneinfo/Asia/Shanghai:/etc/localtime:ro
          - /opt/software/nginx/conf/nginx.conf:/etc/nginx/nginx.conf   # main configuration file, mounted at /etc/nginx/nginx.conf
          - /opt/software/nginx/html/:/usr/share/nginx/html/            # default index page
          #- /home/admin/software/docker/nginx/cert/:/etc/nginx/cert
          - /opt/software/nginx/logs/:/var/log/nginx/                   # log files

    On 192.168.115.13:

    # Create the directory
    mkdir -p /opt/software/nginx/docker-compose
    cd /opt/software/nginx/docker-compose

    # Create docker-compose.yml
    vi docker-compose.yml
    # Add the following content:
    version: '3'
    services:
      nginx:
        image: nginx:latest
        restart: always
        hostname: nginx
        container_name: nginx
        privileged: true
        ports:
          - 80:80
          - 443:443
          - 16443:16443
        volumes:
          - /usr/share/zoneinfo/Asia/Shanghai:/etc/localtime:ro
          - /opt/software/nginx/conf/nginx.conf:/etc/nginx/nginx.conf   # main configuration file, mounted at /etc/nginx/nginx.conf
          - /opt/software/nginx/html/:/usr/share/nginx/html/            # default index page
          #- /home/admin/software/docker/nginx/cert/:/etc/nginx/cert
          - /opt/software/nginx/logs/:/var/log/nginx/                   # log files

    Start it on all three master nodes:

    # Run in the directory containing docker-compose.yml
    docker-compose up -d

    Test:
    # On each master node, run docker-compose ps in the docker-compose.yml directory
    [root@k8s-master01 docker-compose]# docker-compose ps
    NAME      IMAGE          COMMAND                  SERVICE   CREATED          STATUS          PORTS
    nginx     nginx:latest   "/docker-entrypoint.…"   nginx     13 minutes ago   Up 13 minutes   0.0.0.0:80->80/tcp, :::80->80/tcp, 0.0.0.0:443->443/tcp, :::443->443/tcp
    [root@k8s-master01 docker-compose]#

    # Test each of the three master nodes
    # On 192.168.115.11
    [root@k8s-master01 docker-compose]# curl 127.0.0.1
    192.168.115.11
    # On 192.168.115.12
    [root@k8s-master02 docker-compose]# curl 127.0.0.1
    192.168.115.12
    # On 192.168.115.13
    [root@k8s-master03 docker-compose]# curl 127.0.0.1
    192.168.115.13

    13.2. Push the images Kubernetes depends on into docker-registry

    Upload the Kubernetes dependency images to k8s-master01 and run:
    docker load -i kube-apiserver-v1.30.0.tar
    docker load -i kube-controller-manager-v1.30.0.tar
    docker load -i kube-scheduler-v1.30.0.tar
    docker load -i kube-proxy-v1.30.0.tar
    docker load -i coredns-1.11.1.tar
    docker load -i pause-3.9.tar

    Tag the images for the private registry:
    docker tag registry.aliyuncs.com/google_containers/kube-apiserver:v1.30.0 192.168.115.11:81/kube-apiserver:v1.30.0
    docker tag registry.aliyuncs.com/google_containers/kube-controller-manager:v1.30.0 192.168.115.11:81/kube-controller-manager:v1.30.0
    docker tag registry.aliyuncs.com/google_containers/kube-scheduler:v1.30.0 192.168.115.11:81/kube-scheduler:v1.30.0
    docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.30.0 192.168.115.11:81/kube-proxy:v1.30.0
    docker tag registry.aliyuncs.com/google_containers/coredns:1.11.1 192.168.115.11:81/coredns:v1.11.1
    docker tag registry.aliyuncs.com/google_containers/pause:3.9 192.168.115.11:81/pause:3.9
    On every machine, add the docker-registry address (and the k8s image registries) to /etc/docker/daemon.json:
    vi /etc/docker/daemon.json
    # Add:
    "insecure-registries": ["192.168.115.11:81", "quay.io", "k8s.gcr.io", "gcr.io"]

    [root@k8s-master02 ~]# cat /etc/docker/daemon.json
    {
      "exec-opts": ["native.cgroupdriver=systemd"],
      "insecure-registries": ["192.168.115.11:81", "quay.io", "k8s.gcr.io", "gcr.io"]
    }

    # Restart docker
    systemctl daemon-reload
    systemctl restart docker
    On k8s-master01, push the images to docker-registry:
    docker push 192.168.115.11:81/kube-apiserver:v1.30.0
    docker push 192.168.115.11:81/kube-controller-manager:v1.30.0
    docker push 192.168.115.11:81/kube-scheduler:v1.30.0
    docker push 192.168.115.11:81/kube-proxy:v1.30.0
    docker push 192.168.115.11:81/coredns:v1.11.1
    docker push 192.168.115.11:81/pause:3.9
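    To confirm the pushes landed, the registry's catalog and tag-list endpoints (standard Docker Registry HTTP API v2) can be queried from any node:

    curl http://192.168.115.11:81/v2/_catalog                    # lists all repositories
    curl http://192.168.115.11:81/v2/kube-apiserver/tags/list    # e.g. {"name":"kube-apiserver","tags":["v1.30.0"]}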

    14.5.4. Install nginx + keepalived

    keepalived + nginx provides high availability plus reverse proxying; to save servers, keepalived and nginx are deployed on the master nodes here.

    rpm -ivh *.rpm

    Start ntpd and enable it at boot:
    systemctl start ntpd
    systemctl enable ntpd

    5.4. Download gcc (already downloaded)
  • 13.2.3. Modify cri-docker to use the pause image from docker-registry
  • 14.6. Install
    keepalived provides a virtual IP (192.168.115.10) that binds to one master node at a time; nginx reverse-proxies the three master nodes.
  • 11.2. Download openssl
  • Download on a machine with internet access:
    yum -y install --downloadonly --downloaddir=/opt/software/openssl make openssl-devel libnfnetlink-devel libnl3-devel net-snmp-devel
    The downloaded rpms land in /opt/software/openssl.

    13.2.4. Master node configuration
  • 5.4.2. Download kubelet, kubeadm, kubectl
  • 14.2. The dockershim component was removed in the Kubernetes v1.24 release; however, a third-party replacement, cri-dockerd, is available.

    Table of contents

    • Preparation
    • 1. Download the images
  • Download the images on a machine with internet access:

    docker pull docker.io/calico/node:v3.27.3
    docker pull docker.io/calico/kube-controllers:v3.27.3
    docker pull docker.io/calico/cni:v3.27.3
    docker save -o calico-node.tar docker.io/calico/node:v3.27.3
    docker save -o calico-kube-controllers.tar docker.io/calico/kube-controllers:v3.27.3
    docker save -o calico-cni.tar docker.io/calico/cni:v3.27.3
    # If the pulls are difficult, download from GitHub instead: https://github.com/projectcalico/calico/releases/tag/v3.27.3 — choose release-v3.27.3.tgz, extract it, and find the three images in the images directory

    Download calico.yaml: https://github.com/projectcalico/calico/blob/v3.27.3/manifests/calico.yaml

    14.7.2. Modify cri-docker to use the pause image from docker-registry

    Run on every machine:
    vi /usr/lib/systemd/system/cri-docker.service
    # Change
    #   --pod-infra-container-image=registry.k8s.io/pause:3.9
    # to
    #   --pod-infra-container-image=192.168.115.11:81/pause:3.9

    # Restart cri-docker
    systemctl daemon-reload
    systemctl restart cri-docker
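    To confirm the edit took effect:

    grep pod-infra-container-image /usr/lib/systemd/system/cri-docker.service   # should show 192.168.115.11:81/pause:3.9
    systemctl is-active cri-docker                                              # should print active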

    14.6. The complete NTP configuration:
    # For more information about this file, see the man pages
    # ntp.conf(5), ntp_acc(5), ntp_auth(5), ntp_clock(5), ntp_misc(5), ntp_mon(5).
    driftfile /var/lib/ntp/drift

    # Permit time synchronization with our time source, but do not
    # permit the source to query or modify the service on this system.
    restrict default nomodify notrap nopeer noquery

    # Permit all access over the loopback interface.  This could
    # be tightened as well, but to do so would effect some of
    # the administrative functions.
    restrict 127.0.0.1
    restrict ::1

    # Hosts on local network are less restricted.
    # Allow other machines on the LAN to sync time from this host; without this
    # restriction, the default would allow any IP to reach the sync service.
    # This serving setting is recommended off for NTP clients and on for the NTP server.
    #restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
    restrict 192.168.115.0 mask 255.255.255.0 nomodify notrap

    # Use public servers from the pool.ntp.org project.
    # Please consider joining the pool (http://www.pool.ntp.org/join.html).
    #server 0.centos.pool.ntp.org iburst
    #server 1.centos.pool.ntp.org iburst
    #server 2.centos.pool.ntp.org iburst
    #server 3.centos.pool.ntp.org iburst

    # Synchronize with upstream reference time servers
    server 210.72.145.44   # National Time Service Center (China)
    server 133.100.11.8    # Fukuoka University (Japan)
    server 0.cn.pool.ntp.org
    server 1.cn.pool.ntp.org
    server 2.cn.pool.ntp.org
    server 3.cn.pool.ntp.org

    # Allow the upstream time servers to actively adjust the time of this
    # host (the LAN NTP server)
    restrict 210.72.145.44 nomodify notrap noquery
    restrict 133.100.11.8 nomodify notrap noquery
    restrict 0.cn.pool.ntp.org nomodify notrap noquery
    restrict 1.cn.pool.ntp.org nomodify notrap noquery
    restrict 2.cn.pool.ntp.org nomodify notrap noquery
    restrict 3.cn.pool.ntp.org nomodify notrap noquery

    # Make sure localhost has full rights, using syntax with no restriction keywords
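    After restarting ntpd, synchronization can be checked with the standard ntp tools; the st column in ntpq is the stratum, which under normal conditions falls in the range 0–15 (16 means unsynchronized):

    ntpq -p    # lists peers; a leading '*' marks the currently selected time source
    ntpstat    # summarizes whether the local clock is synchronized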
  • 5.2. Download keepalived
  • 13.2.2. Download
  • Download link:
    https://github.com/coreos/etcd/releases/download/v3.5.11/etcd-v3.5.11-linux-amd64.tar.gz

    Extract and move the binaries to /usr/local/bin:
    tar xzvf etcd-v3.5.11-linux-amd64.tar.gz
    cd etcd-v3.5.11-linux-amd64/
    mv etcd* /usr/local/bin
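    A quick check that the binaries are on the PATH (the archive ships both etcd and etcdctl, which the mv above moved together):

    etcd --version     # should report version 3.5.11
    etcdctl version    # client and API version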

    10.2. Disable the firewall
  • 4. Upload libopts to each machine and run the install command. Install ipvsadm:
  • The image used for this installation already has ipset installed, so it is not reinstalled; only ipvsadm is installed.

    7.1. Push the images Kubernetes depends on into docker-registry
  • 14.5.4. k8s-master01:
    cat > /usr/lib/systemd/system/etcd.service <<EOF
    [Unit]
    Description=Etcd Server
    After=network.target

    [Service]
    Type=notify
    ExecStart=/usr/local/bin/etcd \
      --name=k8s-master01 \
      --data-dir=/var/lib/etcd/default.etcd \
      --listen-peer-urls=http://192.168.115.11:2380 \
      --listen-client-urls=http://192.168.115.11:2379,http://127.0.0.1:2379 \
      --advertise-client-urls=http://192.168.115.11:2379 \
      --initial-advertise-peer-urls=http://192.168.115.11:2380 \
      --initial-cluster=k8s-master01=http://192.168.115.11:2380,k8s-master02=http://192.168.115.12:2380,k8s-master03=http://192.168.115.13:2380 \
      --initial-cluster-token=smartgo \
      --initial-cluster-state=new
    Restart=on-failure
    LimitNOFILE=65536

    [Install]
    WantedBy=multi-user.target
    EOF

    k8s-master02:

    cat > /usr/lib/systemd/system/etcd.service <<EOF
    [Unit]
    Description=Etcd Server
    After=network.target

    [Service]
    Type=notify
    ExecStart=/usr/local/bin/etcd \
      --name=k8s-master02 \
      --data-dir=/var/lib/etcd/default.etcd \
      --listen-peer-urls=http://192.168.115.12:2380 \
      --listen-client-urls=http://192.168.115.12:2379,http://127.0.0.1:2379 \
      --advertise-client-urls=http://192.168.115.12:2379 \
      --initial-advertise-peer-urls=http://192.168.115.12:2380 \
      --initial-cluster=k8s-master01=http://192.168.115.11:2380,k8s-master02=http://192.168.115.12:2380,k8s-master03=http://192.168.115.13:2380 \
      --initial-cluster-token=smartgo \
      --initial-cluster-state=new
    Restart=on-failure
    LimitNOFILE=65536

    [Install]
    WantedBy=multi-user.target
    EOF

    k8s-master03:

    cat > /usr/lib/systemd/system/etcd.service <<EOF
    [Unit]
    Description=Etcd Server
    After=network.target

    [Service]
    Type=notify
    ExecStart=/usr/local/bin/etcd \
      --name=k8s-master03 \
      --data-dir=/var/lib/etcd/default.etcd \
      --listen-peer-urls=http://192.168.115.13:2380 \
      --listen-client-urls=http://192.168.115.13:2379,http://127.0.0.1:2379 \
      --advertise-client-urls=http://192.168.115.13:2379 \
      --initial-advertise-peer-urls=http://192.168.115.13:2380 \
      --initial-cluster=k8s-master01=http://192.168.115.11:2380,k8s-master02=http://192.168.115.12:2380,k8s-master03=http://192.168.115.13:2380 \
      --initial-cluster-token=smartgo \
      --initial-cluster-state=new
    Restart=on-failure
    LimitNOFILE=65536

    [Install]
    WantedBy=multi-user.target
    EOF
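    The start-up step is not shown at this point in the text; a minimal sketch, run on each of the three masters after writing its unit file, then verified from any one of them with etcdctl (moved to /usr/local/bin earlier):

    systemctl daemon-reload
    systemctl enable --now etcd

    # From any master, check cluster membership and per-endpoint health
    etcdctl --endpoints=http://192.168.115.11:2379,http://192.168.115.12:2379,http://192.168.115.13:2379 member list
    etcdctl --endpoints=http://192.168.115.11:2379,http://192.168.115.12:2379,http://192.168.115.13:2379 endpoint health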

    10.3. registry.k8s.io needs a proxy to reach from inside China, so pull from the Aliyun mirror instead:
    docker pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.30.0
    docker pull registry.aliyuncs.com/google_containers/kube-controller-manager:v1.30.0
    docker pull registry.aliyuncs.com/google_containers/kube-scheduler:v1.30.0
    docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.30.0
    docker pull registry.aliyuncs.com/google_containers/coredns:1.11.1
    docker pull registry.aliyuncs.com/google_containers/pause:3.9

    Save the docker images as tar archives for offline use:
    docker save -o kube-apiserver-v1.30.0.tar registry.aliyuncs.com/google_containers/kube-apiserver:v1.30.0
    docker save -o kube-controller-manager-v1.30.0.tar registry.aliyuncs.com/google_containers/kube-controller-manager:v1.30.0
    docker save -o kube-scheduler-v1.30.0.tar registry.aliyuncs.com/google_containers/kube-scheduler:v1.30.0
    docker save -o kube-proxy-v1.30.0.tar registry.aliyuncs.com/google_containers/kube-proxy:v1.30.0
    docker save -o coredns-1.11.1.tar registry.aliyuncs.com/google_containers/coredns:1.11.1
    docker save -o pause-3.9.tar registry.aliyuncs.com/google_containers/pause:3.9

    14.5. Download the images Kubernetes depends on at runtime

    On a machine with internet access (one that already has docker installed), list the images Kubernetes 1.30 depends on:
    [root@k8s-master01 ~]# kubeadm config images list
    registry.k8s.io/kube-apiserver:v1.30.0
    registry.k8s.io/kube-controller-manager:v1.30.0
    registry.k8s.io/kube-scheduler:v1.30.0
    registry.k8s.io/kube-proxy:v1.30.0
    registry.k8s.io/coredns/coredns:v1.11.1
    registry.k8s.io/pause:3.9
    registry.k8s.io/etcd:3.5.12-0
    The etcd image does not need downloading: etcd was installed earlier, so it is not deployed as an image here.

    Upgrade the kernel
  • Upload the downloaded kernel packages and run:

    rpm -ivh *.rpm --nodeps --force

    List the grub menu entries:
    awk -F\' '$1=="menuentry " {print $2}' /etc/grub2.cfg

    Edit /etc/default/grub:
    change GRUB_DEFAULT=saved to GRUB_DEFAULT=0 (the newly installed kernel is the first menu entry listed above), then save and exit.
    Regenerate the grub configuration:
    grub2-mkconfig -o /boot/grub2/grub.cfg


    Reboot the machine:
    reboot
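    After the reboot, confirm the machine came up on the new kernel (it should match the version verified in 11.1.2):

    uname -r    # expect 5.4.273-1.el7.elrepo.x86_64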

    1.4. Download
  • 10.2. Install nginx
  • 13.1.1. Install

    Install on every machine.
    Upload the downloaded packages to each virtual machine:

    rpm -ivh *.rpm

    Start docker:
    systemctl daemon-reload           # reload unit configuration files
    systemctl start docker            # start Docker
    systemctl enable docker.service   # enable at boot

    Check the docker version:
    [root@k8s-master01 docker-ce]# docker --version
    Docker version 25.0.5, build 5dc9bcc
    [root@k8s-master01 docker-ce]#

    11.2. Install openssl

    Run the installation on all three master nodes.
    Upload the openssl packages to the three master nodes and install:

    rpm -Uvh --force *.rpm
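    A quick sanity check after the upgrade:

    openssl version    # prints the installed OpenSSL version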

    13.2.6. Install

    Upload the calico tar archives and calico.yaml to k8s-master01:

    docker load -i calico-cni.tar
    docker load -i calico-kube-controllers.tar
    docker load -i calico-node.tar
    docker tag calico/node:v3.27.3 192.168.115.11:81/calico/node:v3.27.3
    docker tag calico/kube-controllers:v3.27.3 192.168.115.11:81/calico/kube-controllers:v3.27.3
    docker tag docker.io/calico/cni:v3.27.3 192.168.115.11:81/calico/cni:v3.27.3
    docker push 192.168.115.11:81/calico/node:v3.27.3
    docker push 192.168.115.11:81/calico/kube-controllers:v3.27.3
    docker push 192.168.115.11:81/calico/cni:v3.27.3
    Upload calico.yaml to one master node and change every image reference in it to the three images in 192.168.115.11:81: 192.168.115.11:81/calico/node:v3.27.3, 192.168.115.11:81/calico/kube-controllers:v3.27.3, and 192.168.115.11:81/calico/cni:v3.27.3. Also change the pod network so the value matches the podSubnet in kubeadm-config.yaml:
    - name: CALICO_IPV4POOL_CIDR
      value: "10.244.0.0/16"
    Start calico:
    kubectl apply -f calico.yaml
    After a few minutes, check the calico pods; they should all be Running:
    [root@k8s-master01 calico]# kubectl get pods -n kube-system
    NAME                                       READY   STATUS    RESTARTS   AGE
    calico-kube-controllers-5f87f7fc98-84wpm   1/1     Running   0          2m55s
    calico-node-bxns7                          1/1     Running   0          2m55s
    calico-node-dpvhb                          1/1     Running   0          2m55s
    calico-node-gzncb                          1/1     Running   0          2m55s
    calico-node-j62nt                          1/1     Running   0          2m55s
    calico-node-np695                          1/1     Running   0          2m55s
    coredns-7b9565c6c-f865r                    1/1     Running   0          104m
    coredns-7b9565c6c-g9df5                    1/1     Running   0          104m
    kube-apiserver-k8s-master01                1/1     Running   10         105m
    kube-apiserver-k8s-master02                1/1     Running   0          98m
    kube-apiserver-k8s-master03                1/1     Running   0          89m
    kube-controller-manager-k8s-master01       1/1     Running   4          105m
    kube-controller-manager-k8s-master02       1/1     Running   0          98m
    kube-controller-manager-k8s-master03       1/1     Running   0          89m
    kube-proxy-2j9t2                           1/1     Running   0          89m
    kube-proxy-4l48v                           1/1     Running   0          81m
    kube-proxy-cf4mb                           1/1     Running   0          104m
    kube-proxy-gs2ph                           1/1     Running   0          81m
    kube-proxy-lgtxw                           1/1     Running   0          98m
    kube-scheduler-k8s-master01                1/1     Running   4          105m
    kube-scheduler-k8s-master02                1/1     Running   0          98m
    kube-scheduler-k8s-master03                1/1     Running   0          89m
    Check the node status; all nodes are Ready:
    [root@k8s-master01 calico]# kubectl get node
    NAME           STATUS   ROLES           AGE    VERSION
    k8s-master01   Ready    control-plane   106m   v1.30.0
    k8s-master02   Ready    control-plane   99m    v1.30.0
    k8s-master03   Ready    control-plane   90m    v1.30.0
    k8s-node01     Ready    <none>          82m    v1.30.0
    k8s-node02     Ready    <none>          82m    v1.30.0
    [root@k8s-master01 calico]#