1. Deploy the Ansible cluster
A simple Ansible cluster setup using a Python script - CSDN blog
2. Build a Kubernetes cluster with Ansible commands
1. Host planning
Node      IP address        OS           Spec
server    192.168.174.150   CentOS 7.9   2 GB RAM, 2 CPU cores
client1   192.168.174.151   CentOS 7.9   2 GB RAM, 2 CPU cores
client2   192.168.174.152   CentOS 7.9   2 GB RAM, 2 CPU cores
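The later steps address the machines by hostname, so each name must resolve on every node. If DNS does not already provide this, one option (not part of the original flow, and assuming the control node itself can already reach the hosts, e.g. via its own /etc/hosts) is to push entries with lineinfile; the addresses come from the table above:
ansible clients_all -m lineinfile -a "path=/etc/hosts line='192.168.174.150 server'"
ansible clients_all -m lineinfile -a "path=/etc/hosts line='192.168.174.151 client1'"
ansible clients_all -m lineinfile -a "path=/etc/hosts line='192.168.174.152 client2'"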
The Ansible inventory file contains the following:
[clients_all]
server
client1
client2
[clients_master]
server
[clients_client]
client1
client2
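Before touching the nodes, it is worth confirming that every host in the inventory answers; Ansible's built-in ping module does exactly that. All three hosts should reply with "pong":
ansible clients_all -m ping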
2. Configure yum repositories
Configure the local yum repo:
# src is the device and path the mount point; with state=mounted the device is mounted and recorded (if something else is already mounted at the path, it is unmounted and remounted)
ansible clients_all -m mount -a "src=/dev/cdrom path=/mnt/cdrom fstype=iso9660 opts=defaults state=mounted"
# path is the file to edit, line is the line to add; insertafter=EOF appends at the end of the file, BOF prepends at the top
ansible clients_all -m lineinfile -a "path=/etc/fstab line='/dev/cdrom /mnt/cdrom iso9660 defaults 0 0' insertafter=EOF"
ansible clients_all -m shell -a "echo > /etc/yum.repos.d/centos-local.repo"
# path is the file to edit; block is the text to add (separate lines with \n); create=yes creates the file if missing; marker sets the begin/end tags, so re-running replaces the block instead of appending
ansible clients_all -m blockinfile -a "path=/etc/yum.repos.d/centos-local.repo block='[centos7.9]\nname=centos7.9\nbaseurl=file:///mnt/cdrom\nenabled=1\ngpgcheck=0' create=yes marker='#{mark} centos7.9'"
ansible clients_all -m shell -a "yum clean all && yum repolist"
Configure the remote Aliyun repo:
ansible clients_all -m yum -a "name=wget"
ansible clients_all -m get_url -a "dest=/etc/yum.repos.d/CentOS-Base.repo url=http://mirrors.aliyun.com/repo/Centos-7.repo"
ansible clients_all -m shell -a "yum clean all && yum repolist"
Configure the EPEL extension repo:
ansible clients_all -m yum -a "name=epel-release"
ansible clients_all -m shell -a "yum clean all && yum repolist"
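As an alternative to templating the repo file with blockinfile, the same local repo can be declared with Ansible's yum_repository module; a minimal sketch, with the repo id and description mirroring the file above:
ansible clients_all -m yum_repository -a "name=centos7.9 description=centos7.9 baseurl=file:///mnt/cdrom enabled=yes gpgcheck=no"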
3. Install basic tools
ansible clients_all -m yum -a "name=bash-completion,vim,net-tools,tree,psmisc,lrzsz,dos2unix"
4. Disable the firewall and SELinux
Disable SELinux (the change takes effect after the reboot in step 8):
ansible clients_all -m selinux -a "state=disabled"
Disable the iptables and firewalld services. Kubernetes and Docker generate large numbers of iptables rules at runtime; to keep those from getting tangled up with the system's own rules, shut the system services off:
ansible clients_all -m service -a "name=firewalld state=stopped enabled=false"
# iptables may not be installed as a service
ansible clients_all -m service -a "name=iptables state=stopped enabled=false"
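A quick verification pass (note that getenforce only reports Disabled after the reboot in step 8; the head pipe keeps the exit code zero even when the unit is dead):
ansible clients_all -m shell -a "getenforce"
ansible clients_all -m shell -a "systemctl status firewalld | head -n 3"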
5. Time synchronization
On all nodes:
ansible clients_all -m yum -a "name=chrony"
ansible clients_all -m service -a "name=chronyd state=restarted enabled=true"
Edit chrony.conf on the master node:
ansible clients_master -m lineinfile -a "path=/etc/chrony.conf regexp='^#allow 192.168.0.0\/16' line='allow 192.168.174.0/24' backrefs=yes"
ansible clients_master -m lineinfile -a "path=/etc/chrony.conf regexp='^#local stratum 10' line='local stratum 10' backrefs=yes"
Edit chrony.conf on the node machines (drop the default servers, then point them at the master):
ansible clients_client -m lineinfile -a "path=/etc/chrony.conf regexp='^server' state=absent"
ansible clients_client -m lineinfile -a "path=/etc/chrony.conf line='server 192.168.174.150 iburst' insertbefore='^.*\bline2\b.*$'"
On all nodes:
ansible clients_all -m service -a "name=chronyd state=restarted enabled=true"
ansible clients_all -m shell -a "timedatectl set-ntp true"
Check:
[root@server ~]# ansible clients_client -m shell -a "chronyc sources -v"
client2 | CHANGED | rc=0 >>
210 Number of sources = 1

  .-- Source mode  '^' = server, '=' = peer, '#' = local clock.
 / .- Source state '*' = current synced, '+' = combined , '-' = not combined,
| /   '?' = unreachable, 'x' = time may be in error, '~' = time too variable.
||                                                 .- xxxx [ yyyy ] +/- zzzz
||      Reachability register (octal) -.           |  xxxx = adjusted offset,
||      Log2(Polling interval) --.      |          |  yyyy = measured offset,
||                                \     |          |  zzzz = estimated error.
||                                 |    |           \
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* server                        3   6    17     6   +918us[+4722us] +/-  217ms

client1 | CHANGED | rc=0 >>
210 Number of sources = 1

  .-- Source mode  '^' = server, '=' = peer, '#' = local clock.
 / .- Source state '*' = current synced, '+' = combined , '-' = not combined,
| /   '?' = unreachable, 'x' = time may be in error, '~' = time too variable.
||                                                 .- xxxx [ yyyy ] +/- zzzz
||      Reachability register (octal) -.           |  xxxx = adjusted offset,
||      Log2(Polling interval) --.      |          |  yyyy = measured offset,
||                                \     |          |  zzzz = estimated error.
||                                 |    |           \
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* server                        3   6    17     6   +961us[+4856us] +/-  217ms
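For a more compact view, chronyc tracking summarizes each node's offset against its source; a quick optional check:
ansible clients_client -m shell -a "chronyc tracking"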
6. Disable the swap partition:
# backrefs=yes: if the regexp does not match any line, the file is left unchanged
ansible clients_all -m lineinfile -a "path=/etc/fstab regexp='^\/dev\/mapper\/centos-swap' line='#/dev/mapper/centos-swap swap swap defaults 0 0' backrefs=yes"
7. Adjust the Linux kernel parameters:
Edit /etc/sysctl.d/kubernetes.conf to enable bridge filtering and IP forwarding.
# path is the file to edit; block is the text to add (separate lines with \n); create=yes creates the file if missing; marker sets the begin/end tags, so re-running replaces the block instead of appending
ansible clients_all -m blockinfile -a "path=/etc/sysctl.d/kubernetes.conf block='net.bridge.bridge-nf-call-ip6tables = 1\nnet.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1' create=yes marker='#{mark} kubernetes'"
Load the bridge filter module first (the net.bridge.* sysctl keys only exist once it is loaded):
ansible clients_all -m shell -a "modprobe br_netfilter"
Reload the configuration:
ansible clients_all -m shell -a "sysctl -p /etc/sysctl.d/kubernetes.conf"
Check that the bridge filter module is loaded:
ansible clients_all -m shell -a "lsmod | grep br_netfilter"
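A modprobe by hand does not survive the reboot performed in step 8. One standard way to make it persistent is systemd's modules-load.d mechanism; a minimal sketch (the file name here is my choice):
ansible clients_all -m copy -a "content='br_netfilter' dest=/etc/modules-load.d/br_netfilter.conf"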
8. Configure IPVS
Kubernetes services support two proxy modes, one based on iptables and one based on IPVS. IPVS performs noticeably better, but using it requires loading the IPVS kernel modules manually.
Install ipset and ipvsadm:
ansible clients_all -m yum -a "name=ipset,ipvsadm"
Write the modules to load into a script file:
ansible clients_all -m blockinfile -a "path=/etc/sysconfig/modules/ipvs.modules block='#!/bin/bash\nmodprobe -- ip_vs\nmodprobe -- ip_vs_rr\nmodprobe -- ip_vs_wrr\nmodprobe -- ip_vs_sh\nmodprobe -- nf_conntrack_ipv4' create=yes marker='#{mark} ipvs'"
Make the script executable:
ansible clients_all -m file -a "path=/etc/sysconfig/modules/ipvs.modules mode=0755"
Run the script:
ansible clients_all -m shell -a "bash /etc/sysconfig/modules/ipvs.modules"
Check that the modules are loaded:
ansible clients_all -m shell -a "lsmod | grep -e ip_vs -e nf_conntrack_ipv4"
Reboot the machines:
ansible clients_all -m reboot

9. Install Docker
Add the Docker repo locally:
ansible clients_all -m get_url -a "dest=/etc/yum.repos.d/docker-ce.repo url=http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo"
Then run:
ansible clients_all -m shell -a "yum install -y --setopt=obsoletes=0 docker-ce-18.06.3.ce-3.el7"
Modify the configuration files:
ansible clients_all -m file -a "path=/etc/docker state=directory"
# Docker uses the cgroupfs cgroup driver by default, while Kubernetes recommends systemd
# Write the files on the control node, then distribute them with the copy module:
mkdir -p /etc/docker
cat <<EOF > /etc/docker/daemon.json
{
  "storage-driver": "devicemapper",
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://ja9e22yz.mirror.aliyuncs.com"]
}
EOF
ansible clients_all -m copy -a "src=/etc/docker/daemon.json dest=/etc/docker/daemon.json"
cat <<EOF > /etc/sysconfig/docker
OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false'
EOF
ansible clients_all -m copy -a "src=/etc/sysconfig/docker dest=/etc/sysconfig/docker"
Restart Docker and enable it at boot:
ansible clients_all -m service -a "name=docker state=restarted enabled=true"
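After the restart, it is worth confirming that Docker actually picked up the systemd cgroup driver; filtering docker info is a simple check (the grep pattern is just a filter, and the output should include "Cgroup Driver: systemd"):
ansible clients_all -m shell -a "docker info | grep -i cgroup"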
10. Install the Kubernetes components
Configure the Kubernetes yum repo (written on the control node, then pushed out with copy):
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
ansible clients_all -m copy -a "src=/etc/yum.repos.d/kubernetes.repo dest=/etc/yum.repos.d/kubernetes.repo"
Install kubeadm, kubelet, and kubectl. What each component does: kubeadm bootstraps the Kubernetes cluster; kubelet manages container lifecycles, creating, updating, and destroying containers through Docker; kubectl is the command-line tool for talking to the cluster.
ansible clients_all -m shell -a "yum install --setopt=obsoletes=0 kubeadm-1.17.4-0 kubelet-1.17.4-0 kubectl-1.17.4-0 -y"
Edit /etc/sysconfig/kubelet to set the kubelet cgroup driver:
cat <<EOF > /etc/sysconfig/kubelet
KUBELET_CGROUP_ARGS="--cgroup-driver=systemd"
KUBE_PROXY_MODE="ipvs"
EOF
ansible clients_all -m copy -a "src=/etc/sysconfig/kubelet dest=/etc/sysconfig/kubelet"
Start kubelet and enable it at boot:
ansible clients_all -m service -a "name=kubelet state=started enabled=true"
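To confirm that every node received matching component versions, the standard version flags of kubeadm and kubelet can be queried:
ansible clients_all -m shell -a "kubeadm version -o short && kubelet --version"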
11. Prepare the cluster images
Before the Kubernetes cluster can be installed, the images it needs must be available. The required images can be listed with:
ansible clients_all -m shell -a "kubeadm config images list"
These images live in the Kubernetes registry, which is unreachable for network reasons, so the playbook below provides an alternative: pull them from an Aliyun mirror, retag them, and remove the originals.
cat <<EOF > kubernetes_images_install.yaml
---
- hosts: clients_all
  gather_facts: no
  vars:
    images:
      - kube-apiserver:v1.17.4
      - kube-controller-manager:v1.17.4
      - kube-scheduler:v1.17.4
      - kube-proxy:v1.17.4
      - pause:3.1
      - etcd:3.4.3-0
      - coredns:1.6.5
  tasks:
    - name: Pull the images
      shell: docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/{{ item }}
      with_items: "{{ images }}"
    - name: Retag the images for k8s.gcr.io
      shell: docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/{{ item }} k8s.gcr.io/{{ item }}
      with_items: "{{ images }}"
    - name: Remove the mirror-tagged images
      shell: docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/{{ item }}
      with_items: "{{ images }}"
EOF
ansible-playbook kubernetes_images_install.yaml
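Once the playbook finishes, each node should hold the retagged images; a quick way to verify (the grep simply filters the repository column):
ansible clients_all -m shell -a "docker images | grep k8s.gcr.io"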
12. Initialize the cluster
On the master node, create the cluster:
# The cluster entry point is the master node
# On success, the output includes the command for joining node machines to the cluster
ansible clients_master -m shell -a "kubeadm init \
  --kubernetes-version=v1.17.4 \
  --pod-network-cidr=10.244.0.0/16 \
  --service-cidr=10.96.0.0/12 \
  --apiserver-advertise-address=192.168.174.150" | grep "kubeadm join"
kubeadm join 192.168.174.150:6443 --token 2pmmsi.xv4534qap5pf3bjv \
    --discovery-token-ca-cert-hash sha256:69715f25a2e7795f4642afeb8f88c800e601cb1624b819180e820702885b5eef
Create the required kubeconfig files:
ansible clients_master -m file -a "path=$HOME/.kube state=directory"
ansible clients_master -m shell -a "cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
ansible clients_master -m file -a "path=$HOME/.kube/config state=touch owner=$(id -u) group=$(id -g)"
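The token embedded in the join command is only valid for 24 hours by default; if it expires or the command gets lost, a fresh one can be printed on the master with the standard kubeadm subcommand:
ansible clients_master -m shell -a "kubeadm token create --print-join-command"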
On the node machines, join them to the cluster (the join command is cluster-specific; take it from the kubeadm init output on the master):
ansible clients_client -m shell -a "kubeadm join 192.168.174.150:6443 --token 2pmmsi.xv4534qap5pf3bjv --discovery-token-ca-cert-hash sha256:69715f25a2e7795f4642afeb8f88c800e601cb1624b819180e820702885b5eef"
On the master node, check the cluster status. The nodes show NotReady because no network plugin has been configured yet:
[root@server ~]# ansible clients_master -m shell -a "kubectl get nodes"
server | CHANGED | rc=0 >>
NAME      STATUS     ROLES    AGE   VERSION
client1   NotReady   <none>   14m   v1.17.4
client2   NotReady   <none>   14m   v1.17.4
server    NotReady   master   23m   v1.17.4
13. Install a network plugin
Kubernetes supports several network plugins, such as flannel, calico, and canal; any one of them will do. This walkthrough uses flannel.
The steps below only need to run on the master node; the plugin is deployed as a DaemonSet controller, so it runs on every node.
Fetch the flannel manifest:
ansible clients_master -m get_url -a "dest=./ url=https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml"
Deploy the flannel network:
ansible clients_master -m shell -a "kubectl apply -f kube-flannel.yml"
After a minute or so, check again; once every node shows Ready, the network is up:
[root@server ~]# ansible clients_master -m shell -a "kubectl get nodes"
server | CHANGED | rc=0 >>
NAME      STATUS   ROLES    AGE   VERSION
client1   Ready    <none>   20m   v1.17.4
client2   Ready    <none>   20m   v1.17.4
server    Ready    master   29m   v1.17.4
Check that every pod has reached Running:
[root@server ~]# ansible clients_master -m shell -a "kubectl get pod --all-namespaces"
server | CHANGED | rc=0 >>
NAMESPACE      NAME                             READY   STATUS    RESTARTS   AGE
kube-flannel   kube-flannel-ds-h7d2z            1/1     Running   0          3m3s
kube-flannel   kube-flannel-ds-hht48            1/1     Running   0          3m3s
kube-flannel   kube-flannel-ds-lk7qd            1/1     Running   0          3m3s
kube-system    coredns-6955765f44-4vg95         1/1     Running   0          29m
kube-system    coredns-6955765f44-kkndx         1/1     Running   0          29m
kube-system    etcd-server                      1/1     Running   0          29m
kube-system    kube-apiserver-server            1/1     Running   0          29m
kube-system    kube-controller-manager-server   1/1     Running   0          29m
kube-system    kube-proxy-7x47c                 1/1     Running   0          29m
kube-system    kube-proxy-pxx4l                 1/1     Running   0          21m
kube-system    kube-proxy-v54j6                 1/1     Running   0          21m
kube-system    kube-scheduler-server            1/1     Running   0          29m
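As a last sanity check, a throwaway deployment shows that scheduling and networking work end to end; the deployment name and nginx image here are arbitrary choices, and the deployment can be deleted afterwards:
ansible clients_master -m shell -a "kubectl create deployment nginx --image=nginx"
ansible clients_master -m shell -a "kubectl get pods -o wide"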