Settled on the 14-inch MacBook Pro; looking for an external monitor

I went back and forth for a long time between the 14-inch (10-core CPU / 14-core GPU / 32 GB / 512 GB) and the 16-inch (10-core CPU / 16-core GPU / 32 GB / 512 GB). I want the big screen, but 2.1 kg is heavy. After pricing it out, with the education discount plus AppleCare+ the gap is nearly 3,000 yuan, so in the end I picked the 14-inch plus an external monitor: portable, and I still get a big screen. Any recommendations for a monitor that suits a MacBook Pro, within 3,000 yuan (cheaper is better)? It's for writing code and watching movies, no gaming. Does that mean I can ignore refresh rate and the like?

Idea: short videos where a programmer fixes mainstream apps' pain points

Short videos are unquestionably the hottest thing on the market right now. So can programming be combined with short video? My idea is a video series: for each mainstream app, take its usability flaws, propose my own fixes, and actually implement them! Not re-implementing a whole app, of course; implementing just the features that address the flaws, enough to carry a video, is fine! All thoughts on feasibility are welcome!
Platform: Bilibili. Style reference: 苏星河牛通.
Example title: "A month of crunch later, I rebuilt WeChat"
WeChat's flaws:
1. Voice messages cannot be sped up and have no progress bar
2. Chat history cannot be bulk-deleted or bulk-marked as read
3. Group messages cannot be muted completely
4. Deleting a chat also deletes its history
5. xxxxx
6. ………
Fixes:
1. Voice messages play at adjustable speed, with a progress bar
2. See Telegram
3. See Telegram
4. Prompt before deleting, asking whether to delete the chat history too
5. xxxxx
6. ………

Prometheus Operator: managing monitoring targets with ServiceMonitor

Managing monitoring configuration with ServiceMonitor

Changing monitoring configuration is one of the most common day-to-day operations with Prometheus. To manage Prometheus configuration automatically, Prometheus Operator defines a custom resource type, ServiceMonitor, that describes the monitoring targets. First deploy a sample application into the cluster: save the following to example-app.yaml and create it with the kubectl command-line tool:
cat example-app.yaml
kind: Service
apiVersion: v1
metadata:
  name: example-app
  labels:
    app: example-app
spec:
  selector:
    app: example-app
  ports:
  - name: web
    port: 8080
    targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  selector:
    matchLabels:
      app: example-app
  replicas: 3
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: example-app
        image: fabxc/instrumented_app
        ports:
        - name: web
          containerPort: 8080
Apply the yaml file:
kubectl apply -f example-app.yaml
The sample application creates three Pod replicas via the Deployment and exposes access to the application through the Service.
kubectl get pods
The output looks like this:
NAME                          READY   STATUS    RESTARTS   AGE
example-app-bb759dfcc-7njwm   1/1     Running   0          110s
example-app-bb759dfcc-8sl77   1/1     Running   0          110s
example-app-bb759dfcc-ckjqf   1/1     Running   0          110s
Accessing the Service locally at :8080/metrics, the sample application returns the following samples:
[root@master prometheus-operator]# curl 10.233.11.186:8080/metrics
# HELP codelab_api_http_requests_in_progress The current number of API HTTP requests in progress.
# TYPE codelab_api_http_requests_in_progress gauge
codelab_api_http_requests_in_progress 0
# HELP codelab_api_request_duration_seconds A histogram of the API HTTP request durations in seconds.
# TYPE codelab_api_request_duration_seconds histogram
codelab_api_request_duration_seconds_bucket{method="GET",path="/api/bar",status="200",le="0.0001"} 0
codelab_api_request_duration_seconds_bucket{method="GET",path="/api/bar",status="200",le="0.00015000000000000001"} 0
codelab_api_request_duration_seconds_bucket{method="GET",path="/api/bar",status="200",le="0.00022500000000000002"} 0
codelab_api_request_duration_seconds_bucket{method="GET",path="/api/bar",status="200",le="0.0003375"} 0
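Since codelab_api_request_duration_seconds is a histogram, these bucket samples can be turned into latency percentiles once Prometheus scrapes them; a generic PromQL sketch, not part of the original walkthrough:

histogram_quantile(0.99, sum(rate(codelab_api_request_duration_seconds_bucket[5m])) by (le))

This estimates the 99th-percentile API request duration over the last five minutes.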
In the native Prometheus configuration style, letting Prometheus scrape applications deployed on Kubernetes means defining a separate job in the Prometheus config file and describing the whole service-discovery process with kubernetes_sd. With Prometheus Operator, you instead declare a ServiceMonitor object directly, as shown below:
cat example-app-service-monitor.yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-app
  namespace: monitoring
  labels:
    team: frontend
spec:
  namespaceSelector:
    matchNames:
    - default
  selector:
    matchLabels:
      app: example-app
  endpoints:
  - port: web
Here the ServiceMonitor defines what resource metrics to monitor; "- port: web" refers to the 8080 port exposed by the pods above.
# kubectl get servicemonitor -n monitoring
NAME          AGE
example-app   7m33s
The labels defined in the selector pick the Pod objects to monitor, and endpoints names the port to scrape (the port named web). By default, a ServiceMonitor and the objects it monitors must live in the same namespace.
In this example Prometheus is deployed in the monitoring namespace, while the example Service lives in default, so namespaceSelector is used to let the ServiceMonitor associate across namespaces. Save the content above to the example-app-service-monitor.yaml file and create it with kubectl:
kubectl create -f example-app-service-monitor.yaml
If you want the ServiceMonitor to match labels in any namespace, configure it as follows:
spec:
  namespaceSelector:
    any: true
If the monitored target enables BasicAuth authentication, a basicAuth block can be added under the endpoints configuration when defining the ServiceMonitor, as shown below:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-app
  namespace: monitoring
  labels:
    team: frontend
spec:
  namespaceSelector:
    matchNames:
    - default
  selector:
    matchLabels:
      app: example-app
  endpoints:
  - basicAuth:
      password:
        name: basic-auth
        key: password
      username:
        name: basic-auth
        key: user
    port: web
The basicAuth block references a Secret object named basic-auth; the credentials have to be stored in that Secret by hand:
apiVersion: v1
kind: Secret
metadata:
  name: basic-auth
data:
  password: dG9vcg==  # base64-encoded password
  user: YWRtaW4=      # base64-encoded username
type: Opaque
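Equivalently, the Secret can be generated with kubectl instead of hand-written YAML; a convenience sketch that assumes the ServiceMonitor's namespace (monitoring) and uses the decoded values from the YAML above:

kubectl -n monitoring create secret generic basic-auth \
  --from-literal=user=admin \
  --from-literal=password=toor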

 
 
Associating Prometheus with ServiceMonitor

The association between Prometheus and ServiceMonitors is defined through serviceMonitorSelector: in the Prometheus resource, labels select the ServiceMonitor objects this Prometheus instance should scrape.
Edit the Prometheus definition in prometheus-inst.yaml as shown below:
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: inst
  namespace: monitoring
spec:
  serviceMonitorSelector:
    matchLabels:
      team: frontend
  resources:
    requests:
      memory: 400Mi
Apply the Prometheus change to the cluster:
$ kubectl -n monitoring apply -f prometheus-inst.yaml
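To reach the Prometheus UI from a browser, a port-forward works; a sketch that assumes the operator's default governing Service name, prometheus-operated:

kubectl -n monitoring port-forward svc/prometheus-operated 9090:9090
# then open http://localhost:9090/targets to check the example-app endpoints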
At this point the Prometheus UI shows that the following scrape configuration has been generated:
global:
  scrape_interval: 30s
  scrape_timeout: 10s
  evaluation_interval: 30s
  external_labels:
    prometheus: monitoring/inst
    prometheus_replica: prometheus-inst-0
alerting:
  alert_relabel_configs:
  - separator: ;
    regex: prometheus_replica
    replacement: $1
    action: labeldrop
rule_files:
- /etc/prometheus/rules/prometheus-inst-rulefiles-0/*.yaml
scrape_configs:
- job_name: monitoring/example-app/0
  honor_timestamps: true
  scrape_interval: 30s
  scrape_timeout: 10s
  metrics_path: /metrics
  scheme: http
  kubernetes_sd_configs:
  - role: endpoints
    namespaces:
      names:
      - default
  relabel_configs:
  - source_labels: [__meta_kubernetes_service_label_app]
    separator: ;
    regex: example-app
    replacement: $1
    action: keep
  - source_labels: [__meta_kubernetes_endpoint_port_name]
    separator: ;
    regex: web
    replacement: $1
    action: keep
  - source_labels: [__meta_kubernetes_endpoint_address_target_kind, __meta_kubernetes_endpoint_address_target_name]
    separator: ;
    regex: Node;(.*)
    target_label: node
    replacement: ${1}
    action: replace
  - source_labels: [__meta_kubernetes_endpoint_address_target_kind, __meta_kubernetes_endpoint_address_target_name]
    separator: ;
    regex: Pod;(.*)
    target_label: pod
    replacement: ${1}
    action: replace
  - source_labels: [__meta_kubernetes_namespace]
    separator: ;
    regex: (.*)
    target_label: namespace
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_service_name]
    separator: ;
    regex: (.*)
    target_label: service
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_pod_name]
    separator: ;
    regex: (.*)
    target_label: pod
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_service_name]
    separator: ;
    regex: (.*)
    target_label: job
    replacement: ${1}
    action: replace
  - separator: ;
    regex: (.*)
    target_label: endpoint
    replacement: web
    action: replace

Switching from NetEase Cloud Music to YouTube Music: playback-mode questions

I heard YouTube Music is good, so I moved over from NetEase Cloud Music. I was used to NetEase's way of listening straight through my "My Favorite Music" list, but YTM doesn't seem to have that playback mode? If I rely on songs in the media library, YTM's own songs and uploaded songs are kept separate. If I use a playlist, newly added songs always land at the bottom. How do you all use it? Also, is there no bulk management for songs at all?


K8s Platform Setup Manual
1 Environment Notes

2 Installation Steps
2.1 Initialize the environment
Run on every server
# Edit the /etc/hosts file on every server so the machines can communicate by hostname
vi /etc/hosts

192.168.100.71 k8s-master.doone.com
192.168.90.6 k8s-slave1.doone.com
192.168.90.7 k8s-slave2.doone.com
2.2 Disable the firewall
Run on every server
systemctl stop firewalld.service # stop firewalld

systemctl disable firewalld.service # keep firewalld from starting at boot

firewall-cmd --state # check the firewall state (shows "not running" when disabled, "running" when enabled)
2.3 Disable SELinux
Run on every server
$ setenforce 0

$ vim /etc/selinux/config
SELINUX=disabled
2.4 Disable swap
Run on every server. K8s must use real memory, not swap:
$ swapoff -a

$ vim /etc/fstab
Comment out the swap partition entry and you are done.
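The same edit as a one-liner; a sketch that assumes the usual fstab layout (inspect the file afterwards to be sure):

sed -ri 's/^([^#].*[[:space:]]swap[[:space:]].*)$/#\1/' /etc/fstab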
2.5 Install the Go environment (optional)
Download the Linux Go release; after unpacking, set the environment variables:
vi /etc/profile
export GOROOT=/usr/local/go
export PATH=$GOROOT/bin:$PATH

$ source /etc/profile
2.6 Create the K8s cluster certificates
2.6.1 Install cfssl
Here we use cfssl, CloudFlare's PKI toolkit, to generate the Certificate Authority (CA) certificate and key files.
cd /usr/local/bin

wget
mv cfssl_linux-amd64 cfssl

wget
mv cfssljson_linux-amd64 cfssljson

wget
mv cfssl-certinfo_linux-amd64 cfssl-certinfo

chmod +x *
2.6.2 Create the CA certificate configuration
mkdir /opt/ssl

cd /opt/ssl

# The config.json file

vi config.json

{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}

# The csr.json file

vi csr.json

{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "ShenZhen",
      "L": "ShenZhen",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
2.6.3 Generate the CA certificate and private key
cd /opt/ssl

cfssl gencert -initca csr.json | cfssljson -bare ca
This produces three files: ca.csr, ca-key.pem, and ca.pem.
2.6.4 Distribute the certificates
# Create the certificate directory
mkdir -p /etc/kubernetes/ssl

# Copy all the pem files into the directory
cp *.pem /etc/kubernetes/ssl

# The files must be copied to every k8s machine

scp *.pem 192.168.90.6:/etc/kubernetes/ssl/

scp *.pem 192.168.90.7:/etc/kubernetes/ssl/
2.7 Install Docker
Run on every server
2.7.1 Import the yum repo
# Install yum-config-manager

yum -y install yum-utils

# Import the repo
yum-config-manager \
--add-repo \

# Refresh the repo cache
yum makecache
2.7.2 Install
yum install docker-ce -y
2.7.3 Adjust the Docker configuration
# Edit the unit file
vi /usr/lib/systemd/system/docker.service

[Unit]
Description=Docker Application Container Engine
Documentation=
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS $DOCKER_OPTS $DOCKER_DNS_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target

# Other configuration changes

mkdir -p /usr/lib/systemd/system/docker.service.d/

vi /usr/lib/systemd/system/docker.service.d/docker-options.conf

# Add the following (note: the Environment entry must stay on one line; a line break keeps it from loading)
# iptables=false leaves containers started with docker run unable to reach the network; it is used here because calico has advanced policy features that need to restrict direct container-to-container traffic.
# In general, do not add --iptables=false; add it only when calico requires it.

[Service]
Environment="DOCKER_OPTS=--insecure-registry=10.254.0.0/16 --graph=/opt/docker --registry-mirror= --disable-legacy-registry --iptables=false"
2.7.4 Reload the configuration and start Docker
systemctl daemon-reload

systemctl start docker

systemctl enable docker
2.8 Install the etcd cluster
etcd is a foundational component of the k8s cluster
2.8.1 Install etcd
Run on every server
yum -y install etcd
2.8.2 Create the etcd certificates
cd /opt/ssl/

vi etcd-csr.json

{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.100.71",
    "192.168.90.6",
    "192.168.90.7"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "ShenZhen",
      "L": "ShenZhen",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

# Generate the etcd certificate and key

cfssl gencert -ca=/opt/ssl/ca.pem \
-ca-key=/opt/ssl/ca-key.pem \
-config=/opt/ssl/config.json \
-profile=kubernetes etcd-csr.json | cfssljson -bare etcd

# Check the output

[root@k8s-master ssl]# ls etcd*
etcd.csr etcd-csr.json etcd-key.pem etcd.pem

# Copy to the etcd servers

# etcd-1
cp etcd*.pem /etc/kubernetes/ssl/

# etcd-2
scp etcd*.pem 192.168.90.6:/etc/kubernetes/ssl/

# etcd-3
scp etcd*.pem 192.168.90.7:/etc/kubernetes/ssl/

# If etcd runs as a non-root user, it will complain about missing permission to read the certificate; fix it with:

chmod 644 /etc/kubernetes/ssl/etcd-key.pem
2.8.3 Edit the etcd configuration
Edit the etcd startup unit file /usr/lib/systemd/system/etcd.service
# etcd-1

vi /usr/lib/systemd/system/etcd.service

[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
User=etcd
# set GOMAXPROCS to number of processors
ExecStart=/usr/bin/etcd \
--name=etcd1 \
--cert-file=/etc/kubernetes/ssl/etcd.pem \
--key-file=/etc/kubernetes/ssl/etcd-key.pem \
--peer-cert-file=/etc/kubernetes/ssl/etcd.pem \
--peer-key-file=/etc/kubernetes/ssl/etcd-key.pem \
--trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
--peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
--initial-advertise-peer-urls= \
--listen-peer-urls= \
--listen-client-urls= \
--advertise-client-urls= \
--initial-cluster-token=k8s-etcd-cluster \
--initial-cluster=etcd1= \
--initial-cluster-state=new \
--data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

# etcd-2

vi /usr/lib/systemd/system/etcd.service

[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
User=etcd
# set GOMAXPROCS to number of processors
ExecStart=/usr/bin/etcd \
--name=etcd2 \
--cert-file=/etc/kubernetes/ssl/etcd.pem \
--key-file=/etc/kubernetes/ssl/etcd-key.pem \
--peer-cert-file=/etc/kubernetes/ssl/etcd.pem \
--peer-key-file=/etc/kubernetes/ssl/etcd-key.pem \
--trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
--peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
--initial-advertise-peer-urls= \
--listen-peer-urls= \
--listen-client-urls= \
--advertise-client-urls= \
--initial-cluster-token=k8s-etcd-cluster \
--initial-cluster=etcd1= \
--initial-cluster-state=new \
--data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

# etcd-3

vi /usr/lib/systemd/system/etcd.service

[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
User=etcd
# set GOMAXPROCS to number of processors
ExecStart=/usr/bin/etcd \
--name=etcd3 \
--cert-file=/etc/kubernetes/ssl/etcd.pem \
--key-file=/etc/kubernetes/ssl/etcd-key.pem \
--peer-cert-file=/etc/kubernetes/ssl/etcd.pem \
--peer-key-file=/etc/kubernetes/ssl/etcd-key.pem \
--trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
--peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
--initial-advertise-peer-urls= \
--listen-peer-urls= \
--listen-client-urls= \
--advertise-client-urls= \
--initial-cluster-token=k8s-etcd-cluster \
--initial-cluster=etcd1= \
--initial-cluster-state=new \
--data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
2.8.4 Start etcd
Start the etcd service on every node:
systemctl enable etcd

systemctl start etcd

systemctl status etcd
# If it errors out, use
journalctl -f -t etcd and journalctl -u etcd to track down the problem
2.8.5 Verify the etcd cluster
Check the etcd cluster health:
etcdctl --endpoints= \
--cert-file=/etc/kubernetes/ssl/etcd.pem \
--ca-file=/etc/kubernetes/ssl/ca.pem \
--key-file=/etc/kubernetes/ssl/etcd-key.pem \
cluster-health

member 29262d49176888f5 is healthy: got healthy result from
member d4ba1a2871bfa2b0 is healthy: got healthy result from
member eca58ebdf44f63b6 is healthy: got healthy result from
cluster is healthy

List the etcd cluster members:
etcdctl --endpoints= \
--cert-file=/etc/kubernetes/ssl/etcd.pem \
--ca-file=/etc/kubernetes/ssl/ca.pem \
--key-file=/etc/kubernetes/ssl/etcd-key.pem \
member list

29262d49176888f5: name=etcd3 peerURLs= clientURLs= isLeader=false
d4ba1a2871bfa2b0: name=etcd1 peerURLs= clientURLs= isLeader=true
eca58ebdf44f63b6: name=etcd2 peerURLs= clientURLs= isLeader=false
2.9 Install the kubectl tool
Master node: 192.168.100.71
2.9.1 Install kubectl on the Master
# First install kubectl

wget
(if the connection fails, download the binary release directly from GitHub)

tar -xzvf kubernetes-client-linux-amd64.tar.gz

cp kubernetes/client/bin/* /usr/local/bin/

chmod a+x /usr/local/bin/kube*

# Verify the installation

$ kubectl version

Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.3", GitCommit:"f0efb3cb883751c5ffdbe6d515f3cb4fbe7b7acd", GitTreeState:"clean", BuildDate:"2017-11-08T18:39:33Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.3", GitCommit:"f0efb3cb883751c5ffdbe6d515f3cb4fbe7b7acd", GitTreeState:"clean", BuildDate:"2017-11-08T18:27:48Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
2.9.2 Create the admin certificate
kubectl talks to kube-apiserver over the secure port, which requires TLS certificates and keys for the communication.
cd /opt/ssl/

vi admin-csr.json

{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "ShenZhen",
      "L": "ShenZhen",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}

# Generate the admin certificate and private key
cd /opt/ssl/

cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem \
-ca-key=/etc/kubernetes/ssl/ca-key.pem \
-config=/opt/ssl/config.json \
-profile=kubernetes admin-csr.json | cfssljson -bare admin

# Check the output

[root@k8s-master ssl]# ls admin*
admin.csr admin-csr.json admin-key.pem admin.pem

cp admin*.pem /etc/kubernetes/ssl/

scp admin*.pem 192.168.90.6:/etc/kubernetes/ssl/

scp admin*.pem 192.168.90.7:/etc/kubernetes/ssl/
2.9.3 Configure the kubectl kubeconfig file
Set server to the local IP; each machine connects to its own API endpoint.
# Configure the kubernetes cluster

kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=

# Configure client authentication

kubectl config set-credentials admin \
--client-certificate=/etc/kubernetes/ssl/admin.pem \
--embed-certs=true \
--client-key=/etc/kubernetes/ssl/admin-key.pem

kubectl config set-context kubernetes \
--cluster=kubernetes \
--user=admin

kubectl config use-context kubernetes
2.9.4 The kubectl config file
# The kubeconfig file lives at:

/root/.kube
2.10 Deploy the Kubernetes Master node
2.10.1 Deploy the Master components of the Master node
The Master needs three components: kube-apiserver, kube-scheduler, and kube-controller-manager. kube-scheduler decides which node each pod is placed on; in short, resource scheduling. kube-controller-manager runs the control loops (deployment controller, replication controller, endpoints controller, namespace controller, serviceaccounts controller, and so on) and interacts with kube-apiserver.
2.10.2 Install the Master node components
# Download the release from GitHub

cd /tmp

wget

tar -xzvf kubernetes-server-linux-amd64.tar.gz

cd kubernetes

cp -r server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubectl,kube-proxy,kubelet} /usr/local/bin/
2.10.3 Create the kubernetes certificate
cd /opt/ssl

vi kubernetes-csr.json

{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "192.168.100.71",
    "10.254.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "ShenZhen",
      "L": "ShenZhen",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

# The IPs in the hosts field: 127.0.0.1 is the local host; 192.168.100.71 is the Master's IP (with multiple Masters, list them all); 10.254.0.1 is the kubernetes SVC IP, normally the first IP of the service network. After startup, kubectl get svc will show it.
2.10.4 Generate the kubernetes certificate and private key
cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem \
-ca-key=/etc/kubernetes/ssl/ca-key.pem \
-config=/opt/ssl/config.json \
-profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes

# Check the output

[root@k8s-master-25 ssl]# ls -l kubernetes*
kubernetes.csr
kubernetes-key.pem
kubernetes.pem
kubernetes-csr.json

# Copy into the directory
cp -r kubernetes*.pem /etc/kubernetes/ssl/

scp -r kubernetes*.pem 192.168.90.6:/etc/kubernetes/ssl/

scp -r kubernetes*.pem 192.168.90.7:/etc/kubernetes/ssl/
2.10.5 Configure kube-apiserver
When kubelet first starts it sends a TLS Bootstrapping request to kube-apiserver, which checks whether the token in the request matches its configured token; if it does, kube-apiserver automatically generates the kubelet's certificate and key.
# Generate a token

[root@k8s-master ssl]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
d59a702004f33c659640bf8dd2717b64    # record this value

# Create the token.csv file

cd /opt/ssl

vi token.csv

d59a702004f33c659640bf8dd2717b64,kubelet-bootstrap,10001,"system:kubelet-bootstrap"

# Copy it around

cp token.csv /etc/kubernetes/

scp token.csv 192.168.90.6:/etc/kubernetes/

scp token.csv 192.168.90.7:/etc/kubernetes/
2.10.5.1. Create the kube-apiserver.service file
# New in 1.8: (Node) --authorization-mode=Node,RBAC
# Custom system service files normally live under /etc/systemd/system/
# Set the addresses to each machine's own local IP

vi /etc/systemd/system/kube-apiserver.service

[Unit]
Description=Kubernetes API Server
Documentation=
After=network.target

[Service]
User=root
ExecStart=/usr/local/bin/kube-apiserver \
--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
--advertise-address=192.168.100.71 \
--allow-privileged=true \
--apiserver-count=3 \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/var/lib/audit.log \
--authorization-mode=Node,RBAC \
--bind-address=192.168.100.71 \
--client-ca-file=/etc/kubernetes/ssl/ca.pem \
--enable-swagger-ui=true \
--etcd-cafile=/etc/kubernetes/ssl/ca.pem \
--etcd-certfile=/etc/kubernetes/ssl/etcd.pem \
--etcd-keyfile=/etc/kubernetes/ssl/etcd-key.pem \
--etcd-servers= \
--event-ttl=1h \
--kubelet-https=true \
--insecure-bind-address=192.168.100.71 \
--runtime-config=rbac.authorization.k8s.io/v1alpha1 \
--service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
--service-cluster-ip-range=10.254.0.0/16 \
--service-node-port-range=30000-32000 \
--tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
--enable-bootstrap-token-auth \
--token-auth-file=/etc/kubernetes/token.csv \
--v=2
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

# Note --service-node-port-range=30000-32000 here:
# this is the port range for externally mapped ports; randomly assigned NodePorts fall inside this range, and explicitly requested ports must be inside it as well. An example follows.
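For illustration, a minimal NodePort Service sketch (generic, not part of this manual) whose nodePort must lie inside that range or the apiserver rejects it:

apiVersion: v1
kind: Service
metadata:
  name: example-nodeport    # hypothetical name
spec:
  type: NodePort
  selector:
    app: example-app        # hypothetical selector
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30080         # must fall within --service-node-port-range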
2.10.5.2. Start kube-apiserver
systemctl daemon-reload

systemctl enable kube-apiserver

systemctl start kube-apiserver

systemctl status kube-apiserver
2.10.6 Configure kube-controller-manager
Set master to each machine's own local IP
2.10.6.1. Create the kube-controller-manager.service file
vi /etc/systemd/system/kube-controller-manager.service

[Unit]
Description=Kubernetes Controller Manager
Documentation=

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
--address=127.0.0.1 \
--master= \
--allocate-node-cidrs=true \
--service-cluster-ip-range=10.254.0.0/16 \
--cluster-cidr=10.233.0.0/16 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
--service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \
--root-ca-file=/etc/kubernetes/ssl/ca.pem \
--leader-elect=true \
--v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
2.10.6.2. Start kube-controller-manager
systemctl daemon-reload

systemctl enable kube-controller-manager

systemctl start kube-controller-manager

systemctl status kube-controller-manager
2.10.7 Configure kube-scheduler
Set master to each machine's own local IP
2.10.7.1. Create the kube-scheduler.service file
vi /etc/systemd/system/kube-scheduler.service

[Unit]
Description=Kubernetes Scheduler
Documentation=

[Service]
ExecStart=/usr/local/bin/kube-scheduler \
--address=127.0.0.1 \
--master= \
--leader-elect=true \
--v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
2.10.7.2. Start kube-scheduler
systemctl daemon-reload

systemctl enable kube-scheduler

systemctl start kube-scheduler

systemctl status kube-scheduler
2.10.8 Verify the Master node
[root@k8s-master ~]# kubectl get componentstatuses
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health": "true"}
etcd-2 Healthy {"health": "true"}
etcd-1 Healthy {"health": "true"}
2.10.9 Deploy the Node components on the Master node
The Node side needs these components: docker, calico, kubectl, kubelet, and kube-proxy.
2.10.10 Configure kubelet
When kubelet starts it sends a TLS bootstrapping request to kube-apiserver. The kubelet-bootstrap user from the bootstrap token file must first be bound to the system:node-bootstrapper role; only then does kubelet have permission to create certificate signing requests (certificatesigningrequests).
2.10.10.1. Create the certificate-request binding first
# user is the one configured in token.csv on the master
# This only needs to be created once

kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
2.10.10.2. Create the kubelet kubeconfig file
Set server to the master's own IP
# Configure the cluster

kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server= \
--kubeconfig=bootstrap.kubeconfig

# Configure client authentication

kubectl config set-credentials kubelet-bootstrap \
--token=d59a702004f33c659640bf8dd2717b64 \
--kubeconfig=bootstrap.kubeconfig

# Configure the context

kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=bootstrap.kubeconfig

# Use the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

# Move the generated bootstrap.kubeconfig file into place

mv bootstrap.kubeconfig /etc/kubernetes/
2.10.10.3. Create the kubelet.service file
# Create the kubelet directory

# Set the addresses to the node's own IP

mkdir /var/lib/kubelet

vi /etc/systemd/system/kubelet.service

[Unit]
Description=Kubernetes Kubelet
Documentation=
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/local/bin/kubelet \
--address=192.168.100.71 \
--hostname-override=192.168.100.71 \
--pod-infra-container-image=jicki/pause-amd64:3.0 \
--experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \
--kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
--cert-dir=/etc/kubernetes/ssl \
--cluster_dns=10.254.0.2 \
--cluster_domain=doone.com. \
--hairpin-mode promiscuous-bridge \
--allow-privileged=true \
--fail-swap-on=false \
--serialize-image-pulls=false \
--logtostderr=true \
--max-pods=512 \
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
# Notes on the configuration above:
192.168.100.71 is the local machine's IP
10.254.0.2 is the pre-allocated dns address
cluster.local. is the kubernetes cluster's domain (this manual overrides it to doone.com.)
jicki/pause-amd64:3.0 is the pod base image, i.e. gcr's gcr.io/google_containers/pause-amd64:3.0; pulling it into your own registry and referencing that is faster.
2.10.10.4. Start kubelet
systemctl daemon-reload

systemctl enable kubelet

systemctl start kubelet

systemctl status kubelet
# If it errors out, use
journalctl -f -t kubelet and journalctl -u kubelet to track down the problem
2.10.10.5. Configure TLS authentication
# Look up the csr name

[root@k8s-master]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-pf-Bb5Iqx6ccvVA67gLVT-G4Zl3Zl5FPUZS4d7V6rk4 1h kubelet-bootstrap Pending

# Approve it

[root@k8s-master]# kubectl certificate approve node-csr-pf-Bb5Iqx6ccvVA67gLVT-G4Zl3Zl5FPUZS4d7V6rk4
certificatesigningrequest "node-csr-pf-Bb5Iqx6ccvVA67gLVT-G4Zl3Zl5FPUZS4d7V6rk4" approved
[root@k8s-master]#
2.10.10.6. Verify the nodes
[root@k8s-master]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
192.168.100.71 Ready 22s v1.8.3

# On success, the config file and keys are generated automatically

# Config file

ls /etc/kubernetes/kubelet.kubeconfig
/etc/kubernetes/kubelet.kubeconfig
# Key files

ls /etc/kubernetes/ssl/kubelet*
/etc/kubernetes/ssl/kubelet-client.crt /etc/kubernetes/ssl/kubelet.crt
/etc/kubernetes/ssl/kubelet-client.key /etc/kubernetes/ssl/kubelet.key
2.10.11 Configure kube-proxy
2.10.11.1. Create the kube-proxy certificate
# The node machines do not have cfssl installed,
# so go back to the master, create the certificates there, and copy them over

[root@k8s-master ~]# cd /opt/ssl

vi kube-proxy-csr.json

{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "ShenZhen",
      "L": "ShenZhen",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
2.10.11.2. Generate the kube-proxy certificate and private key
cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem \
-ca-key=/etc/kubernetes/ssl/ca-key.pem \
-config=/opt/ssl/config.json \
-profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

# Check the output
ls kube-proxy*
kube-proxy.csr kube-proxy-csr.json kube-proxy-key.pem kube-proxy.pem

# Copy into the directory
cp kube-proxy*.pem /etc/kubernetes/ssl/

scp kube-proxy*.pem 192.168.90.6:/etc/kubernetes/ssl/

scp kube-proxy*.pem 192.168.90.7:/etc/kubernetes/ssl/
2.10.11.3. Create the kube-proxy kubeconfig file
Set server to each machine's own IP
# Configure the cluster

kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server= \
--kubeconfig=kube-proxy.kubeconfig

# Configure client authentication

kubectl config set-credentials kube-proxy \
--client-certificate=/etc/kubernetes/ssl/kube-proxy.pem \
--client-key=/etc/kubernetes/ssl/kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig

# Configure the context

kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig

# Use the default context
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

# Copy into the directory
mv kube-proxy.kubeconfig /etc/kubernetes/
2.10.11.4. Create the kube-proxy.service file
Set the addresses to each machine's own IP
# Create the kube-proxy directory

mkdir -p /var/lib/kube-proxy

vi /etc/systemd/system/kube-proxy.service

[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=
After=network.target

[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/local/bin/kube-proxy \
--bind-address=192.168.100.71 \
--hostname-override=192.168.100.71 \
--cluster-cidr=10.254.0.0/16 \
--kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig \
--logtostderr=true \
--v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
2.10.11.5. Start kube-proxy
systemctl daemon-reload

systemctl enable kube-proxy

systemctl start kube-proxy

systemctl status kube-proxy

# If it errors out, use
journalctl -f -t kube-proxy and journalctl -u kube-proxy to track down the problem
2.11 Deploy the Kubernetes worker nodes
The worker nodes use an Nginx load balancer in front of the API for Master HA: 192.168.90.6, 192.168.90.7.
# Apart from the api server, the master components elect a leader through etcd, so the api server needs no special handling by default. Each node runs an nginx that reverse-proxies all api servers; kubelet and kube-proxy on the node connect to the local nginx proxy port, and when nginx finds a backend unreachable it automatically kicks the broken api server out, which yields api-server HA.
2.11.1 Install the Node components
# Download the release from GitHub

cd /tmp

wget

tar -xzvf kubernetes-server-linux-amd64.tar.gz

cd kubernetes

cp -r server/bin/{kube-proxy,kubelet} /usr/local/bin/

# ALL node

mkdir -p /etc/kubernetes/ssl/

scp ca.pem kube-proxy.pem kube-proxy-key.pem 192.168.90.6:/etc/kubernetes/ssl/
scp ca.pem kube-proxy.pem kube-proxy-key.pem 192.168.90.7:/etc/kubernetes/ssl/
2.11.2 Create the kubelet kubeconfig file
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server= \
--kubeconfig=bootstrap.kubeconfig

# Configure client authentication

kubectl config set-credentials kubelet-bootstrap \
--token=d59a702004f33c659640bf8dd2717b64 \
--kubeconfig=bootstrap.kubeconfig

# Configure the context

kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=bootstrap.kubeconfig

# Use the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

# Move the generated bootstrap.kubeconfig file into place

mv bootstrap.kubeconfig /etc/kubernetes/
2.11.3 Create the kubelet.service file
Same as on the Master node
2.11.4 Start kubelet
Same as on the Master node
2.11.5 Create the kube-proxy kubeconfig file
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server= \
--kubeconfig=kube-proxy.kubeconfig

# Configure client authentication

kubectl config set-credentials kube-proxy \
--client-certificate=/etc/kubernetes/ssl/kube-proxy.pem \
--client-key=/etc/kubernetes/ssl/kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig

# Configure the context

kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig

# Use the default context
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

# Copy into the directory
mv kube-proxy.kubeconfig /etc/kubernetes/
2.11.6 Create the kube-proxy.service file
Same as on the Master node
2.11.7 Start kube-proxy
Same as on the Master node
2.12 Create the Nginx proxy
Every node must run an Nginx proxy. Note in particular: when the Master also acts as a Node, it does not need the nginx-proxy.
# Create the configuration directory
mkdir -p /etc/nginx

# Write the proxy configuration

cat << EOF > /etc/nginx/nginx.conf
error_log stderr notice;

worker_processes auto;
events {
multi_accept on;
use epoll;
worker_connections 1024;
}

stream {
upstream kube_apiserver {
least_conn;
server 192.168.100.71:6443;
}

server {
listen 0.0.0.0:6443;
proxy_pass kube_apiserver;
proxy_timeout 10m;
proxy_connect_timeout 1s;
}
}
EOF

# Run Nginx as a docker container, started through systemd

cat << EOF > /etc/systemd/system/nginx-proxy.service

[Unit]
Description=kubernetes apiserver docker wrapper
Wants=docker.socket
After=docker.service

[Service]
User=root
PermissionsStartOnly=true
ExecStart=/usr/bin/docker run -p 127.0.0.1:6443:6443 \\
-v /etc/nginx:/etc/nginx \\
--name nginx-proxy \\
--net=host \\
--restart=on-failure:5 \\
--memory=512M \\
nginx:1.13.5-alpine
ExecStartPre=-/usr/bin/docker rm -f nginx-proxy
ExecStop=/usr/bin/docker stop nginx-proxy
Restart=always
RestartSec=15s
TimeoutStartSec=30s

[Install]
WantedBy=multi-user.target
EOF

# Start Nginx

systemctl daemon-reload
systemctl start nginx-proxy
systemctl enable nginx-proxy
systemctl status nginx-proxy

# Restart kubelet and kube-proxy on the Node

systemctl restart kubelet
systemctl status kubelet

systemctl restart kube-proxy
systemctl status kube-proxy
2.13 Approve TLS certificate requests on the Master
# Look up the csr name

[root@k8s-master]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-pf-Bb5Iqx6ccvVA67gLVT-G4Zl3Zl5FPUZS4d7V6rk4 1h kubelet-bootstrap Pending

# Approve it

[root@k8s-master]# kubectl certificate approve NAME
2.14 Deploy the Calico network
2.14.1 Edit kubelet.service
On every node
vi /etc/systemd/system/kubelet.service

# Add the following flag

--network-plugin=cni \

# Reload the configuration
systemctl daemon-reload
systemctl restart kubelet.service
systemctl status kubelet.service
2.14.2 Fetch the Calico configuration
Calico is still deployed in a "hybrid" fashion: systemd controls the calico node, while the CNI pieces are installed by a kubernetes daemonset.
# Fetch calico.yaml
$ export CALICO_CONF_URL="
$ wget "${CALICO_CONF_URL}/calico-controller.yml.conf" -O calico-controller.yml

$ kubectl apply -f calico-controller.yml

$ kubectl -n kube-system get po -l k8s-app=calico-policy
NAME READY STATUS RESTARTS AGE
calico-policy-controller-5ff8b4549d-tctmm 0/1 Pending 0 5s
The ETCD cluster IP addresses inside the yaml file must be edited to match your cluster; a sketch of the kind of line to change follows.
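In Calico controller manifests of this vintage, the etcd address typically appears as an environment variable on the controller container; a sketch of the edit meant here (the variable name ETCD_ENDPOINTS, the https scheme, and the client port 2379 are assumptions to adapt to your actual manifest):

- name: ETCD_ENDPOINTS
  value: "https://192.168.100.71:2379,https://192.168.90.6:2379,https://192.168.90.7:2379"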
2.14.3 Download Calico on every node
$ export CALICO_URL=”
$ wget -N -P /opt/cni/bin ${CALICO_URL}/calico
$ wget -N -P /opt/cni/bin ${CALICO_URL}/calico-ipam
$ chmod +x /opt/cni/bin/calico /opt/cni/bin/calico-ipam
2.14.4 Download the CNI plugins configuration file on every node
$ mkdir -p /etc/cni/net.d
$ export CALICO_CONF_URL=”
$ wget "${CALICO_CONF_URL}/10-calico.conf" -O /etc/cni/net.d/10-calico.conf

vi 10-calico.conf

{
  "name": "calico-k8s-network",
  "cniVersion": "0.1.0",
  "type": "calico",
  "etcd_endpoints": "
  "etcd_ca_cert_file": "/etc/kubernetes/ssl/ca.pem",
  "etcd_cert_file": "/etc/kubernetes/ssl/etcd.pem",
  "etcd_key_file": "/etc/kubernetes/ssl/etcd-key.pem",
  "log_level": "info",
  "ipam": {
    "type": "calico-ipam"
  },
  "policy": {
    "type": "k8s"
  },
  "kubernetes": {
    "kubeconfig": "/etc/kubernetes/kubelet.kubeconfig"
  }
}
2.14.5 Create the calico-node.service file
The previous step commented out the Calico Node parts of calico.yaml; to keep automatic IP acquisition from breaking, calico-node is moved into systemd. The systemd service configuration follows; the calico-node service must be installed on every node, with the IP adjusted on each.
cat > /usr/lib/systemd/system/calico-node.service <<EOF
2.16 Deploy Ingress
Kubernetes currently exposes services in only three ways: LoadBalancer Service, NodePort Service, and Ingress. What is Ingress? Ingress uses a load-balancing tool such as Nginx or HAProxy to expose Kubernetes services.
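For orientation, once the controller below is running, a service is exposed by creating an Ingress object; a minimal sketch for this manual's k8s 1.8-era API (the host and backend names are made-up illustrations, not part of the original manual):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  namespace: default
spec:
  rules:
  - host: example.doone.com        # hypothetical hostname
    http:
      paths:
      - path: /
        backend:
          serviceName: example-app # hypothetical backend Service
          servicePort: 80

Requests for that host reaching the controller are routed to the backend Service; anything unmatched falls through to the default backend deployed below.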
2.16.1 Label the scheduling node
# ingress can be deployed in more than one way: 1. deployment, freely scheduled via replicas
2. daemonset, globally scheduled onto every node

# With deployment scheduling, the controller must be constrained to a chosen node, so that node needs a label

# The defaults:
[root@k8s-master dashboard]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
192.168.100.71 Ready 9d v1.8.3
192.168.90.6 Ready 9d v1.8.3
192.168.90.7 Ready 9d v1.8.3

# Label node 71
kubectl label nodes 192.168.100.71 ingress=proxy

# After labeling
[root@k8s-master dashboard]# kubectl get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
192.168.100.71 Ready 9d v1.8.3 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,ingress=proxy,kubernetes.io/hostname=192.168.100.71
192.168.90.6 Ready 9d v1.8.3 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=192.168.90.6
192.168.90.7 Ready 9d v1.8.3 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=192.168.90.7
2.16.2 Download the Ingress images
# Official images
gcr.io/google_containers/defaultbackend:1.0
gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.17

# Mirrors hosted in China
jicki/defaultbackend:1.0
jicki/nginx-ingress-controller:0.9.0-beta.17
2.16.3 Download the yaml files
# Deploy the Nginx backend; it uniformly forwards unmatched domains to a default page.
curl -O

# Deploy the Ingress RBAC objects
curl -O

# Deploy the Ingress Controller component
curl -O
2.16.4 Ingress yaml file templates
# default-backend.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    app: default-http-backend
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        # Any image is permissable as long as:
        # 1. It serves a 404 page at /
        # 2. It serves 200 on a /healthz endpoint
        image: jicki/defaultbackend:1.4
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: kube-system
  labels:
    app: default-http-backend
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: default-http-backend

# rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - endpoints
  - nodes
  - pods
  - secrets
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - "extensions"
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - "extensions"
  resources:
  - ingresses/status
  verbs:
  - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: kube-system
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - pods
  - secrets
  - namespaces
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - configmaps
  resourceNames:
  # Defaults to "
  # Here: "
  # This has to be adapted if you change either parameter
  # when launching the nginx-ingress-controller.
  - "ingress-controller-leader-nginx"
  verbs:
  - get
  - update
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - create
- apiGroups:
  - ""
  resources:
  - endpoints
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
- kind: ServiceAccount
  name: nginx-ingress-serviceaccount
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
- kind: ServiceAccount
  name: nginx-ingress-serviceaccount
  namespace: kube-system

# with-rbac.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
      annotations:
        prometheus.io/port: '10254'
        prometheus.io/scrape: 'true'
    spec:
      hostNetwork: true
      serviceAccountName: nginx-ingress-serviceaccount
      nodeSelector:
        ingress: proxy
      containers:
      - name: nginx-ingress-controller
        image: jicki/nginx-ingress-controller:0.9.0-beta.17
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
        - --apiserver-host=
        # - --configmap=$(POD_NAMESPACE)/nginx-configuration
        # - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
        # - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: KUBERNETES_MASTER
          value:
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
2.16.5 Import the yaml files
[root@k8s-master ingress]# kubectl apply -f default-backend.yaml
deployment "default-http-backend" created
service "default-http-backend" created

[root@k8s-master ingress]# kubectl apply -f rbac.yml
namespace "nginx-ingress" created
serviceaccount "nginx-ingress-serviceaccount" created
clusterrole "nginx-ingress-clusterrole" created
role "nginx-ingress-role" created
rolebinding "nginx-ingress-role-nisa-binding" created
clusterrolebinding "nginx-ingress-clusterrole-nisa-binding" created

[root@k8s-master ingress]# kubectl apply -f with-rbac.yaml
deployment "nginx-ingress-controller" created
2.16.6 Check the ingress services
[root@k8s-master ingress]# kubectl get svc -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default-http-backend ClusterIP 10.254.194.21 80/TCP 2d
kube-dns ClusterIP 10.254.0.2 53/UDP,53/TCP 3d
kubernetes-dashboard ClusterIP 10.254.4.173 80/TCP 2d

[root@k8s-master ingress]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
calico-policy-controller-6dfdc6c556-qp29z 1/1 Running 3 6d
default-http-backend-7f47b7d69b-fcwdw 1/1 Running 0 2d
kube-dns-fb8bf5848-jfzrs 3/3 Running 0 3d
kubernetes-dashboard-c8f5ff7f8-f9pfp 1/1 Running 0 2d
nginx-ingress-controller-5759c8464f-hhkkz 1/1 Running 0 7h
2.17 Deploy the Dashboard
2.17.1 Download the dashboard image
# Official image
gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.3

# Mirror hosted in China
jicki/kubernetes-dashboard-amd64:v1.6.3
2.17.2 Download the yaml files
curl -O

curl -O

# RBAC is enabled, so an RBAC binding has to be created here

vi dashboard-rbac.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  name: dashboard
subjects:
- kind: ServiceAccount
  name: dashboard
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
2.17.3 Dashboard yaml file templates
# dashboard-controller.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      serviceAccountName: dashboard
      containers:
      - name: kubernetes-dashboard
        image: jicki/kubernetes-dashboard-amd64:v1.6.3
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 100m
            memory: 300Mi
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 9090
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"

# dashboard-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 80
    targetPort: 9090
2.17.4 Import the yaml files
[root@k8s-master dashboard]# kubectl apply -f .
deployment "kubernetes-dashboard" created
serviceaccount "dashboard" created
clusterrolebinding "dashboard" created
service "kubernetes-dashboard" created
2.17.5 Check the Dashboard service
[root@k8s-master dashboard]# kubectl get svc -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default-http-backend ClusterIP 10.254.194.21 80/TCP 2d
kube-dns ClusterIP 10.254.0.2 53/UDP,53/TCP 3d
kubernetes-dashboard ClusterIP 10.254.4.173 80/TCP 2d

[root@k8s-master dashboard]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
calico-policy-controller-6dfdc6c556-qp29z 1/1 Running 3 6d
default-http-backend-7f47b7d69b-fcwdw 1/1 Running 0 2d
kube-dns-fb8bf5848-jfzrs 3/3 Running 0 3d
kubernetes-dashboard-c8f5ff7f8-f9pfp 1/1 Running 0 2d
nginx-ingress-controller-5759c8464f-hhkkz 1/1 Running 0 7h