[Hangzhou] Hiring a Java Backend Developer

Responsibilities: 1. Design and develop the company's core products; 2. Build the backend APIs of the system; 3. Integrate the company's various platforms; 4. Design and extend the system sensibly according to business needs. Requirements: 1. Solid Java fundamentals; proficient in multithreading, concurrency, collections, networking, and I/O; familiar with HTTP, TCP/IP, and related protocols; 2. Proficient with common frameworks such as Spring, Spring MVC, Spring Boot, and MyBatis/iBatis, and familiar with their underlying principles; 3. Familiar with MQ, Dubbo, Spring Cloud, ZooKeeper, and other open-source technologies, including how they work internally; 4. Familiar with MySQL, MongoDB, Redis and their internals; strong SQL writing and tuning skills; 5. Experience in funds clearing or related industries is a plus; 6. Bachelor's degree or above, 3+ years of development experience; this can be relaxed for candidates who have shipped large, mature projects. The company is in Binjiang District, Hangzhou, near Xixing metro station. No weekend overtime, good team atmosphere, a MacBook Pro with 16G RAM for everyone, plus large monitors, comfortable chairs, and standing desks. Weekly sports day: basketball, football, badminton, and more. Contact wx: d3hpZF9rN3p0dDJzOWUxcGoxMg==

EmacsTalk: A Chat with the Author of "Master Emacs in One Year"

How to listen to this episode:

This episode's guest is a veteran who has been around Emacs for nearly ten years. He is the author of the well-known article "Master Emacs in One Year" (《一年成为 Emacs 高手》) and a developer with close to twenty years of software development experience.
After nearly ten years of contact with Emacs, he became an Emacs master in just one year. How did he do it? And what prompted him to open the Pandora's box that is Emacs in the first place?
As the maintainer of a large number of open-source plugins, how does he handle unhappy users? How does he balance everyday life with the pressure of maintenance? The guest gives his own answers. The episode also touches on how Emacs differs in philosophy from editors such as VSCode and Vim; this kind of soft knowledge goes a long way toward a deeper understanding of Emacs. Nearly an hour and a half of dense, "juicy" content, not to be missed!
In this episode

Host: 西瓜
Guest: the author of "Master Emacs in One Year"

Timeline

00:00:42 Guest self-introduction
00:03:34 Why he wrote "Master Emacs in One Year"
00:08:18 How long he had been using Emacs before writing "Master Emacs in One Year"
00:10:16 Why Emacs is not exactly "works out of the box"
00:12:54 How the guest worked out his own Emacs best practices
00:15:36 Some Emacs tips from the guest
00:21:30 The openness of Emacs and how it differs from other editors: "messy" vs. "free"
00:26:31 The guest's experience learning ELisp
00:30:02 The guest's attitude toward maintaining open-source projects
00:39:35 The guest's advice for beginners
00:43:53 The guest's current Emacs workflow
00:54:11 Why Emacs performs poorly on Windows
01:01:58 What the Emacs core development team is like
01:08:13 The guest's advice for people entering the industry
01:13:20 Recommendations

Show Notes

About the guest
Name:
19 years of work experience; started in desktop development, now mainly web development
Zhihu column: 如何提高编程速度 (How to Improve Your Programming Speed)

How he got into Emacs
Started with Emacs in 2011, when he was about 40
After half a year of struggling, he stumbled upon Steve Purcell's configuration and finally started to get the hang of it

Emacs tips
find-file-in-project for finding files within a project
ELisp has a rich API, and different plugins can call into each other

ELisp learning experience
Picked it up gradually; never studied it deliberately

Attitude toward open source
Relaxed: software is rarely perfect at the start; just keep improving it gradually

Advice for beginners learning Emacs

Current workflow
js2-mode
org-mode for recording notes and reflections
A mix of magit and the git command line
Code navigation with counsel-etags and completion with company-ctags, both based on ctags
shell-mode; his new shellcop plugin can jump straight to files referenced in shell output
counsel + ivy
gnus for sending and receiving email
dired for managing video files, played with mplayer

About the Emacs core development team
How much longer can Emacs survive?

Advice for newcomers to the industry
Don't confine yourself to one small niche; studying some humanities / art can refine your taste

Guest's recommendations
evil: lets you master Emacs and Vim at the same time, and it completely cures pinky pain
abo-abo's ivy + swiper + counsel suite
vc-msg: shows the git info for the current line
evil-matchit: jump between matching tags

Host's recommendation
Raycast, an app launcher for macOS similar to Alfred, but free and with clipboard 📋 management

Ending music:

Get in Touch

Search WeChat for the official account 「EmacsTalk」
QQ listener group: 530146104
Telegram listener group:
For ways to listen, see:
If you like this show, you are welcome to support it via 「❤️发电」

Selling home-grown fruit from Dali County, Shaanxi (picked and shipped the same day)

2019 thread /t/580627: 100% positive feedback, learned a lot
2020 thread /t/685168: ran a giveaway; not doing one this year

Introduction

Genuine: grown in Yangjiazhuang, Dali County, Shaanxi
Crisp and sweet: grown in a greenhouse (heated)
Fruit currently runs about 15-23 g each; only ripe ones are picked, not sorted by size
Greenhouse growing plus a dry climate means little pesticide is needed; none has been used for 3 months now. My own kids eat them without a second thought.
Shipped directly after picking, with no treatment.
Ripe now; should be available for about 3-4 more weeks

Price

4 jin for 99 yuan (about 4.2-4.3 jin actually packed, to allow for weight loss in transit)
For this variety that price is already quite favorable; overall it works out to only 2-5 yuan per jin above wholesale. I suggested wholesaling everything to save time and effort, but the family feels every extra yuan counts. Last year I quietly subsidized part of it myself; as long as they are happy 😎😁
The listed price is already the discounted price; most buyers are repeat customers from this site anyway, and changing prices is too much hassle since this is not my full-time job.
Taobao purchase link
Taobao share code (淘口令)

3.0 hihi:/! x6B5XjAX4Jp 微 Dali fruit · 4 jin, free shipping · picked and shipped the same day

Shipping

Normally, if you order today it ships tomorrow (no shipping to Xinjiang, Tibet, Hainan, or Inner Mongolia for now; the courier company says delivery times cannot be guaranteed there)
The fruit is picked and packed in the morning and taken to the courier point in town at noon
Shipped with Yunda Express, in a foam box inside a carton with padding

After-sales

If any arrive damaged, I'll compensate proportionally; if 30% or more is damaged, full refund.

Storage

Unwashed, intact fruit keeps for about 2 weeks in the fridge;
once washed, it keeps for only 2-3 days, so eat washed fruit as soon as possible.

Product photos
This year's latest photos will be added later

[Remote] Part-time full-stack development engineer wanted

Who we are
We are a growing software team. Since our founding in 2016 we have helped many US startups grow. We focus on providing high-quality technical support to internet startups, covering mobile and web applications in social, payments, tools, content, image processing, and more. We are now looking in this community for a part-time full-stack engineer to help us coordinate development across different projects.
If you can adapt to remote work, care about project quality and development efficiency, and love learning and exploring the latest technology, what are you waiting for? Come join us~ We also welcome friends overseas in other time zones to discuss arrangements for other working hours.
Job description

Type: part-time (can convert to full-time later depending on the project and your performance)
Location: remote
Hours: 4-6 hours per working day, with at least 4 of those hours between 9 am and 6 pm
Pay: 100-150 RMB/h depending on experience (negotiable for outstanding candidates)

Requirements

5+ years of full-stack development experience, including projects with separated front end and back end;
Front end: proficient in any two of React, Vue, and Angular
Back end: proficient in any two of Node.js, PHP, and Python
Familiarity with React Native or Flutter cross-platform mobile development is a plus
Experience with architecture and high-concurrency development is a plus
Able to understand project requirements and estimate development effort
Good English reading and writing skills
A quiet working environment and a fast network connection
Remote work experience is a plus

Contact

Email: jobs@theteam247.com
Phone / WeChat: 18971229147

Model Y: Standard Range or Long Range?

Guys, I'm planning to order a Model Y as a birthday present for myself; it will also be my first car. After Tesla's price increase the Standard Range is over 300k RMB, so the subsidy is gone too. The gap between the Standard Range and the Long Range is now only 46k RMB, so I'm torn about which trim to pick and would like to hear your advice.
Most of the time the car will be used within the city; occasionally I'd consider road trips within the province. My hometown is about 1,400 km away; I would rarely drive back, but since my family chipped in, I'll probably drive it home once to show them.

Setting up a Kubernetes cluster by hand: configuration files and node deployment

K8s Platform Setup Manual
1. Environment

2. Installation steps
2.1 Initialize the environment
Run on every server
# Edit the /etc/hosts file on every server so the hosts can reach each other by hostname
vi /etc/hosts

192.168.100.71 k8s-master.doone.com
192.168.90.6 k8s-slave1.doone.com
192.168.90.7 k8s-slave2.doone.com
2.2 Disable the firewall
Run on every server
systemctl stop firewalld.service       # stop firewalld

systemctl disable firewalld.service    # prevent firewalld from starting at boot

firewall-cmd --state                   # check the firewall state ("not running" when disabled, "running" when enabled)
2.3 Disable SELinux
Run on every server
$ setenforce 0

$ vim /etc/selinux/config
SELINUX=disabled
2.4 Disable swap
Run on every server. Kubernetes must use real memory, not swap.
$ swapoff -a

$ vim /etc/fstab
Comment out the swap partition entry in /etc/fstab and that's it.
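If you prefer to do this non-interactively, a minimal sketch (assuming the /etc/fstab swap entry contains the word "swap"):
# Disable swap now and comment out any swap entries so the change survives reboots
swapoff -a
sed -ri '/[[:space:]]swap[[:space:]]/ s/^/#/' /etc/fstab
# Verify: the Swap line should show 0
free -h | grep -i swap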
2.5 Install the Go environment (optional)
Download the Linux build of Go, unpack it, and set the environment variables
vi /etc/profile
export GOROOT=/usr/local/go
export PATH=$GOROOT/bin:$PATH

$ source /etc/profile
2.6 Create the K8s cluster certificates
2.6.1 Install cfssl
We use CloudFlare's PKI toolkit cfssl to generate the Certificate Authority (CA) certificate and key files.
cd /usr/local/bin

wget
mv cfssl_linux-amd64 cfssl

wget
mv cfssljson_linux-amd64 cfssljson

wget
mv cfssl-certinfo_linux-amd64 cfssl-certinfo

chmod +x *
2.6.2 Create the CA certificate config
mkdir /opt/ssl

cd /opt/ssl

# the config.json file

vi config.json

{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}

# the csr.json file

vi csr.json

{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "ShenZhen",
      "L": "ShenZhen",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
2.6.3 Generate the CA certificate and private key
cd /opt/ssl

cfssl gencert -initca csr.json | cfssljson -bare ca
This produces three files: ca.csr, ca-key.pem, and ca.pem.
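To sanity-check the new CA before distributing it, openssl (already present on CentOS) can be used; this is purely an optional verification step:
# The subject should show CN=kubernetes and Basic Constraints should show CA:TRUE
openssl x509 -in ca.pem -noout -subject -dates
openssl x509 -in ca.pem -noout -text | grep -A1 'Basic Constraints'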
2.6.4 Distribute the certificates
# Create the certificate directory
mkdir -p /etc/kubernetes/ssl

# Copy all the pem files into the directory
cp *.pem /etc/kubernetes/ssl

# The files must be copied to every k8s machine

scp *.pem 192.168.90.6:/etc/kubernetes/ssl/

scp *.pem 192.168.90.7:/etc/kubernetes/ssl/
2.7 Install Docker
Run on every server
2.7.1 Add the yum repo
# Install yum-config-manager

yum -y install yum-utils

# Add the repo
yum-config-manager \
  --add-repo \

# Refresh the repo cache
yum makecache
2.7.2 Install
yum install docker-ce -y
2.7.3 Adjust the Docker configuration
# Edit the unit file
vi /usr/lib/systemd/system/docker.service

[Unit]
Description=Docker Application Container Engine
Documentation=
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS $DOCKER_OPTS $DOCKER_DNS_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target

# Additional options

mkdir -p /usr/lib/systemd/system/docker.service.d/

vi /usr/lib/systemd/system/docker.service.d/docker-options.conf

# Add the following (note: the Environment entry must be on a single line; if it wraps it will not load)
# iptables=false leaves containers started with docker run unable to reach the network; it is used here because calico has some advanced features that need to restrict direct container-to-container traffic.
# In general do not add --iptables=false; it is only needed for calico.

[Service]
Environment="DOCKER_OPTS=--insecure-registry=10.254.0.0/16 --graph=/opt/docker --registry-mirror= --disable-legacy-registry --iptables=false"
2.7.4 Reload the configuration and start Docker
systemctl daemon-reload

systemctl start docker

systemctl enable docker
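A quick way to confirm the daemon picked up the options above (the field names come from standard docker info output; values depend on your environment):
# Docker Root Dir should point at /opt/docker if --graph took effect,
# and the registry mirror / insecure registry settings should be listed too
docker info | grep -E 'Docker Root Dir|Registry|Mirrors'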
2.8 Install the etcd cluster
etcd is the foundational component of a k8s cluster
2.8.1 Install etcd
Run on every server
yum -y install etcd
2.8.2 Create the etcd certificates
cd /opt/ssl/

vi etcd-csr.json

{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.100.71",
    "192.168.90.6",
    "192.168.90.7"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "ShenZhen",
      "L": "ShenZhen",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

# Generate the etcd certificate and key

cfssl gencert -ca=/opt/ssl/ca.pem \
  -ca-key=/opt/ssl/ca-key.pem \
  -config=/opt/ssl/config.json \
  -profile=kubernetes etcd-csr.json | cfssljson -bare etcd

# Check what was generated

[root@k8s-master ssl]# ls etcd*
etcd.csr etcd-csr.json etcd-key.pem etcd.pem

# Copy to the etcd servers

# etcd-1
cp etcd*.pem /etc/kubernetes/ssl/

# etcd-2
scp etcd*.pem 192.168.90.6:/etc/kubernetes/ssl/

# etcd-3
scp etcd*.pem 192.168.90.7:/etc/kubernetes/ssl/

# If etcd runs as a non-root user, it will get a permission error reading the key without this

chmod 644 /etc/kubernetes/ssl/etcd-key.pem
2.8.3 Configure etcd
Edit the etcd unit file /usr/lib/systemd/system/etcd.service
# etcd-1

vi /usr/lib/systemd/system/etcd.service

[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
User=etcd
# set GOMAXPROCS to number of processors
ExecStart=/usr/bin/etcd \
  --name=etcd1 \
  --cert-file=/etc/kubernetes/ssl/etcd.pem \
  --key-file=/etc/kubernetes/ssl/etcd-key.pem \
  --peer-cert-file=/etc/kubernetes/ssl/etcd.pem \
  --peer-key-file=/etc/kubernetes/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
  --peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
  --initial-advertise-peer-urls= \
  --listen-peer-urls= \
  --listen-client-urls= \
  --advertise-client-urls= \
  --initial-cluster-token=k8s-etcd-cluster \
  --initial-cluster=etcd1= \
  --initial-cluster-state=new \
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

# etcd-2

vi /usr/lib/systemd/system/etcd.service

[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
User=etcd
# set GOMAXPROCS to number of processors
ExecStart=/usr/bin/etcd \
  --name=etcd2 \
  --cert-file=/etc/kubernetes/ssl/etcd.pem \
  --key-file=/etc/kubernetes/ssl/etcd-key.pem \
  --peer-cert-file=/etc/kubernetes/ssl/etcd.pem \
  --peer-key-file=/etc/kubernetes/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
  --peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
  --initial-advertise-peer-urls= \
  --listen-peer-urls= \
  --listen-client-urls= \
  --advertise-client-urls= \
  --initial-cluster-token=k8s-etcd-cluster \
  --initial-cluster=etcd1= \
  --initial-cluster-state=new \
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

# etcd-3

vi /usr/lib/systemd/system/etcd.service

[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
User=etcd
# set GOMAXPROCS to number of processors
ExecStart=/usr/bin/etcd \
  --name=etcd3 \
  --cert-file=/etc/kubernetes/ssl/etcd.pem \
  --key-file=/etc/kubernetes/ssl/etcd-key.pem \
  --peer-cert-file=/etc/kubernetes/ssl/etcd.pem \
  --peer-key-file=/etc/kubernetes/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
  --peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
  --initial-advertise-peer-urls= \
  --listen-peer-urls= \
  --listen-client-urls= \
  --advertise-client-urls= \
  --initial-cluster-token=k8s-etcd-cluster \
  --initial-cluster=etcd1= \
  --initial-cluster-state=new \
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
2.8.4 Start etcd
Start the etcd service on every node
systemctl enable etcd

systemctl start etcd

systemctl status etcd
# If it fails, use
journalctl -f -t etcd and journalctl -u etcd to troubleshoot
2.8.5 Verify the etcd cluster
Check the etcd cluster health:
etcdctl --endpoints= \
  --cert-file=/etc/kubernetes/ssl/etcd.pem \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --key-file=/etc/kubernetes/ssl/etcd-key.pem \
  cluster-health

member 29262d49176888f5 is healthy: got healthy result from
member d4ba1a2871bfa2b0 is healthy: got healthy result from
member eca58ebdf44f63b6 is healthy: got healthy result from
cluster is healthy

List the etcd cluster members:
etcdctl --endpoints= \
  --cert-file=/etc/kubernetes/ssl/etcd.pem \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --key-file=/etc/kubernetes/ssl/etcd-key.pem \
  member list

29262d49176888f5: name=etcd3 peerURLs= clientURLs= isLeader=false
d4ba1a2871bfa2b0: name=etcd1 peerURLs= clientURLs= isLeader=true
eca58ebdf44f63b6: name=etcd2 peerURLs= clientURLs= isLeader=false
2.9 Install kubectl
On the master node, 192.168.100.71
2.9.1 Install kubectl on the master
# First install kubectl

wget
(if the download fails, fetch the binary file directly from GitHub)

tar -xzvf kubernetes-client-linux-amd64.tar.gz

cp kubernetes/client/bin/* /usr/local/bin/

chmod a+x /usr/local/bin/kube*

# Verify the installation

$ kubectl version

Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.3", GitCommit:"f0efb3cb883751c5ffdbe6d515f3cb4fbe7b7acd", GitTreeState:"clean", BuildDate:"2017-11-08T18:39:33Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.3", GitCommit:"f0efb3cb883751c5ffdbe6d515f3cb4fbe7b7acd", GitTreeState:"clean", BuildDate:"2017-11-08T18:27:48Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
2.9.2 Create the admin certificate
kubectl talks to kube-apiserver over the secure port, which requires a TLS certificate and key.
cd /opt/ssl/

vi admin-csr.json

{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "ShenZhen",
      "L": "ShenZhen",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}

# Generate the admin certificate and private key
cd /opt/ssl/

cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem \
  -ca-key=/etc/kubernetes/ssl/ca-key.pem \
  -config=/opt/ssl/config.json \
  -profile=kubernetes admin-csr.json | cfssljson -bare admin

# Check what was generated

[root@k8s-master ssl]# ls admin*
admin.csr admin-csr.json admin-key.pem admin.pem

cp admin*.pem /etc/kubernetes/ssl/

scp admin*.pem 192.168.90.6:/etc/kubernetes/ssl/

scp admin*.pem 192.168.90.7:/etc/kubernetes/ssl/
2.9.3 Generate the kubectl kubeconfig file
Set server to the local IP so each machine talks to its own API server
# Configure the kubernetes cluster entry

kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=

# Configure client credentials

kubectl config set-credentials admin \
  --client-certificate=/etc/kubernetes/ssl/admin.pem \
  --embed-certs=true \
  --client-key=/etc/kubernetes/ssl/admin-key.pem

kubectl config set-context kubernetes \
  --cluster=kubernetes \
  --user=admin

kubectl config use-context kubernetes
2.9.4 The kubectl config file
# The kubeconfig file is located at:

/root/.kube
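To confirm the generated config is being picked up, a quick check (output will vary by environment):
# Show the config for the current context and verify connectivity to the API server
kubectl config view --minify
kubectl cluster-info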
2.10 Deploy the Kubernetes master node
2.10.1 The master components on the master node
The master needs three components: kube-apiserver, kube-scheduler, and kube-controller-manager. kube-scheduler decides which node each pod is placed on, i.e. resource scheduling. kube-controller-manager runs the control loops for the deployment controller, replication controller, endpoints controller, namespace controller, serviceaccounts controller and so on, and interacts with kube-apiserver.
2.10.2 Install the master node components
# Download the release from GitHub

cd /tmp

wget

tar -xzvf kubernetes-server-linux-amd64.tar.gz

cd kubernetes

cp -r server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubectl,kube-proxy,kubelet} /usr/local/bin/

2.10.3 Create the kubernetes certificate
cd /opt/ssl

vi kubernetes-csr.json

{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "192.168.100.71",
    "10.254.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "ShenZhen",
      "L": "ShenZhen",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

# In the hosts field: 127.0.0.1 is the local host; 192.168.100.71 is the master IP (list every master if there are several); 10.254.0.1 is the kubernetes SVC IP, normally the first IP of the service network. Once the cluster is up, you can see it with kubectl get svc.
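Once the control plane is up, the SVC IP mentioned above can be confirmed like this (the output shown is only illustrative):
kubectl get svc kubernetes
# NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
# kubernetes   ClusterIP   10.254.0.1   <none>        443/TCP   1d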
2.10.4 Generate the kubernetes certificate and private key
cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem \
  -ca-key=/etc/kubernetes/ssl/ca-key.pem \
  -config=/opt/ssl/config.json \
  -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes

# Check what was generated

[root@k8s-master-25 ssl]# ls -l kubernetes*
kubernetes.csr
kubernetes-key.pem
kubernetes.pem
kubernetes-csr.json

# Copy into the directory
cp -r kubernetes*.pem /etc/kubernetes/ssl/

scp -r kubernetes*.pem 192.168.90.6:/etc/kubernetes/ssl/

scp -r kubernetes*.pem 192.168.90.7:/etc/kubernetes/ssl/
2.10.5 Configure kube-apiserver
When kubelet first starts, it sends a TLS bootstrapping request to kube-apiserver, which checks that the token in the request matches the one it was configured with; if they match, it automatically issues a certificate and key for the kubelet.
# Generate a token

[root@k8s-master ssl]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
d59a702004f33c659640bf8dd2717b64      # record this value

# Create the token.csv file

cd /opt/ssl

vi token.csv

d59a702004f33c659640bf8dd2717b64,kubelet-bootstrap,10001,"system:kubelet-bootstrap"

# Copy

cp token.csv /etc/kubernetes/

scp token.csv 192.168.90.6:/etc/kubernetes/

scp token.csv 192.168.90.7:/etc/kubernetes/
2.10.5.1 Create the kube-apiserver.service file
# New in 1.8 (Node): --authorization-mode=Node,RBAC
# Custom systemd service files usually live under /etc/systemd/system/
# Use each machine's own local IP for the addresses below

vi /etc/systemd/system/kube-apiserver.service

[Unit]
Description=Kubernetes API Server
Documentation=
After=network.target

[Service]
User=root
ExecStart=/usr/local/bin/kube-apiserver \
  --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
  --advertise-address=192.168.100.71 \
  --allow-privileged=true \
  --apiserver-count=3 \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/var/lib/audit.log \
  --authorization-mode=Node,RBAC \
  --bind-address=192.168.100.71 \
  --client-ca-file=/etc/kubernetes/ssl/ca.pem \
  --enable-swagger-ui=true \
  --etcd-cafile=/etc/kubernetes/ssl/ca.pem \
  --etcd-certfile=/etc/kubernetes/ssl/etcd.pem \
  --etcd-keyfile=/etc/kubernetes/ssl/etcd-key.pem \
  --etcd-servers= \
  --event-ttl=1h \
  --kubelet-https=true \
  --insecure-bind-address=192.168.100.71 \
  --runtime-config=rbac.authorization.k8s.io/v1alpha1 \
  --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --service-cluster-ip-range=10.254.0.0/16 \
  --service-node-port-range=30000-32000 \
  --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  --enable-bootstrap-token-auth \
  --token-auth-file=/etc/kubernetes/token.csv \
  --v=2
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

# Note --service-node-port-range=30000-32000
# This is the port range used when exposing services externally (NodePort); randomly assigned ports come from this range, and explicitly chosen ports must also fall inside it.
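As an illustration of that range (the deployment name below is hypothetical), any service exposed as a NodePort will be given a port inside 30000-32000:
# Expose a hypothetical deployment and check which NodePort was assigned
kubectl expose deployment my-app --port=80 --type=NodePort
kubectl get svc my-app -o wide   # the NodePort will fall within 30000-32000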
2.10.5.2 Start kube-apiserver
systemctl daemon-reload

systemctl enable kube-apiserver

systemctl start kube-apiserver

systemctl status kube-apiserver
2.10.6 Configure kube-controller-manager
Set master to each machine's own local IP
2.10.6.1 Create the kube-controller-manager.service file
vi /etc/systemd/system/kube-controller-manager.service

[Unit]
Description=Kubernetes Controller Manager
Documentation=

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
  --address=127.0.0.1 \
  --master= \
  --allocate-node-cidrs=true \
  --service-cluster-ip-range=10.254.0.0/16 \
  --cluster-cidr=10.233.0.0/16 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
  --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --root-ca-file=/etc/kubernetes/ssl/ca.pem \
  --leader-elect=true \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
2.10.6.2 Start kube-controller-manager
systemctl daemon-reload

systemctl enable kube-controller-manager

systemctl start kube-controller-manager

systemctl status kube-controller-manager
2.10.7 Configure kube-scheduler
Set master to each machine's own local IP
2.10.7.1 Create the kube-scheduler.service file
vi /etc/systemd/system/kube-scheduler.service

[Unit]
Description=Kubernetes Scheduler
Documentation=

[Service]
ExecStart=/usr/local/bin/kube-scheduler \
  --address=127.0.0.1 \
  --master= \
  --leader-elect=true \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
2.10.7.2 Start kube-scheduler
systemctl daemon-reload

systemctl enable kube-scheduler

systemctl start kube-scheduler

systemctl status kube-scheduler
2.10.8 Verify the master node
[root@k8s-master ~]# kubectl get componentstatuses
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health": "true"}
etcd-2 Healthy {"health": "true"}
etcd-1 Healthy {"health": "true"}
2.10.9 Deploy the node components on the master node
The node side needs these components: docker, calico, kubectl, kubelet, and kube-proxy.
2.10.10 Configure kubelet
When kubelet starts, it sends a TLS bootstrapping request to kube-apiserver. The kubelet-bootstrap user from the bootstrap token file must first be bound to the system:node-bootstrapper role; only then may kubelet create certificate signing requests (certificatesigningrequests).
2.10.10.1 Create the certificate-request role binding first
# user is the user configured in the token.csv file on the master
# This only needs to be created once

kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap

2.10.10.2 Create the kubelet kubeconfig file
Set server to the master's own IP
# Configure the cluster

kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server= \
  --kubeconfig=bootstrap.kubeconfig

# Configure client credentials

kubectl config set-credentials kubelet-bootstrap \
  --token=d59a702004f33c659640bf8dd2717b64 \
  --kubeconfig=bootstrap.kubeconfig

# Configure the context

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# Use the context by default
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

# Move the generated bootstrap.kubeconfig file into place

mv bootstrap.kubeconfig /etc/kubernetes/
2.10.10.3 Create the kubelet.service file
# Create the kubelet working directory

# Set the addresses to the node's own IP

mkdir /var/lib/kubelet

vi /etc/systemd/system/kubelet.service

[Unit]
Description=Kubernetes Kubelet
Documentation=
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/local/bin/kubelet \
  --address=192.168.100.71 \
  --hostname-override=192.168.100.71 \
  --pod-infra-container-image=jicki/pause-amd64:3.0 \
  --experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
  --cert-dir=/etc/kubernetes/ssl \
  --cluster_dns=10.254.0.2 \
  --cluster_domain=doone.com. \
  --hairpin-mode promiscuous-bridge \
  --allow-privileged=true \
  --fail-swap-on=false \
  --serialize-image-pulls=false \
  --logtostderr=true \
  --max-pods=512 \
  --v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
# Notes on the configuration above:
192.168.100.71 is the local IP
10.254.0.2 is the pre-allocated DNS address
cluster.local. is the kubernetes cluster domain
jicki/pause-amd64:3.0 is the pod base image, i.e. gcr.io/google_containers/pause-amd64:3.0 from gcr; pulling it and pushing it to your own registry makes things faster.
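A minimal sketch of mirroring that pause image into a private registry (the registry address below is hypothetical):
# Pull the public image, re-tag it for a private registry, push it,
# and then point --pod-infra-container-image at the new name
docker pull jicki/pause-amd64:3.0
docker tag jicki/pause-amd64:3.0 registry.example.com/library/pause-amd64:3.0
docker push registry.example.com/library/pause-amd64:3.0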
2.10.10.4 Start kubelet
systemctl daemon-reload

systemctl enable kubelet

systemctl start kubelet

systemctl status kubelet
# If it fails, use
journalctl -f -t kubelet and journalctl -u kubelet to troubleshoot

2.10.10.5 Approve the TLS certificate request
# List the CSR names

[root@k8s-master]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-pf-Bb5Iqx6ccvVA67gLVT-G4Zl3Zl5FPUZS4d7V6rk4 1h kubelet-bootstrap Pending

# Approve the request

[root@k8s-master]# kubectl certificate approve node-csr-pf-Bb5Iqx6ccvVA67gLVT-G4Zl3Zl5FPUZS4d7V6rk4
certificatesigningrequest “node-csr-pf-Bb5Iqx6ccvVA67gLVT-G4Zl3Zl5FPUZS4d7V6rk4” approved
[root@k8s-master]#
2.10.10.6 Verify the nodes
[root@k8s-master]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
192.168.100.71 Ready 22s v1.8.3

# After approval, the kubeconfig file and keys are generated automatically

# kubeconfig file

ls /etc/kubernetes/kubelet.kubeconfig
/etc/kubernetes/kubelet.kubeconfig
# key files

ls /etc/kubernetes/ssl/kubelet*
/etc/kubernetes/ssl/kubelet-client.crt /etc/kubernetes/ssl/kubelet.crt
/etc/kubernetes/ssl/kubelet-client.key /etc/kubernetes/ssl/kubelet.key
2.10.11 Configure kube-proxy
2.10.11.1 Create the kube-proxy certificate
# The node machines do not have cfssl installed,
# so go back to the master, create the certificate there, and copy it over

[root@k8s-master ~]# cd /opt/ssl

vi kube-proxy-csr.json

{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "ShenZhen",
      "L": "ShenZhen",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
2.10.11.2 Generate the kube-proxy certificate and private key
cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem \
  -ca-key=/etc/kubernetes/ssl/ca-key.pem \
  -config=/opt/ssl/config.json \
  -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

# Check what was generated
ls kube-proxy*
kube-proxy.csr kube-proxy-csr.json kube-proxy-key.pem kube-proxy.pem

# Copy into the directory
cp kube-proxy*.pem /etc/kubernetes/ssl/

scp kube-proxy*.pem 192.168.90.6:/etc/kubernetes/ssl/

scp kube-proxy*.pem 192.168.90.7:/etc/kubernetes/ssl/
2.10.11.3 Create the kube-proxy kubeconfig file
Set server to each machine's own IP
# Configure the cluster

kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server= \
  --kubeconfig=kube-proxy.kubeconfig

# Configure client credentials

kubectl config set-credentials kube-proxy \
  --client-certificate=/etc/kubernetes/ssl/kube-proxy.pem \
  --client-key=/etc/kubernetes/ssl/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

# Configure the context

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

# Use the context by default
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

# Move it into place
mv kube-proxy.kubeconfig /etc/kubernetes/
2.10.11.4 Create the kube-proxy.service file
Use each machine's own IP
# Create the kube-proxy working directory

mkdir -p /var/lib/kube-proxy

vi /etc/systemd/system/kube-proxy.service

[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=
After=network.target

[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/local/bin/kube-proxy \
  --bind-address=192.168.100.71 \
  --hostname-override=192.168.100.71 \
  --cluster-cidr=10.254.0.0/16 \
  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig \
  --logtostderr=true \
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
2.10.11.5 Start kube-proxy
systemctl daemon-reload

systemctl enable kube-proxy

systemctl start kube-proxy

systemctl status kube-proxy

# If it fails, use
journalctl -f -t kube-proxy and journalctl -u kube-proxy to troubleshoot
2.11 Deploy the Kubernetes worker nodes
The worker nodes (192.168.90.6, 192.168.90.7) use Nginx to load-balance the API servers for master HA.
# Apart from the API server, the master components elect a leader through etcd, so the API server itself gets no extra handling. Instead, each node runs an nginx that reverse-proxies all API servers; kubelet and kube-proxy on the node connect to the local nginx proxy port, and when nginx detects that a backend is unreachable it drops the failed API server, which gives us API server HA.

2.11.1 Install the node components
# Download the release from GitHub

cd /tmp

wget

tar -xzvf kubernetes-server-linux-amd64.tar.gz

cd kubernetes

cp -r server/bin/{kube-proxy,kubelet} /usr/local/bin/

# ALL node

mkdir -p /etc/kubernetes/ssl/

scp ca.pem kube-proxy.pem kube-proxy-key.pem 192.168.90.6:/etc/kubernetes/ssl/
scp ca.pem kube-proxy.pem kube-proxy-key.pem 192.168.90.7:/etc/kubernetes/ssl/
2.11.2 Create the kubelet kubeconfig file
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server= \
  --kubeconfig=bootstrap.kubeconfig

# Configure client credentials

kubectl config set-credentials kubelet-bootstrap \
  --token=d59a702004f33c659640bf8dd2717b64 \
  --kubeconfig=bootstrap.kubeconfig

# Configure the context

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# Use the context by default
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

# Move the generated bootstrap.kubeconfig file into place

mv bootstrap.kubeconfig /etc/kubernetes/
2.11.3 Create the kubelet.service file
Same as on the master node
2.11.4 Start kubelet
Same as on the master node
2.11.5 Create the kube-proxy kubeconfig file
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server= \
  --kubeconfig=kube-proxy.kubeconfig

# Configure client credentials

kubectl config set-credentials kube-proxy \
  --client-certificate=/etc/kubernetes/ssl/kube-proxy.pem \
  --client-key=/etc/kubernetes/ssl/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

# Configure the context

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

# Use the context by default
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

# Move it into place
mv kube-proxy.kubeconfig /etc/kubernetes/
2.11.6 Create the kube-proxy.service file
Same as on the master node
2.11.7 Start kube-proxy
Same as on the master node
2.12 Create the Nginx proxy
Every node must run an Nginx proxy. Note: when the master also acts as a node, the nginx-proxy is not needed there.
# Create the config directory
mkdir -p /etc/nginx

# Write the proxy config

cat << EOF > /etc/nginx/nginx.conf
error_log stderr notice;

worker_processes auto;
events {
multi_accept on;
use epoll;
worker_connections 1024;
}

stream {
upstream kube_apiserver {
least_conn;
server 192.168.100.71:6443;
}

server {
listen 0.0.0.0:6443;
proxy_pass kube_apiserver;
proxy_timeout 10m;
proxy_connect_timeout 1s;
}
}
EOF

# Run Nginx as a docker container, started and managed by systemd

cat << EOF > /etc/systemd/system/nginx-proxy.service

[Unit]
Description=kubernetes apiserver docker wrapper
Wants=docker.socket
After=docker.service

[Service]
User=root
PermissionsStartOnly=true
ExecStart=/usr/bin/docker run -p 127.0.0.1:6443:6443 \\
  -v /etc/nginx:/etc/nginx \\
  --name nginx-proxy \\
  --net=host \\
  --restart=on-failure:5 \\
  --memory=512M \\
  nginx:1.13.5-alpine
ExecStartPre=-/usr/bin/docker rm -f nginx-proxy
ExecStop=/usr/bin/docker stop nginx-proxy
Restart=always
RestartSec=15s
TimeoutStartSec=30s

[Install]
WantedBy=multi-user.target
EOF

# Start Nginx

systemctl daemon-reload
systemctl start nginx-proxy
systemctl enable nginx-proxy
systemctl status nginx-proxy

# Restart kubelet and kube-proxy on the node

systemctl restart kubelet
systemctl status kubelet

systemctl restart kube-proxy
systemctl status kube-proxy
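A quick sanity check that the local proxy is really fronting the API port on each node (purely illustrative):
# nginx-proxy should be listening on 6443 on every node
ss -tlnp | grep 6443
docker ps --filter name=nginx-proxy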
2.13 Approve the TLS requests on the master
# List the CSR names

[root@k8s-master]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-pf-Bb5Iqx6ccvVA67gLVT-G4Zl3Zl5FPUZS4d7V6rk4 1h kubelet-bootstrap Pending

# Approve the request

[root@k8s-master]# kubectl certificate approve NAME
2.14 Deploy the Calico network
2.14.1 Modify kubelet.service
On every node
vi /etc/systemd/system/kubelet.service

# Add the following option

--network-plugin=cni \

# Reload the configuration
systemctl daemon-reload
systemctl restart kubelet.service
systemctl status kubelet.service
2.14.2 Get the Calico configuration
Calico is still deployed in a "hybrid" way: systemd controls the calico node agent, while the CNI pieces are installed by a kubernetes DaemonSet.
# Fetch calico.yaml
$ export CALICO_CONF_URL="
$ wget "${CALICO_CONF_URL}/calico-controller.yml.conf" -O calico-controller.yml

$ kubectl apply -f calico-controller.yml

$ kubectl -n kube-system get po -l k8s-app=calico-policy
NAME                                        READY   STATUS    RESTARTS   AGE
calico-policy-controller-5ff8b4549d-tctmm   0/1     Pending   0          5s
The etcd cluster IP addresses inside the yaml file must be updated.
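For example, to locate the lines that need editing (the exact key name depends on the downloaded manifest; etcd_endpoints / ETCD_ENDPOINTS are the usual candidates):
# Find the etcd endpoint settings that must point at this cluster's etcd servers
grep -niE 'etcd_endpoints|etcd-endpoints' calico-controller.yml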
2.14.3 Download Calico on every node
$ export CALICO_URL="
$ wget -N -P /opt/cni/bin ${CALICO_URL}/calico
$ wget -N -P /opt/cni/bin ${CALICO_URL}/calico-ipam
$ chmod +x /opt/cni/bin/calico /opt/cni/bin/calico-ipam
2.14.4 Download the CNI plugin configuration file on every node
$ mkdir -p /etc/cni/net.d
$ export CALICO_CONF_URL="
$ wget "${CALICO_CONF_URL}/10-calico.conf" -O /etc/cni/net.d/10-calico.conf

vi 10-calico.conf

{
  "name": "calico-k8s-network",
  "cniVersion": "0.1.0",
  "type": "calico",
  "etcd_endpoints": "
  "etcd_ca_cert_file": "/etc/kubernetes/ssl/ca.pem",
  "etcd_cert_file": "/etc/kubernetes/ssl/etcd.pem",
  "etcd_key_file": "/etc/kubernetes/ssl/etcd-key.pem",
  "log_level": "info",
  "ipam": {
    "type": "calico-ipam"
  },
  "policy": {
    "type": "k8s"
  },
  "kubernetes": {
    "kubeconfig": "/etc/kubernetes/kubelet.kubeconfig"
  }
}
2.14.5 Create the calico-node.service file
The Calico Node parts of calico.yaml were commented out in the previous step; to avoid problems with automatic IP detection, calico node is run under systemd instead. The systemd service configuration is shown below; every node must install the calico-node service, and the IP must be adjusted on each node.
cat > /usr/lib/systemd/system/calico-node.service <<EOF

svc/kube-dns                ClusterIP   10.254.0.2     53/UDP,53/TCP   3d
svc/kubernetes-dashboard    ClusterIP   10.254.4.173   80/TCP          2d
2.16 Deploy Ingress
Kubernetes currently exposes services in only three ways: LoadBalancer Service, NodePort Service, and Ingress. What is Ingress? Ingress uses a load balancer such as Nginx or HAProxy to expose Kubernetes services.
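As a minimal illustration (the host name and backend service below are hypothetical; the API version matches the 1.8-era cluster built in this manual), an Ingress rule simply maps a host to an in-cluster service:
# A sketch of an Ingress rule routing a host name to an in-cluster service
cat <<EOF | kubectl apply -f -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: demo-ingress
spec:
  rules:
  - host: demo.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: demo-service
          servicePort: 80
EOF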
2.16.1 Configure the scheduling node
# ingress can be run in several ways: 1. as a deployment, freely scheduled via replicas
# 2. as a daemonset, scheduled globally onto every node

# With the deployment approach, the controller has to be constrained to specific nodes, so the nodes must be given labels

# Current nodes:
[root@k8s-master dashboard]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
192.168.100.71 Ready 9d v1.8.3
192.168.90.6 Ready 9d v1.8.3
192.168.90.7 Ready 9d v1.8.3

# Label the 192.168.100.71 node
kubectl label nodes 192.168.100.71 ingress=proxy

# After labelling
[root@k8s-master dashboard]# kubectl get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
192.168.100.71 Ready 9d v1.8.3 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,ingress=proxy,kubernetes.io/hostname=192.168.100.71
192.168.90.6 Ready 9d v1.8.3 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=192.168.90.6
192.168.90.7 Ready 9d v1.8.3 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=192.168.90.7
2.16.2 Download the Ingress images
# Official images
gcr.io/google_containers/defaultbackend:1.0
gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.17

# Mirrors hosted in China
jicki/defaultbackend:1.0
jicki/nginx-ingress-controller:0.9.0-beta.17
2.16.3 Download the yaml files
# Deploy the Nginx backend; it forwards requests for unknown domains to a designated page.
curl -O

# Deploy the Ingress RBAC objects
curl -O

# Deploy the Ingress Controller component
curl -O
2.16.4 Ingress yaml file templates
#default-backend.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: default-http-backend
labels:
app: default-http-backend
namespace: kube-system
spec:
replicas: 1
template:
metadata:
labels:
app: default-http-backend
spec:
terminationGracePeriodSeconds: 60
containers:
– name: default-http-backend
# Any image is permissable as long as:
# 1. It serves a 404 page at /
# 2. It serves 200 on a /healthz endpoint
image: jicki/defaultbackend:1.4
livenessProbe:
httpGet:
path: /healthz
port: 8080
scheme: HTTP
initialDelaySeconds: 30
timeoutSeconds: 5
ports:
– containerPort: 8080
resources:
limits:
cpu: 10m
memory: 20Mi
requests:
cpu: 10m
memory: 20Mi

apiVersion: v1
kind: Service
metadata:
name: default-http-backend
namespace: kube-system
labels:
app: default-http-backend
spec:
ports:
– port: 80
targetPort: 8080
selector:
app: default-http-backend

#rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: nginx-ingress-serviceaccount
namespace: kube-system

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: nginx-ingress-clusterrole
rules:
– apiGroups:
– “”
resources:
– configmaps
– endpoints
– nodes
– pods
– secrets
verbs:
– list
– watch
– apiGroups:
– “”
resources:
– nodes
verbs:
– get
– apiGroups:
– “”
resources:
– services
verbs:
– get
– list
– watch
– apiGroups:
– “extensions”
resources:
– ingresses
verbs:
– get
– list
– watch
– apiGroups:
– “”
resources:
– events
verbs:
– create
– patch
– apiGroups:
– “extensions”
resources:
– ingresses/status
verbs:
– update

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
name: nginx-ingress-role
namespace: kube-system
rules:
– apiGroups:
– “”
resources:
– configmaps
– pods
– secrets
– namespaces
verbs:
– get
– apiGroups:
– “”
resources:
– configmaps
resourceNames:
# Defaults to “
# Here: “
# This has to be adapted if you change either parameter
# when launching the nginx-ingress-controller.
– “ingress-controller-leader-nginx”
verbs:
– get
– update
– apiGroups:
– “”
resources:
– configmaps
verbs:
– create
– apiGroups:
– “”
resources:
– endpoints
verbs:
– get

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
name: nginx-ingress-role-nisa-binding
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: nginx-ingress-role
subjects:
– kind: ServiceAccount
name: nginx-ingress-serviceaccount
namespace: kube-system

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: nginx-ingress-clusterrole-nisa-binding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: nginx-ingress-clusterrole
subjects:
– kind: ServiceAccount
name: nginx-ingress-serviceaccount
namespace: kube-system

#with-rbac.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: nginx-ingress-controller
namespace: kube-system
spec:
replicas: 1
selector:
matchLabels:
app: ingress-nginx
template:
metadata:
labels:
app: ingress-nginx
annotations:
prometheus.io/port: ‘10254’
prometheus.io/scrape: ‘true’
spec:
hostNetwork: true
serviceAccountName: nginx-ingress-serviceaccount
nodeSelector:
ingress: proxy
containers:
– name: nginx-ingress-controller
image: jicki/nginx-ingress-controller:0.9.0-beta.17
args:
– /nginx-ingress-controller
– –default-backend-service=$(POD_NAMESPACE)/default-http-backend
– –apiserver-host=
# – –configmap=$(POD_NAMESPACE)/nginx-configuration
# – –tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
# – –udp-services-configmap=$(POD_NAMESPACE)/udp-services
env:
– name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
– name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
– name: KUBERNETES_MASTER
value:
ports:
– name: http
containerPort: 80
– name: https
containerPort: 443
livenessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
2.16.5 Apply the yaml files
[root@k8s-master ingress]# kubectl apply -f default-backend.yaml
deployment "default-http-backend" created
service "default-http-backend" created

[root@k8s-master ingress]# kubectl apply -f rbac.yml
namespace "nginx-ingress" created
serviceaccount "nginx-ingress-serviceaccount" created
clusterrole "nginx-ingress-clusterrole" created
role "nginx-ingress-role" created
rolebinding "nginx-ingress-role-nisa-binding" created
clusterrolebinding "nginx-ingress-clusterrole-nisa-binding" created

[root@k8s-master ingress]# kubectl apply -f with-rbac.yaml
deployment "nginx-ingress-controller" created
2.16.6 Check the ingress services
[root@k8s-master ingress]# kubectl get svc -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default-http-backend ClusterIP 10.254.194.21 80/TCP 2d
kube-dns ClusterIP 10.254.0.2 53/UDP,53/TCP 3d
kubernetes-dashboard ClusterIP 10.254.4.173 80/TCP 2d

[root@k8s-master ingress]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
calico-policy-controller-6dfdc6c556-qp29z 1/1 Running 3 6d
default-http-backend-7f47b7d69b-fcwdw 1/1 Running 0 2d
kube-dns-fb8bf5848-jfzrs 3/3 Running 0 3d
kubernetes-dashboard-c8f5ff7f8-f9pfp 1/1 Running 0 2d
nginx-ingress-controller-5759c8464f-hhkkz 1/1 Running 0 7h
2.17 Deploy the Dashboard
2.17.1 Download the dashboard image
# Official image
gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.3

# Mirror hosted in China
jicki/kubernetes-dashboard-amd64:v1.6.3
2.17.2 Download the yaml files
curl -O

curl -O

# RBAC is enabled, so an RBAC binding has to be created here

vi dashboard-rbac.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard
  namespace: kube-system

---

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  name: dashboard
subjects:
  - kind: ServiceAccount
    name: dashboard
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
2.17.3 Dashboard yaml file templates
#dashboard-controller.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: kubernetes-dashboard
namespace: kube-system
labels:
k8s-app: kubernetes-dashboard
kubernetes.io/cluster-service: “true”
addonmanager.kubernetes.io/mode: Reconcile
spec:
selector:
matchLabels:
k8s-app: kubernetes-dashboard
template:
metadata:
labels:
k8s-app: kubernetes-dashboard
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ”
spec:
serviceAccountName: dashboard
containers:
– name: kubernetes-dashboard
image: jicki/kubernetes-dashboard-amd64:v1.6.3
resources:
# keep request = limit to keep this container in guaranteed class
limits:
cpu: 100m
memory: 300Mi
requests:
cpu: 100m
memory: 100Mi
ports:
– containerPort: 9090
livenessProbe:
httpGet:
path: /
port: 9090
initialDelaySeconds: 30
timeoutSeconds: 30
tolerations:
– key: “CriticalAddonsOnly”
operator: “Exists”

#dashboard-service.yaml
apiVersion: v1
kind: Service
metadata:
name: kubernetes-dashboard
namespace: kube-system
labels:
k8s-app: kubernetes-dashboard
kubernetes.io/cluster-service: “true”
addonmanager.kubernetes.io/mode: Reconcile
spec:
selector:
k8s-app: kubernetes-dashboard
ports:
– port: 80
targetPort: 9090
2.17.4 Apply the yaml files
[root@k8s-master dashboard]# kubectl apply -f .
deployment "kubernetes-dashboard" created
serviceaccount "dashboard" created
clusterrolebinding "dashboard" created
service "kubernetes-dashboard" created
2.17.5 Check the Dashboard service
[root@k8s-master dashboard]# kubectl get svc -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default-http-backend ClusterIP 10.254.194.21 80/TCP 2d
kube-dns ClusterIP 10.254.0.2 53/UDP,53/TCP 3d
kubernetes-dashboard ClusterIP 10.254.4.173 80/TCP 2d

[root@k8s-master dashboard]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
calico-policy-controller-6dfdc6c556-qp29z 1/1 Running 3 6d
default-http-backend-7f47b7d69b-fcwdw 1/1 Running 0 2d
kube-dns-fb8bf5848-jfzrs 3/3 Running 0 3d
kubernetes-dashboard-c8f5ff7f8-f9pfp 1/1 Running 0 2d
nginx-ingress-controller-5759c8464f-hhkkz 1/1 Running 0 7h
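Since the dashboard Service is ClusterIP-only, one simple way to reach it from a workstation is through kubectl proxy (the URL below uses the generic service-proxy path; adjust the names if yours differ):
# Tunnel the API server locally, then open the dashboard through the service proxy
kubectl proxy --port=8001
# http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy/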

A Go SDK for the Feishu / Lark open platform, auto-generated from the docs, covering nearly 500 APIs

The project is at:
It follows the conventions of the gitlab SDK, with uniform parameter and return-value styles; it supports nearly 500 APIs and handles 50 callback event types. Stars are welcome.
I started the project 5 months ago. Growing on its own, it now has 70 stars and ranks first under the feishu-sdk topic ( ), so I'm posting it on V2EX for anyone who needs it.
Feishu / Lark has been developing quickly lately, and the number of open-platform APIs keeps growing. I maintained an SDK by hand for a while, then realized manual maintenance was not realistic, so I switched to having code read the documentation and generate the SDK automatically; that is this project.
Because it is generated automatically, it supports almost every API: contacts, messages, groups, calendar, documents, and so on.
Below is a calendar-creation scenario showing how simple the SDK is to use:
cli := lark.New(lark.WithAppCredential("", "")) // app id and secret omitted in the original post

resp, _, err := cli.Calendar.CreateCalendar(ctx, &lark.CreateCalendarReq{
    Summary: ptr.String(""), // calendar title omitted in the original post
})
fmt.Println(resp, err)