Creating, Configuring, and Installing a Kubernetes Cluster with KubeKey and KubeSphere

Contents
1. Prerequisites
2. Creating the cluster with KubeKey
3. Installing the NFS file system
4. Installing MySQL via the KubeSphere web console
  1) Create a workspace
  2) Create a project
  3) Create the MySQL workload
    (1) Add the MySQL configuration
    (2) Create the workload; configure the MySQL image and port number
    (3) Add environment variables
    (4) Add a volume template to mount MySQL data on NFS
  4) Create a service to expose MySQL externally
  5) Test the MySQL connection
5. Installing the Metrics cluster monitoring component (optional)

Unless otherwise noted, every step below must be executed on every node.
1. Prerequisites
1. Three or more compatible Linux hosts (CentOS 7.9 recommended), each with at least 2 GB of RAM and 2 CPU cores.
2. Full network connectivity between all hosts (either a public or a private network works).
3. No duplicate hostnames, MAC addresses, or product_uuid values among the nodes.
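To verify the third prerequisite, the following standard Linux commands (my addition, not part of the original steps) print each node's MAC addresses and product_uuid so you can compare them across nodes:

# list network interfaces with their MAC addresses
ip link

# print this machine's product_uuid
cat /sys/class/dmi/id/product_uuid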
Note: run each of the following three hostname commands on its corresponding host.
# set each machine's own hostname
hostnamectl set-hostname master
hostnamectl set-hostname node1
hostnamectl set-hostname node2

# check the hostname
hostname
2. Creating the Cluster with KubeKey
1) Download KubeKey
export KKZONE=cn

# download KubeKey (official download source)
curl -sfL https://get-kk.kubesphere.io | VERSION=v1.1.1 sh -

chmod +x kk
2) Create the cluster configuration YAML file
./kk create config --with-kubernetes v1.20.4 --with-kubesphere v3.1.1
When this finishes, it generates a configuration file named config-sample.yaml. Open it and change the node hostnames, IPs, usernames, and passwords to your own values:
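For orientation, the part you need to edit looks roughly like this in KubeKey v1.x (a sketch; every name, IP, user, and password below is a placeholder for your own values):

spec:
  hosts:
  - {name: master, address: 192.168.0.2, internalAddress: 192.168.0.2, user: root, password: yourpassword}
  - {name: node1, address: 192.168.0.3, internalAddress: 192.168.0.3, user: root, password: yourpassword}
  - {name: node2, address: 192.168.0.4, internalAddress: 192.168.0.4, user: root, password: yourpassword}
  roleGroups:
    etcd:
    - master
    master:
    - master
    worker:
    - node1
    - node2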
3) Create the cluster
./kk create cluster -f config-sample.yaml
If it complains that conntrack is missing, run yum install -y conntrack.
If the pre-flight checks pass, type yes to continue.
4) Check the installation progress
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
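While the installer runs, it can also help to watch the cluster's pods come up in another terminal (a standard check, not part of the original steps):

kubectl get pods --all-namespaces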
Once the installation completes, access the KubeSphere console (by default at http://<any-node-ip>:30880, initial account admin / P@88w0rd).
3. Installing the NFS File System
NFS stands for Network File System. Its defining feature is that it lets different machines and different operating systems share files over the network. An NFS server lets hosts mount the directories it shares into their local file systems; from the local system's point of view, the remote directory behaves just like a local disk partition, which is very convenient. We can use this to persist the data or configuration of stateful applications, such as MySQL's data files or configuration files.
1) Install NFS
# run on every machine
yum install -y nfs-utils

# run all of the following commands on the master only
echo "/nfs/data/ *(insecure,rw,sync,no_root_squash)" > /etc/exports

# create the shared directory and start the nfs services
mkdir -p /nfs/data

systemctl enable rpcbind
systemctl enable nfs-server
systemctl start rpcbind
systemctl start nfs-server

# apply the exports configuration
exportfs -r

# check that the configuration took effect
exportfs
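As an optional check not covered above, you can confirm the export from one of the worker nodes (replace 172.31.0.2 with your master's IP):

# list the directories the NFS server is exporting
showmount -e 172.31.0.2

# optionally test-mount the share
mkdir -p /mnt/nfs-test
mount -t nfs 172.31.0.2:/nfs/data /mnt/nfs-test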

2) Configure the default storage class. Save the following as storage.yaml, and change the two IPs in the middle (172.31.0.2) to your own master's IP:
## creates a StorageClass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "true"  ## whether to archive a PV's contents when the PV is deleted
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/nfs-subdir-external-provisioner:v4.0.2
          # resources:
          #   limits:
          #     cpu: 10m
          #   requests:
          #     cpu: 10m
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 172.31.0.2  ## your own NFS server address
            - name: NFS_PATH
              value: /nfs/data   ## the directory shared by the NFS server
      volumes:
        - name: nfs-client-root
          nfs:
            server: 172.31.0.2   ## your own NFS server address
            path: /nfs/data
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
kubectl apply -f storage.yaml
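A quick sanity check that the storage class exists and the provisioner is running (standard kubectl commands, not from the original):

# nfs-storage should appear and be marked "(default)"
kubectl get storageclass

# the provisioner pod should reach the Running state
kubectl get pods -l app=nfs-client-provisioner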
4. Installing MySQL via the KubeSphere Web Console
1) Create a workspace
Log in to KubeSphere, click Workspaces, click Create, and enter a workspace name.
2) Create a project
Click Workbench in the top-left corner to leave the current view, click Workspaces, click the workspace you just created (test), click Projects, click Create, enter a project name, and click Create. When it has been created, click the project name.

3) Create the MySQL workload
Let's analyze this first. The smallest unit in Kubernetes is the Pod, and a Pod is a group of Docker containers, so installing MySQL here amounts to installing MySQL with Docker. Referring back to Part 2 (Docker commands), section 10 (installing MySQL with Docker), the Docker command is:
docker run -p 3306:3306 --name mysql-01 \
-v /mydata/mysql/log:/var/log/mysql \
-v /mydata/mysql/data:/var/lib/mysql \
-v /mydata/mysql/conf:/etc/mysql/conf.d \
-e MYSQL_ROOT_PASSWORD=123456 \
--restart=always \
-d mysql:5.7

We need to handle three things:
1. Mounting the configuration file to the host: add a configuration (ConfigMap) in the Configuration Center.
2. The MYSQL_ROOT_PASSWORD startup parameter, which sets MySQL's login password: add an environment variable.
3. Mounting MySQL's data to the host: add a PVC mounted via NFS.
Matching the items above:
(1) Add the MySQL configuration
Click Configuration Center => Configurations => Create. Enter a name (mysql-conf, which is referenced later), click Next, enter a key and, as its value, the configuration below; click the checkmark, then click Create.
[client]
default-character-set=utf8mb4

[mysql]
default-character-set=utf8mb4

[mysqld]
init_connect='SET collation_connection = utf8mb4_unicode_ci'
init_connect='SET NAMES utf8mb4'
character-set-server=utf8mb4
collation-server=utf8mb4_unicode_ci
skip-character-set-client-handshake
skip-name-resolve

(2) Create the workload; configure the MySQL image and port number
Click Application Workloads, choose StatefulSet, and click Create. Enter a name and click Next. Click Add Container Image, type mysql:5.7, press Enter, wait for the search result to appear, then click Use Default Ports.
(3) Add environment variables
Check Environment Variables and enter MYSQL_ROOT_PASSWORD with the value 123456 (this variable sets MySQL's login password); check the time-zone synchronization option, click the checkmark, then click Next.

(4) Add a volume template to mount MySQL data on NFS
Click Add Volume Template and enter a name for the new volume. For the storage class choose nfs-storage (if this option is missing, see Part 3: installing the NFS file system); for the access mode choose single-node read-write (ReadWriteOnce); enter 5 (Gi) for the capacity; set the mount to read-write with the path /var/lib/mysql; click the checkmark.
Click Mount ConfigMap or Secret, choose mysql-conf, choose read-only, set the path to /etc/mysql/conf.d, and click the checkmark. Click Next, click Next again, then click Create. You can now see that a MySQL deployment has been created.
Click Pods, click the mysql pod, click Events, and wait for the image pull to finish and the container to be created and started.
4) Create a service to expose MySQL externally
Click Services, then click Create.
Choose Specify Workload. Enter a name and click Next. Click Specify Workload, choose StatefulSet, select mysql, and click OK. Enter MySQL's default port 3306, then click Next. Check Internet Access, choose NodePort as the access method, then click Create. You can now see the exposed port number.
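For reference, what the console creates corresponds roughly to a Service like the following (a sketch only; the name and selector labels depend on what you entered in the UI):

apiVersion: v1
kind: Service
metadata:
  name: mysql-external        # hypothetical service name
spec:
  type: NodePort              # exposes a random high port on every node
  selector:
    app: mysql                # must match the StatefulSet's pod labels
  ports:
  - port: 3306                # service port (MySQL default)
    targetPort: 3306          # container port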
5) Test the MySQL connection
You can now connect to MySQL using any node's external IP plus the port exposed above.
Scaling MySQL: to scale up or down, simply click the up/down arrows on the replica count; scaling completes on its own.
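For example, from any machine with a MySQL client installed (<node-ip> and <node-port> are placeholders for a node's external IP and the NodePort shown in the console):

# connect to the exposed MySQL instance
mysql -h <node-ip> -P <node-port> -uroot -p123456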
5. Installing the Metrics Cluster Monitoring Component (Optional)
Metrics Server is the standard cluster-wide aggregator of resource usage data in Kubernetes; it provides Node and Pod resource-utilization metrics for the cluster. Save the following configuration as metrics.yaml.
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --kubelet-insecure-tls
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/metrics-server:v0.4.3
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          periodSeconds: 10
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100
kubectl apply -f metrics.yaml
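Once the metrics-server pod in kube-system is ready, resource metrics become available; a quick check with standard kubectl commands:

# per-node CPU/memory usage
kubectl top nodes

# per-pod usage across all namespaces
kubectl top pods -A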


Running Flink 1.13.2 on the JDK 1.7 bundled with the big-data platform TDH: a workaround
TDH ships with JDK 1.7, but Flink must run on JDK 1.8 or later (build 1.8.0_261 or later is recommended).
The problems
1. Flink jobs submitted to YARN fail with a JDK version error, because they pick up the platform's bundled JDK 1.7 by default.
2. Checkpointing does not work.
How to get Flink off JDK 1.7:
1. Specify the JDK path in the Flink configuration file flink-conf.yaml, replacing YARN's default JDK with one at a fixed path. Note that there must be a space after each colon:
env.java.home: /usr/java/jdk1.8.0_261
containerized.master.env.JAVA_HOME: /usr/java/jdk1.8.0_261
containerized.taskmanager.env.JAVA_HOME: /usr/java/jdk1.8.0_261
2. TDH is deployed as K8S containers, so JDK 1.8.0_261 needs to be uploaded into each of the relevant pods.
3. List the YARN NodeManager pods: kubectl get pods | grep yarn prints each pod's name.
4. Upload the JDK into every pod: kubectl cp jdk1.8.0_261/ hadoop-yarn-nodemanager-xxxxxxxxxxxx:/usr/java
5. Exec into a container to confirm the upload succeeded: kubectl exec -it <pod-name> -n default -- /bin/sh, then cd /usr/java/
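To avoid repeating step 4 for every pod by hand, a small loop along these lines should work (a sketch based on the commands above; adjust the namespace and target path to match your environment):

# copy the JDK into every YARN NodeManager pod in the default namespace
for pod in $(kubectl get pods -n default | grep yarn | awk '{print $1}'); do
  kubectl cp jdk1.8.0_261/ "default/${pod}:/usr/java"
done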

Pushing a static blog to your own server with GitHub Actions

My blog used to live on GitHub Pages, but it was simply too slow.
Later I moved to a hosting service like Vercel. It's free, but the servers are still overseas, so it was only marginally faster.
As it happens, I recently bought a Tencent Cloud lightweight server (2 cores, 4 GB RAM, 8 Mbps).
Over the weekend I worked out how to use GitHub Actions to push the generated static assets straight to that server.
Overall speed is decent: about 30 seconds per deploy.
Best practices for static blog deployment (works for Hugo and Hexo)
The key piece is .github/workflows/main.yml; anyone interested can follow the link above to take a look.
Finally, here is mine: it feels quite fast.
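Since the post's own main.yml is only linked, here is a minimal sketch of such a workflow (assuming a Hugo site and an SSH-reachable server; the secret names SERVER_HOST, SERVER_USER, and SSH_PRIVATE_KEY are illustrative, not from the original):

name: deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # install Hugo and build the site (for Hexo, run "npx hexo generate" instead)
      - uses: peaceiris/actions-hugo@v2
        with:
          hugo-version: 'latest'
      - run: hugo --minify
      # push the generated files in public/ to the server over SSH
      - name: rsync to server
        run: |
          mkdir -p ~/.ssh
          echo "${{ secrets.SSH_PRIVATE_KEY }}" > ~/.ssh/id_rsa
          chmod 600 ~/.ssh/id_rsa
          ssh-keyscan -H "${{ secrets.SERVER_HOST }}" >> ~/.ssh/known_hosts
          rsync -avz --delete public/ "${{ secrets.SERVER_USER }}@${{ secrets.SERVER_HOST }}:/var/www/blog/"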

Multi-tenant system: separate admin and tenant user tables, but shared roles and permissions?

Background: a multi-tenant system in which the admin-staff table and the tenant-user table are separate. The split itself is done, but I got stuck when I reached permissions: admins need roles and permissions, and tenant users do too.
I split them into two tables because their columns are completely different; about the only thing they have in common is an ID. Now I'm wondering whether two tables were necessary at all. I created permission and role tables, but the user_id stored in them has become hopelessly ambiguous between the two user tables.

Modifying the MySQL configuration file of a Docker container

1. When the MySQL container has no external configuration file mounted
1) Find where the container's my.cnf lives on the host, then edit it
[root@node5 mysql]# docker inspect 3257a3b48075 | grep MergedDir
"MergedDir": "/var/lib/docker/overlay2/ee194186161ddc77e19b87269a3c71a3127046d0bdc7ad3be3d2c9b6cbaf1661/merged",
[root@node5 mysql]# cd "/var/lib/docker/overlay2/ee194186161ddc77e19b87269a3c71a3127046d0bdc7ad3be3d2c9b6cbaf1661/merged"
[root@node5 mysql]# cd etc/mysql/
[root@node5 mysql]# vim my.cnf
[root@node5 mysql]# docker restart 3257a3b48075

Here 3257a3b48075 is the container ID, and the path after MergedDir: is where the container's filesystem lives on the host.

2) Copy the file between the host and the container
2.1 First copy the config file out of the container
# docker cp <container-id>:<path-inside-container> <host-path>
docker cp 3257a3b48075:/etc/mysql/my.cnf /home/my.cnf

2.2 Edit it
vim /home/my.cnf
# for example, add a time-zone setting as a quick test
default-time-zone = '+08:00'
2.3 Copy the file back into the container and restart it
# docker cp <host-path> <container-id>:<path-inside-container>
docker cp /home/my.cnf 3257a3b48075:/etc/mysql/my.cnf
docker restart 3257a3b48075
2.4 Check the result
[root@node5 home]# docker exec -it 3257a3b48075 /bin/bash
root@3257a3b48075:/# mysql -uroot -p123456
mysql> show variables like '%time_zone%';
+------------------+--------+
| Variable_name    | Value  |
+------------------+--------+
| system_time_zone | UTC    |
| time_zone        | +08:00 |
+------------------+--------+
2 rows in set (0.01 sec)
2. When the MySQL container mounts an external configuration file. For example:
docker run -d -p 3306:3306 -v /etc/mysql/:/etc/mysql/conf.d/ -v /data/mysql/data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=123456 --name mysql_test mysql:5.7.23
In this case, just edit the external configuration file directly, then restart the container.