Docker container setup and port mapping on an M1 Mac

1. Start with a Mac with an M1 chip (arm64v8 architecture). Note: on Docker Hub you must pick the image variant that matches your CPU architecture, otherwise you will not be able to build your own images and containers successfully.
The M1 chip is arm64 (also called aarch64) and can only run arm64/aarch64 binaries.
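For reference, once Docker is installed you can confirm the architectures involved with a couple of commands (my own addition, not from the original post):
uname -m                                                                   # on the M1 host this prints arm64
docker info --format '{{.Architecture}}'                                   # architecture reported by the Docker daemon, e.g. aarch64
docker image inspect arm64v8/centos:centos7 --format '{{.Architecture}}'   # the pulled image should report arm64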
2. Download Docker Desktop for your operating system (Docker Desktop for Mac and Windows | Docker).
PS: on a Mac you will most likely install it with brew install --cask docker (search results often still say brew cask install docker, but Homebrew has removed the cask subcommand in favour of the --cask flag).
3. Once everything is set up you will run into a few problems later on, so let's state them up front:
How do you keep files on the host in sync with the web directory inside the container? (Goal: sync code in real time and avoid unnecessary copying back and forth.)
How do you map host ports to container ports? (Ports cannot be mapped arbitrarily, and browsers apply their own port-blocking policies; the mapping I ended up using is given below.)
Once everything is configured and you then try to run Docker again inside the CentOS container, you will find that on an M1 Mac the Docker service refuses to start no matter what. (This one is a bit more involved and is also covered below.)
4. Set up the web server
I. Set up CentOS 7
Since this is an M1, use the arm64v8/centos image:

docker pull arm64v8/centos:centos7 

This is where we solve the first problem: keeping host files in sync with the web directory inside the container.
Once that is settled we create the container, declaring the port mappings and the container's special privileges up front.
Port mapping: lets the host machine reach services running inside the container.
Special privileges: gives the container root privileges on the host.
docker run -d --name dev-centos -v /opt/homebrew/var/www:/www/wwwroot/www --privileged=true -p 10022:22 -p 27017:27017 -p 10006:3306 -p 5045:80 -p 5046:8080 -p 10088:8888 arm64v8/centos:centos7 /usr/sbin/init
Explanation of the command and its parameters (in the original, fixed flags were shown in red and user-supplied values in green; the distinction is not repeated below):
docker run: create a new container and run a command in it
-d: run the container in the background
--name: name the new container
dev-centos: the custom container name
-v: volume mount
/opt/homebrew/var/www:/www/wwwroot/www    absolute path on the host : absolute path inside the container (if the mount path is not added to Docker Desktop's file-sharing settings the run will fail, which is why we configured it in advance; adjust to your own needs. PS: do not try to share /etc/, it cannot be mounted)
--privileged=true: run the container in privileged mode
-p 10022:22 -p 27017:27017 -p 10006:3306 -p 5045:80 -p 5046:8080 -p 10088:8888    host port mapped to container port: to reach a service inside the container, for example its MySQL service, connect to port 10006 on the host (this answers the second problem, how host ports map to container ports)
arm64v8/centos:centos7: the image name
/usr/sbin/init: lets you use systemctl start/stop/restart and similar commands inside the container once it is up
Running the command above prints the ID of the new container.
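A quick way to confirm the container is up and that the volume mount really syncs (an illustrative check I am adding; the paths follow the docker run line above):
docker ps --filter name=dev-centos            # the container should show as Up
touch /opt/homebrew/var/www/hello.txt         # create a test file on the macOS host
docker exec dev-centos ls /www/wwwroot/www    # hello.txt should be visible inside the container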

II. Install the BaoTa (宝塔) panel
Enter the container and start a bash shell:
docker exec -it dev-centos /bin/bash
Install BaoTa:
yum install -y wget && wget -O install.sh && sh install.sh
Once it is installed, log in and choose the quick setup; skip MySQL, it will not install successfully here anyway. (I do PHP development, so I installed PHP.)

III. Install Docker inside the container
Index of linux/static/stable/ lists static Docker packages for each architecture and version.
(Why not yum: with the yum install the dockerd daemon keeps erroring out and Docker cannot be used; it installs fine but is unusable, which is really frustrating.)
Install from the static tarball instead:
1. yum install wget
2. wget the package from the URL card below (the docker-20.10.9.tgz build for your architecture)
3、tar zxvf docker-20.10.9.tgz
4、sudo cp docker/* /usr/bin/
5. dockerd (this starts the Docker daemon). Once it is running, open another terminal, run docker exec -it dev-centos /bin/bash again, and inside the container run docker run hello-world to check that everything works. The daemon process must be left running, otherwise the Docker service disappears; I have not found a good way around this yet, so if you know one please share it in the comments.
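One possible workaround (my own suggestion, not from the original post) is to push dockerd into the background so it survives closing the shell:
nohup dockerd > /var/log/dockerd.log 2>&1 &    # keep the daemon running after the terminal closes
docker run hello-world                          # verify it still responds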
6. Set up the MySQL service
docker pull mysql/mysql-server:latest
Create and start the container:
docker run -d --name mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=root mysql/mysql-server
Enter the MySQL container and grant the root account remote access so the host can connect:
docker exec -it mysql bash
mysql -u root -p
Enter the password (xxxxx)
CREATE USER 'root'@'%' IDENTIFIED BY 'root';
GRANT ALL ON *.* TO 'root'@'%';
flush privileges;
Type exit twice to leave the MySQL client and then the MySQL container, returning to the CentOS container.
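With the grants in place, MySQL should now be reachable from the macOS host through the chained port mappings (host 10006 to the CentOS container's 3306, which maps to the MySQL container's 3306). A hypothetical check, assuming a MySQL client is installed on the Mac:
mysql -h 127.0.0.1 -P 10006 -u root -proot -e 'SELECT VERSION();'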
IV. Access works: connect through whichever port mappings you configured.

 
Questions are answered in the comments; feel free to ask.

Mac external monitor sleep/wake problem

I bought an LG 27UP850 the day before yesterday. With the Mac on an external monitor, sleep has become a problem. The monitor keeps waking up on its own; I eventually found it was because "Wake for network access" was enabled (I run Clash as a LAN proxy). But I also want the screen off while the Mac stays online. The only option seems to be turning the monitor off, which brings its own problems: with the monitor off you lose that tap-a-key-and-it-wakes feeling, and the 96 W power delivery no longer charges the Mac, so the next day the MacBook may simply be out of battery.

Deploying a K8s cluster and installing Kubesphere

K8s cluster deployment and Kubesphere installation
K8s cluster deployment
kubeadm is a tool released by the official community for deploying a Kubernetes cluster quickly. It can complete a cluster deployment with two commands:
Create a master node: $ kubeadm init. Join a node to the current cluster: $ kubeadm join.
I. Prerequisites
A compatible Linux host. The Kubernetes project provides generic instructions for Debian- and Red Hat-based Linux distributions, as well as distributions without a package manager.
2 GB or more of RAM per machine (less leaves little room for your applications).
2 or more CPU cores.
Full network connectivity between all machines in the cluster (public or private network is fine).
Unique hostname, MAC address and product_uuid on every node (see the linked docs for details).
Certain ports open on each machine (see the linked docs for details).
Swap disabled: you MUST disable swap for the kubelet to work properly.
A few commands for checking these requirements are sketched below.
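Standard commands for verifying the uniqueness and port requirements above (illustrative, not from the original text):
cat /sys/class/dmi/id/product_uuid    # must differ on every node
ip link                               # compare MAC addresses across nodes
ss -lntp | grep 6443                  # make sure nothing already occupies a required port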

The Kubesphere release chosen here is the latest, 3.2.0; the matching k8s version is 1.21.3 (1.22.x is only experimentally supported, so it is not used).

II. Deployment steps
Install Docker and kubeadm on all nodes; deploy the Kubernetes master; deploy the container network plugin; deploy the Kubernetes nodes and join them to the cluster.
III. Environment preparation
Prepare and configure three virtual machines according to the prerequisites above; here they are minimal CentOS systems created in VMware and then cloned.
Node name    IP address       Purpose
k8s-node1    192.168.63.10    master node
k8s-node2    192.168.63.11    worker node
k8s-node3    192.168.63.12    worker node
3.1 Preliminary settings (all nodes):
1. Enable root password login over SSH (usually no change needed)
vi /etc/ssh/sshd_config    # set PasswordAuthentication yes
service sshd restart
2. Set the host IP address
vi /etc/sysconfig/network-scripts/ifcfg-ens33
3. Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
4. Disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0
5. Disable swap
swapoff -a    # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab    # permanent
free -g    # verify: swap must show 0
6. Add hostname-to-IP mappings
vi /etc/hosts
192.168.63.10 k8s-node1
192.168.63.11 k8s-node2
192.168.63.12 k8s-node3
7. Set the hostname
vi /etc/hostname
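Editing /etc/hostname only takes effect after a reboot; on CentOS 7 the same change can be applied immediately with hostnamectl (an alternative I am adding here, not in the original):
hostnamectl set-hostname k8s-node1    # run on each node with its own name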
8. Pass bridged IPv4 traffic to the iptables chains
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
3.2 Install Docker, kubeadm, kubelet and kubectl on all nodes
3.2.1 Install Docker (all nodes): see the Docker install reference linked in the original post.
3.2.2 Install kubeadm, kubectl and kubelet (all nodes)
To speed up yum, configure the Aliyun yum mirror first (the baseurl and gpgkey values did not survive in this copy):
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=
EOF
Install:
yum list | grep kube
yum install -y kubelet-1.21.3 kubeadm-1.21.3 kubectl-1.21.3
systemctl enable kubelet
systemctl start kubelet
3.2.3 Configure the k8s master node
kubeadm init \
  --apiserver-advertise-address=<master IP> \
  --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
  --kubernetes-version v1.21.3 \
  --service-cidr=10.125.0.0/16 \
  --pod-network-cidr=10.150.0.0/16
apiserver-advertise-address: IP address of the node to initialise as master
image-repository: image mirror address
service-cidr: routing address range for services inside the cluster
pod-network-cidr: address range for pod-to-pod communication inside the cluster
... earlier output omitted ...
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
  export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
You can now join any number of control-plane nodes by copying certificate authorities and service account keys on each node and then running the following as root:
  kubeadm join 192.168.63.10:6443 --token ii44ya.n4ryb3yka0q09fq3 \
    --discovery-token-ca-cert-hash sha256:67318db78eef549400d515ed239ca3dbf85d5195e4ba6c13b61854f497278b39 \
    --control-plane
Then you can join any number of worker nodes by running the following on each as root:
  kubeadm join 192.168.63.10:6443 --token ii44ya.n4ryb3yka0q09fq3 \
    --discovery-token-ca-cert-hash sha256:67318db78eef549400d515ed239ca3dbf85d5195e4ba6c13b61854f497278b39
Copy the generated join command right away; the token is valid for 2 hours. If it has expired, generate a new one with:
kubeadm token create --ttl 0 --print-join-command
As prompted, run:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
3.2.4 Install the pod network plugin on the master
Flannel is a very simple overlay network that meets Kubernetes' needs, so we use it as the network add-on:
kubectl apply -f \
Unsurprisingly the command above does not succeed: the pod status sits at Init:ImagePullBackOff, because the image it pulls is quay.io/coreos/flannel:v0.12.0-amd64 and quay.io is unreachable from mainland China, so the image cannot be downloaded. To get Flannel installed you have to load the image manually:
Download flanneld-v0.12.0-amd64.docker from the link in the original post, then import it:
docker load < flanneld-v0.12.0-amd64.docker
Wait a moment and k8s will retry automatically; when the Flannel pod reaches Running, the install succeeded:
[root@k8s-node1 ~]# kubectl get pod -n kube-system
NAME                                READY   STATUS    RESTARTS   AGE
coredns-6f6b8cc4f6-qdkzr            1/1     Running   1          22h
coredns-6f6b8cc4f6-shkd9            1/1     Running   1          22h
etcd-k8s-node1                      1/1     Running   1          22h
kube-apiserver-k8s-node1            1/1     Running   1          22h
kube-controller-manager-k8s-node1   1/1     Running   1          22h
kube-flannel-ds-amd64-8vhz7         1/1     Running   1          20h
3.2.5 Join the other two machines to the cluster
Run the following on each of the other two machines and wait for the nodes to join the cluster:
kubeadm join 192.168.63.10:6443 --token ii44ya.n4ryb3yka0q09fq3 \
  --discovery-token-ca-cert-hash sha256:67318db78eef549400d515ed239ca3dbf85d5195e4ba6c13b61854f497278b39
After a few minutes every node shows Ready, which means the cluster is up. The join output looks like this:
[preflight] Running pre-flight checks
... (log output of join workflow) ...
This node has joined the cluster:
* Certificate signing request was sent to the control-plane and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this machine join.
PS: if joining fails, run kubeadm reset to reset the node and then re-run the command above.
[root@k8s-node1 ~]# kubectl get nodes
NAME        STATUS   ROLES                  AGE   VERSION
k8s-node1   Ready    control-plane,master   21h   v1.21.3
k8s-node2   Ready    <none>                 20h   v1.21.3
k8s-node3   Ready    <none>                 20h   v1.21.3
At this point the K8s cluster is fully installed; you can deploy a simple application such as nginx or tomcat to test it, as sketched below.
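A minimal smoke test along those lines (my own example, not from the original post):
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pod,svc    # note the assigned NodePort, then curl http://<node-ip>:<nodeport>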
Kubesphere installation
I. Install Helm and Tiller
1. Install Helm (master node)
curl -L | bash
Unsurprisingly, the command above still fails, again for network reasons, so Helm has to be installed manually.
Download the Helm tarball (provided in the attachment), unpack it, and move the binary into place:
tar -zxvf helm-v2.16.3-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin/helm
helm version
If you see output like the following, the install succeeded:
Client: &version.Version{SemVer:"v2.16.3", GitCommit:"1ee0254c86d4ed6887327dabed7aa7da29d7eb0d", GitTreeState:"clean"}
2. Install Tiller
Create the authorization file helm-rbac.yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
name: tiller
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: tiller
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: tiller
namespace: kube-system
Run kubectl apply -f helm-rbac.yaml
Install Tiller (the --stable-repo-url value did not survive in this copy):
helm init --service-account tiller --upgrade --tiller-image=jessestuart/tiller:v2.16.3 --history-max 300 --stable-repo-url
Verify the version with helm version:
Client: &version.Version{SemVer:"v2.16.3", GitCommit:"1ee0254c86d4ed6887327dabed7aa7da29d7eb0d", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.16.3", GitCommit:"1ee0254c86d4ed6887327dabed7aa7da29d7eb0d", GitTreeState:"clean"}
II. Install OpenEBS
1. Check whether the master node has a taint and remove it:
kubectl describe node k8s-node1 | grep Taint
kubectl taint nodes k8s-node1 node-role.kubernetes.io/master:NoSchedule-
2. Install OpenEBS
kubectl apply -f
Unsurprisingly, for the same network reasons, the command itself goes through, but if you check the pods you will find every image fails to pull. The components have to be installed manually, as follows:
Download the required docker images (provided in the attachment).
Load the images on all three nodes: docker load < xx.tar
Adjust the yaml file if needed (the one in the appendix needs no changes).
Delete the failed deployments and daemonset:
kubectl delete deployment maya-apiserver openebs-admission-server openebs-localpv-provisioner openebs-provisioner openebs-snapshot-operator openebs-ndm-operator -n openebs
kubectl delete daemonset openebs-ndm -n openebs
Re-run kubectl apply -f openebs-operator-1.5.0.yaml and wait for the installation to finish.
If a pod fails to come up, use kubectl describe pod [podname] -n [namespace] to see why.
Check the result with kubectl get sc -n openebs:
NAME                        PROVISIONER                                                 RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
openebs-device              openebs.io/local                                            Delete          WaitForFirstConsumer   false                  12h
openebs-hostpath (default)  openebs.io/local                                            Delete          WaitForFirstConsumer   false                  12h
openebs-jiva-default        openebs.io/provisioner-iscsi                                Delete          Immediate              false                  12h
openebs-snapshot-promoter   volumesnapshot.external-storage.k8s.io/snapshot-promoter   Delete          Immediate              false                  12h
Set openebs-hostpath as the default StorageClass:
kubectl patch storageclass openebs-hostpath -p \
'{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
At this point OpenEBS LocalPV is in place as the default storage type. Because we removed the master node's taint earlier, we can add it back once OpenEBS is installed, so business workloads are not scheduled onto the master and do not compete for its resources:
kubectl taint nodes k8s-node1 node-role.kubernetes.io=master:NoSchedule
III. Minimal Kubesphere installation
1. Run the installer and cluster-configuration manifests (their URLs did not survive in this copy):
kubectl apply -f
kubectl apply -f
2. Watch the installer log:
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
3. Verify. When the installation completes the log ends with:
#####################################################
###              Welcome to KubeSphere!           ###
#####################################################
Console:
Account: admin
Password: P@88w0rd
NOTES:
1. After logging into the console, please check the monitoring status of service components in the "Cluster Management". If any service is not ready, please wait patiently until all components are ready.
2. Please modify the default password after login.
##################################################### 20xx-xx-xx xx:xx:xx ##################################################### 12345678910111213141516171819 浏览器访问 至此,Kubesphere也RicarGBooK完成了~~~ 附录 一、openebs-operator-1.5.0.yaml # This manifest deploys the OpenEBS control plane components, with associated CRs & RBAC rules # NOTE: On GKE, deploy the openebs-operator.yaml in admin context # Create the OpenEBS namespace apiVersion: v1 kind: Namespace metadata: name: openebs --- # Create Maya Service Account apiVersion: v1 kind: ServiceAccount metadata: name: openebs-maya-operator namespace: openebs --- # Define Role that allows operations on K8s pods/deployments kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1beta1 metadata: name: openebs-maya-operator rules: - apiGroups: ["*"] resources: ["nodes", "nodes/proxy"] verbs: ["*"] - apiGroups: ["*"] resources: ["namespaces", "services", "pods", "pods/exec", "deployments", "deployments/finalizers", "replicationcontrollers", "replicasets", "events", "endpoints", "configmaps", "secrets", "jobs", "cronjobs"] verbs: ["*"] - apiGroups: ["*"] resources: ["statefulsets", "daemonsets"] verbs: ["*"] - apiGroups: ["*"] resources: ["resourcequotas", "limitranges"] verbs: ["list", "watch"] - apiGroups: ["*"] resources: ["ingresses", "horizontalpodautoscalers", "verticalpodautoscalers", "poddisruptionbudgets", "certificatesigningrequests"] verbs: ["list", "watch"] - apiGroups: ["*"] resources: ["storageclasses", "persistentvolumeclaims", "persistentvolumes"] verbs: ["*"] - apiGroups: ["volumesnapshot.external-storage.k8s.io"] resources: ["volumesnapshots", "volumesnapshotdatas"] verbs: ["get", "list", "watch", "create", "update", "patch", "delete"] - apiGroups: ["apiextensions.k8s.io"] resources: ["customresourcedefinitions"] verbs: [ "get", "list", "create", "update", "delete", "patch"] - apiGroups: ["*"] resources: [ "disks", "blockdevices", "blockdeviceclaims"] verbs: ["*" ] - apiGroups: ["*"] resources: [ "cstorpoolclusters", "storagepoolclaims", "storagepoolclaims/finalizers", "cstorpoolclusters/finalizers", "storagepools"] verbs: ["*" ] - apiGroups: ["*"] resources: [ "castemplates", "runtasks"] verbs: ["*" ] - apiGroups: ["*"] resources: [ "cstorpools", "cstorpools/finalizers", "cstorvolumereplicas", "cstorvolumes", "cstorvolumeclaims"] verbs: ["*" ] - apiGroups: ["*"] resources: [ "cstorpoolinstances", "cstorpoolinstances/finalizers"] verbs: ["*" ] - apiGroups: ["*"] resources: [ "cstorbackups", "cstorrestores", "cstorcompletedbackups"] verbs: ["*" ] - apiGroups: ["coordination.k8s.io"] resources: ["leases"] verbs: ["get", "watch", "list", "delete", "update", "create"] - apiGroups: ["admissionregistration.k8s.io"] resources: ["validatingwebhookconfigurations", "mutatingwebhookconfigurations"] verbs: ["get", "create", "list", "delete", "update", "patch"] - nonResourceURLs: ["/metrics"] verbs: ["get"] - apiGroups: ["*"] resources: [ "upgradetasks"] verbs: ["*" ] --- # Bind the Service Account with the Role Privileges. 
# TODO: Check if default account also needs to be there kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1beta1 metadata: name: openebs-maya-operator subjects: - kind: ServiceAccount name: openebs-maya-operator namespace: openebs roleRef: kind: ClusterRole name: openebs-maya-operator apiGroup: rbac.authorization.k8s.io --- apiVersion: apps/v1 kind: Deployment metadata: name: maya-apiserver namespace: openebs labels: name: maya-apiserver openebs.io/component-name: maya-apiserver openebs.io/version: 1.5.0 spec: selector: matchLabels: name: maya-apiserver openebs.io/component-name: maya-apiserver replicas: 1 strategy: type: Recreate rollingUpdate: null template: metadata: labels: name: maya-apiserver openebs.io/component-name: maya-apiserver openebs.io/version: 1.5.0 spec: serviceAccountName: openebs-maya-operator containers: - name: maya-apiserver imagePullPolicy: IfNotPresent image: openebs/m-apiserver:1.5.0 ports: - containerPort: 5656 env: # OPENEBS_IO_KUBE_CONFIG enables maya api service to connect to K8s # based on this config. This is ignored if empty. # This is supported for maya api server version 0.5.2 onwards #- name: OPENEBS_IO_KUBE_CONFIG # value: "/home/ubuntu/.kube/config" # OPENEBS_IO_K8S_MASTER enables maya api service to connect to K8s # based on this address. This is ignored if empty. # This is supported for maya api server version 0.5.2 onwards #- name: OPENEBS_IO_K8S_MASTER # value: " # OPENEBS_NAMESPACE provides the namespace of this deployment as an # environment variable - name: OPENEBS_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace # OPENEBS_SERVICE_ACCOUNT provides the service account of this pod as # environment variable - name: OPENEBS_SERVICE_ACCOUNT valueFrom: fieldRef: fieldPath: spec.serviceAccountName # OPENEBS_MAYA_POD_NAME provides the name of this pod as # environment variable - name: OPENEBS_MAYA_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name # If OPENEBS_IO_CREATE_DEFAULT_STORAGE_CONFIG is false then OpenEBS default # storageclass and storagepool will not be created. - name: OPENEBS_IO_CREATE_DEFAULT_STORAGE_CONFIG value: "true" # OPENEBS_IO_INSTALL_DEFAULT_CSTOR_SPARSE_POOL decides whether default cstor sparse pool should be # configured as a part of openebs installation. # If "true" a default cstor sparse pool will be configured, if "false" it will not be configured. # This value takes effect only if OPENEBS_IO_CREATE_DEFAULT_STORAGE_CONFIG # is set to true - name: OPENEBS_IO_INSTALL_DEFAULT_CSTOR_SPARSE_POOL value: "false" # OPENEBS_IO_CSTOR_TARGET_DIR can be used to specify the hostpath # to be used for saving the shared content between the side cars # of cstor volume pod. # The default path used is /var/openebs/sparse #- name: OPENEBS_IO_CSTOR_TARGET_DIR # value: "/var/openebs/sparse" # OPENEBS_IO_CSTOR_POOL_SPARSE_DIR can be used to specify the hostpath # to be used for saving the shared content between the side cars # of cstor pool pod. This ENV is also used to indicate the location # of the sparse devices. 
# The default path used is /var/openebs/sparse #- name: OPENEBS_IO_CSTOR_POOL_SPARSE_DIR # value: "/var/openebs/sparse" # OPENEBS_IO_JIVA_POOL_DIR can be used to specify the hostpath # to be used for default Jiva StoragePool loaded by OpenEBS # The default path used is /var/openebs # This value takes effect only if OPENEBS_IO_CREATE_DEFAULT_STORAGE_CONFIG # is set to true #- name: OPENEBS_IO_JIVA_POOL_DIR # value: "/var/openebs" # OPENEBS_IO_LOCALPV_HOSTPATH_DIR can be used to specify the hostpath # to be used for default openebs-hostpath storageclass loaded by OpenEBS # The default path used is /var/openebs/local # This value takes effect only if OPENEBS_IO_CREATE_DEFAULT_STORAGE_CONFIG # is set to true #- name: OPENEBS_IO_LOCALPV_HOSTPATH_DIR # value: "/var/openebs/local" - name: OPENEBS_IO_JIVA_CONTROLLER_IMAGE value: "openebs/jiva:1.5.0" - name: OPENEBS_IO_JIVA_REPLICA_IMAGE value: "openebs/jiva:1.5.0" - name: OPENEBS_IO_JIVA_REPLICA_COUNT value: "3" - name: OPENEBS_IO_CSTOR_TARGET_IMAGE value: "openebs/cstor-istgt:1.5.0" - name: OPENEBS_IO_CSTOR_POOL_IMAGE value: "openebs/cstor-pool:1.5.0" - name: OPENEBS_IO_CSTOR_POOL_MGMT_IMAGE value: "openebs/cstor-pool-mgmt:1.5.0" - name: OPENEBS_IO_CSTOR_VOLUME_MGMT_IMAGE value: "openebs/cstor-volume-mgmt:1.5.0" - name: OPENEBS_IO_VOLUME_MONITOR_IMAGE value: "openebs/m-exporter:1.5.0" - name: OPENEBS_IO_CSTOR_POOL_EXPORTER_IMAGE ################################################################################################################### value: "openebs/m-exporter:1.5.0" - name: OPENEBS_IO_HELPER_IMAGE value: "openebs/linux-utils:1.5.0" # OPENEBS_IO_ENABLE_ANALYTICS if set to true sends anonymous usage # events to Google Analytics - name: OPENEBS_IO_ENABLE_ANALYTICS value: "true" - name: OPENEBS_IO_INSTALLER_TYPE value: "openebs-operator" # OPENEBS_IO_ANALYTICS_PING_INTERVAL can be used to specify the duration (in hours) # for periodic ping events sent to Google Analytics. # Default is 24h. # Minimum is 1h. You can convert this to weekly by setting 168h #- name: OPENEBS_IO_ANALYTICS_PING_INTERVAL # value: "24h" livenessProbe: exec: command: - /usr/local/bin/mayactl - version initialDelaySeconds: 30 periodSeconds: 60 readinessProbe: exec: command: - /usr/local/bin/mayactl - version initialDelaySeconds: 30 periodSeconds: 60 --- apiVersion: v1 kind: Service metadata: name: maya-apiserver-service namespace: openebs labels: openebs.io/component-name: maya-apiserver-svc spec: ports: - name: api port: 5656 protocol: TCP targetPort: 5656 selector: name: maya-apiserver sessionAffinity: None --- apiVersion: apps/v1 kind: Deployment metadata: name: openebs-provisioner namespace: openebs labels: name: openebs-provisioner openebs.io/component-name: openebs-provisioner openebs.io/version: 1.5.0 spec: selector: matchLabels: name: openebs-provisioner openebs.io/component-name: openebs-provisioner replicas: 1 strategy: type: Recreate rollingUpdate: null template: metadata: labels: name: openebs-provisioner openebs.io/component-name: openebs-provisioner openebs.io/version: 1.5.0 spec: serviceAccountName: openebs-maya-operator containers: - name: openebs-provisioner imagePullPolicy: IfNotPresent image: openebs/openebs-k8s-provisioner:1.5.0 env: # OPENEBS_IO_K8S_MASTER enables openebs provisioner to connect to K8s # based on this address. This is ignored if empty. 
# This is supported for openebs provisioner version 0.5.2 onwards #- name: OPENEBS_IO_K8S_MASTER # value: " # OPENEBS_IO_KUBE_CONFIG enables openebs provisioner to connect to K8s # based on this config. This is ignored if empty. # This is supported for openebs provisioner version 0.5.2 onwards #- name: OPENEBS_IO_KUBE_CONFIG # value: "/home/ubuntu/.kube/config" - name: NODE_NAME valueFrom: fieldRef: fieldPath: spec.nodeName - name: OPENEBS_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace # OPENEBS_MAYA_SERVICE_NAME provides the maya-apiserver K8s service name, # that provisioner should forward the volume create/delete requests. # If not present, "maya-apiserver-service" will be used for lookup. # This is supported for openebs provisioner version 0.5.3-RC1 onwards #- name: OPENEBS_MAYA_SERVICE_NAME # value: "maya-apiserver-apiservice" livenessProbe: exec: command: - pgrep - ".*openebs" initialDelaySeconds: 30 periodSeconds: 60 --- apiVersion: apps/v1 kind: Deployment metadata: name: openebs-snapshot-operator namespace: openebs labels: name: openebs-snapshot-operator openebs.io/component-name: openebs-snapshot-operator openebs.io/version: 1.5.0 spec: selector: matchLabels: name: openebs-snapshot-operator openebs.io/component-name: openebs-snapshot-operator replicas: 1 strategy: type: Recreate template: metadata: labels: name: openebs-snapshot-operator openebs.io/component-name: openebs-snapshot-operator openebs.io/version: 1.5.0 spec: serviceAccountName: openebs-maya-operator containers: - name: snapshot-controller image: openebs/snapshot-controller:1.5.0 imagePullPolicy: IfNotPresent env: - name: OPENEBS_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace livenessProbe: exec: command: - pgrep - ".*controller" initialDelaySeconds: 30 periodSeconds: 60 # OPENEBS_MAYA_SERVICE_NAME provides the maya-apiserver K8s service name, # that snapshot controller should forward the snapshot create/delete requests. # If not present, "maya-apiserver-service" will be used for lookup. # This is supported for openebs provisioner version 0.5.3-RC1 onwards #- name: OPENEBS_MAYA_SERVICE_NAME # value: "maya-apiserver-apiservice" - name: snapshot-provisioner image: openebs/snapshot-provisioner:1.5.0 imagePullPolicy: IfNotPresent env: - name: OPENEBS_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace # OPENEBS_MAYA_SERVICE_NAME provides the maya-apiserver K8s service name, # that snapshot provisioner should forward the clone create/delete requests. # If not present, "maya-apiserver-service" will be used for lookup. # This is supported for openebs provisioner version 0.5.3-RC1 onwards #- name: OPENEBS_MAYA_SERVICE_NAME # value: "maya-apiserver-apiservice" livenessProbe: exec: command: - pgrep - ".*provisioner" initialDelaySeconds: 30 periodSeconds: 60 --- # This is the node-disk-manager related config. 
# It can be used to customize the disks probes and filters apiVersion: v1 kind: ConfigMap metadata: name: openebs-ndm-config namespace: openebs labels: openebs.io/component-name: ndm-config data: # udev-probe is default or primary probe which should be enabled to run ndm # filterconfigs contails configs of filters - in their form fo include # and exclude comma separated strings node-disk-manager.config: | probeconfigs: - key: udev-probe name: udev probe state: true - key: seachest-probe name: seachest probe state: false - key: smart-probe name: smart probe state: true filterconfigs: - key: os-disk-exclude-filter name: os disk exclude filter state: true exclude: "/,/etc/hosts,/boot" - key: vendor-filter name: vendor filter state: true include: "" exclude: "CLOUDBYT,OpenEBS" - key: path-filter name: path filter state: true include: "" exclude: "loop,/dev/fd0,/dev/sr0,/dev/ram,/dev/dm-,/dev/md" --- apiVersion: apps/v1 kind: DaemonSet metadata: name: openebs-ndm namespace: openebs labels: name: openebs-ndm openebs.io/component-name: ndm openebs.io/version: 1.5.0 spec: selector: matchLabels: name: openebs-ndm openebs.io/component-name: ndm updateStrategy: type: RollingUpdate template: metadata: labels: name: openebs-ndm openebs.io/component-name: ndm openebs.io/version: 1.5.0 spec: # By default the node-disk-manager will be run on all kubernetes nodes # If you would like to limit this to only some nodes, say the nodes # that have storage attached, you could label those node and use # nodeSelector. # # e.g. label the storage nodes with - "openebs.io/nodegroup"="storage-node" # kubectl label node “openebs.io/nodegroup”=”storage-node”
#nodeSelector:
# “openebs.io/nodegroup”: “storage-node”
serviceAccountName: openebs-maya-operator
hostNetwork: true
containers:
– name: node-disk-manager
image: openebs/node-disk-manager-amd64:v0.4.5
imagePullPolicy: Always
securityContext:
privileged: true
volumeMounts:
– name: config
mountPath: /host/node-disk-manager.config
subPath: node-disk-manager.config
readOnly: true
– name: udev
mountPath: /run/udev
– name: procmount
mountPath: /host/proc
readOnly: true
– name: sparsepath
mountPath: /var/openebs/sparse
env:
# namespace in which NDM is installed will be passed to NDM Daemonset
# as environment variable
– name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
# pass hostname as env variable using downward API to the NDM container
– name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
# specify the directory where the sparse files need to be created.
# if not specified, then sparse files will not be created.
– name: SPARSE_FILE_DIR
value: “/var/openebs/sparse”
# Size(bytes) of the sparse file to be created.
– name: SPARSE_FILE_SIZE
value: “10737418240”
# Specify the number of sparse files to be created
– name: SPARSE_FILE_COUNT
value: “0”
livenessProbe:
exec:
command:
– pgrep
– “.*ndm”
initialDelaySeconds: 30
periodSeconds: 60
volumes:
– name: config
configMap:
name: openebs-ndm-config
– name: udev
hostPath:
path: /run/udev
type: Directory
# mount /proc (to access mount file of process 1 of host) inside container
# to read mount-point of disks and partitions
– name: procmount
hostPath:
path: /proc
type: Directory
– name: sparsepath
hostPath:
path: /var/openebs/sparse

apiVersion: apps/v1
kind: Deployment
metadata:
name: openebs-ndm-operator
namespace: openebs
labels:
name: openebs-ndm-operator
openebs.io/component-name: ndm-operator
openebs.io/version: 1.5.0
spec:
selector:
matchLabels:
name: openebs-ndm-operator
openebs.io/component-name: ndm-operator
replicas: 1
strategy:
type: Recreate
template:
metadata:
labels:
name: openebs-ndm-operator
openebs.io/component-name: ndm-operator
openebs.io/version: 1.5.0
spec:
serviceAccountName: openebs-maya-operator
containers:
– name: node-disk-operator
image: openebs/node-disk-operator-amd64:v0.4.5
imagePullPolicy: Always
readinessProbe:
exec:
command:
– stat
– /tmp/operator-sdk-ready
initialDelaySeconds: 4
periodSeconds: 10
failureThreshold: 1
env:
– name: WATCH_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
– name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
# the service account of the ndm-operator pod
– name: SERVICE_ACCOUNT
valueFrom:
fieldRef:
fieldPath: spec.serviceAccountName
– name: OPERATOR_NAME
value: “node-disk-operator”
– name: CLEANUP_JOB_IMAGE
value: “openebs/linux-utils:1.5.0”

apiVersion: apps/v1
kind: Deployment
metadata:
name: openebs-admission-server
namespace: openebs
labels:
app: admission-webhook
openebs.io/component-name: admission-webhook
openebs.io/version: 1.5.0
spec:
replicas: 1
strategy:
type: Recreate
rollingUpdate: null
selector:
matchLabels:
app: admission-webhook
template:
metadata:
labels:
app: admission-webhook
openebs.io/component-name: admission-webhook
openebs.io/version: 1.5.0
spec:
serviceAccountName: openebs-maya-operator
containers:
– name: admission-webhook
image: openebs/admission-server:1.5.0
imagePullPolicy: IfNotPresent
args:
– -alsologtostderr
– -v=2
– 2>&1
env:
– name: OPENEBS_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
– name: ADMISSION_WEBHOOK_NAME
value: “openebs-admission-server”

apiVersion: apps/v1
kind: Deployment
metadata:
name: openebs-localpv-provisioner
namespace: openebs
labels:
name: openebs-localpv-provisioner
openebs.io/component-name: openebs-localpv-provisioner
openebs.io/version: 1.5.0
spec:
selector:
matchLabels:
name: openebs-localpv-provisioner
openebs.io/component-name: openebs-localpv-provisioner
replicas: 1
strategy:
type: Recreate
template:
metadata:
labels:
name: openebs-localpv-provisioner
openebs.io/component-name: openebs-localpv-provisioner
openebs.io/version: 1.5.0
spec:
serviceAccountName: openebs-maya-operator
containers:
– name: openebs-provisioner-hostpath
imagePullPolicy: Always
image: openebs/provisioner-localpv:1.5.0
env:
# OPENEBS_IO_K8S_MASTER enables openebs provisioner to connect to K8s
# based on this address. This is ignored if empty.
# This is supported for openebs provisioner version 0.5.2 onwards
#- name: OPENEBS_IO_K8S_MASTER
# value: ”
# OPENEBS_IO_KUBE_CONFIG enables openebs provisioner to connect to K8s
# based on this config. This is ignored if empty.
# This is supported for openebs provisioner version 0.5.2 onwards
#- name: OPENEBS_IO_KUBE_CONFIG
# value: “/home/ubuntu/.kube/config”
– name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
– name: OPENEBS_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
# OPENEBS_SERVICE_ACCOUNT provides the service account of this pod as
# environment variable
– name: OPENEBS_SERVICE_ACCOUNT
valueFrom:
fieldRef:
fieldPath: spec.serviceAccountName
– name: OPENEBS_IO_ENABLE_ANALYTICS
value: “true”
– name: OPENEBS_IO_INSTALLER_TYPE
value: “openebs-operator”
– name: OPENEBS_IO_HELPER_IMAGE
value: “openebs/linux-utils:1.5.0”
livenessProbe:
exec:
command:
– pgrep
– “.*localpv”
initialDelaySeconds: 30
periodSeconds: 60

II. OpenEBS images
OpenEBS offline image installation package
III. Helm package
Helm v2.16.3 installation package


Installing Kubernetes v1.22.1 on Ubuntu 20.04

Ubuntu 20.04: installing k8s (Kubernetes v1.22.1)
Contents: Installation steps / Preparation / Environment requirements
1. Disable the swap partition
2. Set net.bridge.bridge-nf-call-iptables to 1 (the default on Ubuntu 20.04)
3. Install Docker
Install the k8s package source / Add the key / Join the cluster as a node
Initialisation / Clearing old state / Fixing the reported errors / Testing

The initialisation step hit long, maddening errors; after digging through a lot of material and much tweaking it finally succeeded!

Installation steps
Preparation
Environment requirements
Hardware requirements for the Ubuntu machines:
CPU: 2 cores
Memory: 4 GB / 2 GB
Software requirements:
root@node138:/etc/apt/sources.list.d# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 20.04 LTS
Release: 20.04
Codename: focal
1. Disable the swap partition

Swap partition: a chunk of disk used as extra memory, with far worse performance than real physical RAM. Docker containers run in memory, and k8s does not allow containers to run from swap, so swap must be turned off. In short, disabling swap is how k8s keeps performance high and predictable.

[root@kafka02 ~]# swapoff -a    # disable temporarily

[root@kafka02 ~]# cat /proc/swaps
Filename Type Size Used Priority
# disable permanently
[root@kafka02 ~]# vim /etc/fstab
Comment out the swap line; this must be done on every machine.
2. Set net.bridge.bridge-nf-call-iptables to 1 (the default on Ubuntu 20.04)
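The commands for this step (and for the following "3. Install Docker", "Install the k8s package source" and "Add the key" sections) were garbled in this copy of the post; the sysctl part presumably mirrors the block used in the CentOS walkthrough above, something like:
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system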
To join the cluster as a node, run the kubeadm join command printed by the master; only its second line survived here:
--discovery-token-ca-cert-hash sha256:05b0b09ce2a915ed6e3009dff885a52b95fe02359ae203a641dfcdf15819115a
The join output:
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Checking on the master: the node joined successfully!
[root@kafka02 docker]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
kafka01 Ready <none> 3h16m v1.22.1
kafka02 Ready control-plane,master 8d v1.22.1
node138 Ready <none> 49m v1.22.1

Next, a closer look at initialising a machine as the master.
Initialisation
If, like me, you have deployed k8s on this machine before, you must wipe the old state first, otherwise you get an error like this:
root@node138:/etc/docker# sudo kubeadm init
invalid or incomplete external CA: failure loading key for apiserver: couldn't load the private key file /etc/kubernetes/pki/apiserver.key: open /etc/kubernetes/pki/apiserver.key: no such file or directory
To see the stack trace of this error execute with --v=5 or higher
Clear the old state:
kubeadm reset
Then re-run the init:
kubeadm init --kubernetes-version=v1.22.1 --pod-network-cidr=10.244.0.0/16
It then hangs: kubeadm is trying to pull images, and downloading k8s.gcr.io images from inside China is so slow that it gets stuck at this step. The command kubeadm config images list shows which docker images are needed.
kubeadm config images list
These are the image versions my kubeadm version needs:
root@node138:/etc/docker# kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.22.1
k8s.gcr.io/kube-controller-manager:v1.22.1
k8s.gcr.io/kube-scheduler:v1.22.1
k8s.gcr.io/kube-proxy:v1.22.1
k8s.gcr.io/pause:3.5
k8s.gcr.io/etcd:3.5.0-0
k8s.gcr.io/coredns/coredns:v1.8.4

Fixing the reported errors
So we point kubeadm at the Aliyun mirror instead:
kubeadm init --image-repository=registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16
I am on a fairly new Kubernetes release, so pulling registry.aliyuncs.com/google_containers/coredns:v1.8.4 from the Aliyun mirror fails, which produces this error:
root@node138:/etc/docker# kubeadm init --image-repository=registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.22.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR ImagePull]: failed to pull image registry.aliyuncs.com/google_containers/coredns:v1.8.4: output: Error response from daemon: manifest for registry.aliyuncs.com/google_containers/coredns:v1.8.4 not found: manifest unknown: manifest unknown
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
At this point we need to pull the image manually:
root@node138:/etc/docker# docker pull registry.aliyuncs.com/google_containers/coredns
Then re-tag the image:
root@node138:/etc/docker# docker tag registry.aliyuncs.com/google_containers/coredns:latest registry.aliyuncs.com/google_containers/coredns:v1.8.4
After that the init goes through successfully!!! Below is what my successful run looked like; follow the prompts for the next steps:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
That's the master configured; now just run the join command it printed on each node machine!
Testing
Run this on the master to list the nodes (my master here happens to be the one built on CentOS):
[root@kafka02 docker]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
kafka01 Ready <none> 3h16m v1.22.1
kafka02 Ready control-plane,master 8d v1.22.1
node138 Ready <none> 49m v1.22.1

You can then create a pod on the master; if it is created and shows up in kubectl get pod, everything works!
[root@kafka02 docker]# kubectl run sc-nginx --image=nginx --port=7770
pod/sc-nginx created
[root@kafka02 docker]# kubectl get pod
NAME READY STATUS RESTARTS AGE
sc-nginx 0/1 ContainerCreating 0 7s
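To actually reach that nginx pod from outside the cluster, something like the following should work once it is Running (my own addition; note that nginx listens on port 80 regardless of the --port flag used above):
kubectl expose pod sc-nginx --port=80 --type=NodePort
kubectl get svc sc-nginx               # note the assigned NodePort
curl http://<node-ip>:<nodeport>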


Seeking advice: preparing systematically for a backend internship

I'm at a top-2 university in Shanghai, not a CS major but minoring in computer science, and I want to find a backend internship this summer.
I've already taken OS, computer networks, computer organisation, and data structures.
As for programming languages, school taught C++, and I taught myself Python and Rust.
I can use Java (it's a lot like C++) but have never studied it systematically.
This semester I taught myself some Spring Boot, mainly from Bilibili videos, without reading any dedicated documentation or textbook; I can now build simple small websites. On the front end I can develop with Vue.
My plan for the winter break:

Build a fairly complex Spring Boot backend project and work out more of the technical details along the way.

Read "剑指 Offer" (Coding Interviews) and online problem guides, and grind LeetCode.

Read "Java 核心技术" (Core Java) and learn Java systematically.

Read some interview write-ups.

The winter break is short, only a month, and I don't know whether I can get ready in time.
I have a few questions and would love to hear your suggestions:

Is there anything unreasonable about this plan? For example, do I really need to study Java systematically, and do interviews ask about Java internals, or would I be better off focusing on the technical details of the Spring Boot project?

What is a good path for learning Spring Boot systematically? The official documentation looks hopelessly complex. Should I just read the source and docs when I run into a problem rather than reading all of the documentation, or should I buy a book on it?

Any other advice?