How to handle multi-party voice audio on the client?

The only approach I know of so far is WebRTC (which I don't know how to use).
Another idea is to mix everyone's audio streams into a single stream on the server and push it down to each client, but how would a client then remove its own audio from that mixed stream? If the server does a separate multi-party mix for every connected client, wouldn't that be very heavy on the server? And if we instead merge everything into one stream, distribute it to all clients, and let each client strip out its own voice, is that feasible?

Why is the login password submitted in plain text?

I captured the v2 login with Fiddler. The login submits 5 form key-value pairs:
cc197a53dabd5a18198dc6e01822dc8c81062d96645c2d672dc9facb213b47e6 = skyphone001 (the username)
ed29a56a15250e45a7f54daa29e6b07d2fbc51757164725ca7c6a4035977dd45 = the password, in plain text
e980331cd832118eb1068774e0020c5dd0fe2cba72f89887e307239594c3f936 = xkhb (the captcha)
once = 16929
next = /
What is the benefit of this kind of login form? The password isn't even encrypted.

Vue SSR vs. React SSR: which one is better?

I've always used Vue for development. Recently I was moved to another team, and their research concluded that React SSR has more advantages than Vue.
But the material I found online says:
Vue is ahead of React with its in-built SSR capabilities and a detailed guide right in the documentation. React, on the other hand, needs third-party libraries like Next.js to render pages on the server.
The gist is that Vue ships with built-in SSR and is therefore ahead of React.
I haven't had time to dig into this myself yet, so I'd like to ask people with real SSR experience: which one is actually stronger?

Crypto companies hiring in Singapore

I'm currently working as a headhunter in Singapore. Lately quite a few crypto companies have been moving here, or pivoting to financial services back in China. Many of the companies I see haven't paused their growth at all; they are still expanding while adjusting to comply with policy, and they have plenty of SWE openings. These are leading companies on the crypto side, with large data and trading volumes, and the pay and benefits are still quite good. If you'd like to explore opportunities, feel free to send me your resume; roles based in China or Singapore are both fine! rachel.he@dadaconsultants.com

Login and token expiry for a small management server

I took on a side project that needs a server to manage some configuration files; the functionality is just CRUD on those configs. My current plan is to put it directly on the public internet, add a login page, issue a token on login, and have the server check on every request whether the token has expired: if it hasn't, the request proceeds normally, and if it has, redirect back to the login page. But that doesn't feel secure, even though this management site makes no money and hackers probably wouldn't bother with it. So I'm thinking of putting the management server on an internal network instead, inside a VPC, and having the client connect to the VPC over a VPN, which would also remove the need for a login page. But I'm not sure how to set up that VPN -> VPC piece. Do you folks have any better suggestions?

An open-source code collaboration project: background, features, and tech stack

A couple of days ago I posted about a new release and it got little attention. So, shamelessly, here is another post that introduces the project's background, features, and tech stack in more detail. Please go easy on me, fellow V2EX users. 🙏
Project URL:
Project background
I do DevOps-related work at a company. Some years ago, to solve our own pain points, I built this product for internal use. It was well received, so I have kept maintaining it. Because of the industry we are in, the company will never grow much, but it won't collapse either, so the atmosphere is relaxed: as long as the things you are responsible for don't go wrong, nobody cares what you do. I'm grateful the company tolerates my slacking off. Working on this product makes me happy, because there is no product manager, no revenue pressure, and no pointless requirements to deal with; it feels like every line of code makes the product better. I'm also grateful for the continuous feedback from users, which keeps me from building castles in the air.
What sets it apart from GitHub/GitLab?
Out-of-the-box symbol navigation
One of our internal requirements is that during code review, or when reading code online, it should be easy to jump to a symbol's definition:

This feature uses ANTLR to parse the grammars of mainstream languages and extracts symbol definitions into incremental storage, so it is fast and uses little space. It currently supports Java, JavaScript, C, C++, C#, Go, PHP, Python, CSS, SCSS, LESS and R. GitHub added a similar feature a couple of years ago, but apparently only for the default branch; GitLab requires LSIF-related configuration in CI and consumes a lot of storage.
Static analysis results annotated directly on the source code, as supporting information for reviews

Of course there are plenty of third-party tools for GitHub that can do this, but the problems they find are displayed on each product's own website, disconnected from the code review flow (for example, we want to attach a review comment directly to a specific code style issue). These third-party tools also usually cost extra.
Customizable issue fields and states, plus deep integration with CI/CD
Here the simple Open/Close states of GitHub/GitLab cannot meet our needs at all, especially when customer-created issues are involved. For example, if a developer closes the related issues when committing code, the customer gets a notification, assumes the problem is fixed, and asks which release to upgrade to; if instead the issues are closed when the product is released, testers are confused when they get a test build, because the related issues are still Open and they don't know which ones to test. To solve this we defined four issue states: Open, Committed, Test Ready and Released. When a developer commits code, the related issues automatically move to Committed; when a build containing those commits is deployed to a test environment, the issues automatically move to Test Ready and QA is notified, and QA can see on an issue's detail page which test environment it was deployed to; when the tested code is released, the issues automatically move to Released and the customer is notified, and the customer can see the associated release on the issue's detail page.

A powerful, easy-to-use query language for commits/issues/builds/pull requests
This is also built on ANTLR, which predicts the grammar rules to drive auto-completion, so you can run complex queries without learning the syntax. For example, here is something our customers do all the time: before upgrading, query what changed between the current version and the latest version:

Or query all high-priority issues assigned to me, all commits that touched a given file between two specified releases, and so on. Queries can be saved and subscribed to, so that when an event matching the criteria occurs you are notified promptly.
Full-featured CI/CD that requires no YAML knowledge and is very easy to pick up
CI/CD is where most of the effort went. Although the CI/CD definition is also stored in the repository as a YAML file, a UI is provided to generate that file, so users can configure everything without knowing any of the syntax.

CI/CD jobs can also be run directly from the commit page, which makes GitOps more intuitive. Flexible, customizable CI/CD option pages let non-developers deploy easily as well.

Deploying a build cluster is extremely convenient: a single helm command deploys it into Kubernetes, where each build job runs as a Pod, with both Windows and Linux supported. In environments without Kubernetes, a one-line docker command starts a build agent, and agents are maintenance-free and upgrade themselves. Try configuring a GitLab build cluster for comparison; it is considerably more work.
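For readers who haven't seen this pattern, the setup is roughly the following sketch (the chart reference, agent image, and environment variables are placeholders, since the post does not name them):

# Kubernetes: deploy the build cluster with a single helm command
helm install build-farm <chart-reference> --namespace build --create-namespace

# No Kubernetes: start one build agent with a single docker command
docker run -d --name build-agent -e SERVER_URL=https://git.example.com -e AGENT_TOKEN=<token> <agent-image>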
Projects organized as a tree instead of Organizations, making settings inheritance easy
Ever since GitHub introduced Organizations, almost every similar tool seems to have adopted that way of organizing projects. It may suit public-facing cloud platforms, but for internal company use it feels unnecessary and brings plenty of trouble; for example, GitLab provides Epics at the Group level and Issues at the Project level, yet many users want both features at both levels. Our approach is to organize projects as a tree: child projects automatically inherit their parent's settings and can override them as needed. This makes maintaining settings across a large number of projects very easy.
Annotate and discuss code at any time, without relying on pull requests
While browsing source code or a diff, you can start a discussion on any block of code on the spot. The discussion becomes part of that code's documentation (even if the code is changed or the file renamed), which helps others read and understand the code later. Unlike other Git tools, code comments are shown in a side panel, so they don't split up the code and disrupt reading.

Each discussion also forms its own topic, so the people involved can easily see where there are new changes or replies.

Much lower resource usage than GitLab, and fast
For personal use, a machine with 1 core and 2 GB of RAM is enough. It does use more resources than Gitea/Gogs, but if Gitea/Gogs were to offer similar features, the limits of the Golang ecosystem would probably force them to launch microservices written in other languages (various language servers, Elasticsearch, and so on), and the total resource consumption would certainly not be small.
Another advantage is that the main service runs on Linux, Mac, Windows, FreeBSD and other platforms, and can either use the built-in file database or make full use of existing company resources by connecting to an external database such as MySQL/MariaDB/PostgreSQL/Oracle/SQL Server.
Tech stack
Not fashionable at all, almost embarrassing to admit: it's Java from top to bottom (some V2EX users have already ribbed me for Java not being cloud-native enough 😊). There is no front-end/back-end split; everything lives in a single Maven project (around 400,000 lines of code). Wicket (which probably few people have heard of) encapsulates UI interaction and back-end logic in the same component, and most configuration screens are generated automatically from annotations. Dependency injection and the plugin system are based on Guice. Starting the project in Eclipse takes about 20 seconds, but most of the time hot deployment is enough: change the code, refresh the page, and you see the change.
Thanks for your support, fellow V2EX users 🙏

Installing KubeSphere on arm64: install logs and errors

Installing KubeSphere on Kunpeng (arm64)
Official reference documentation:
Minimal installation of KubeSphere on an existing Kubernetes cluster
Prerequisites
Official reference documentation:
To install KubeSphere 3.2.1 on Kubernetes, your Kubernetes version must be 1.19.x, 1.20.x, 1.21.x, or 1.22.x (experimental support). Make sure your machine meets the minimum hardware requirements: CPU > 1 core, memory > 2 GB. Before installing, a default storage class needs to be configured in the Kubernetes cluster.
uname -a
The architecture output is:
Linux localhost.localdomain 4.14.0-115.el7a.0.1.aarch64 #1 SMP Sun Nov 25 20:54:21 UTC 2018 aarch64 aarch64 aarch64 GNU/Linux

kubectl version
The version output is:
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.3", GitCommit:"1e11e4a2108024935ecfcb2912226cedeafd99df", GitTreeState:"clean", BuildDate:"2020-10-14T12:50:19Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/arm64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.3", GitCommit:"1e11e4a2108024935ecfcb2912226cedeafd99df", GitTreeState:"clean", BuildDate:"2020-10-14T12:41:49Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/arm64"}

free -g
The memory output is:
total used free shared buff/cache available
Mem: 127 48 43 1 34 57
Swap: 0 0 0

kubectl get sc
The default StorageClass output is:
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
glusterfs (default) cluster.local/nfs-client-nfs-client-provisioner Delete Immediate true 24h
nfs-storage (default) k8s-sigs.io/nfs-subdir-external-provisioner Delete Immediate false 23h
Deploy KubeSphere
Once your machine meets the prerequisites, you can install KubeSphere with the following steps.
1. Run the following commands to start the installation:
kubectl apply -f

kubectl apply -f
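The two manifests applied here are the standard KubeSphere installer files, kubesphere-installer.yaml and cluster-configuration.yaml (their URLs were lost from this copy of the post). A minimal sketch, assuming both files have already been downloaded locally:

# apply the installer and the cluster configuration (file names as referenced later in this post)
kubectl apply -f kubesphere-installer.yaml
kubectl apply -f cluster-configuration.yaml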
2. Check the installation logs:
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f

The error output is:
2022-02-23T16:02:30+08:00 INFO : shell-operator latest
2022-02-23T16:02:30+08:00 INFO : HTTP SERVER Listening on 0.0.0.0:9115
2022-02-23T16:02:30+08:00 INFO : Use temporary dir: /tmp/shell-operator
2022-02-23T16:02:30+08:00 INFO : Initialize hooks manager …
2022-02-23T16:02:30+08:00 INFO : Search and load hooks …
2022-02-23T16:02:30+08:00 INFO : Load hook config from ‘/hooks/kubesphere/installRunner.py’
2022-02-23T16:02:31+08:00 INFO : Load hook config from ‘/hooks/kubesphere/schedule.sh’
2022-02-23T16:02:31+08:00 INFO : Initializing schedule manager …
2022-02-23T16:02:31+08:00 INFO : KUBE Init Kubernetes client
2022-02-23T16:02:31+08:00 INFO : KUBE-INIT Kubernetes client is configured successfully
2022-02-23T16:02:31+08:00 INFO : MAIN: run main loop
2022-02-23T16:02:31+08:00 INFO : MAIN: add onStartup tasks
2022-02-23T16:02:31+08:00 INFO : QUEUE add all HookRun@OnStartup
2022-02-23T16:02:31+08:00 INFO : Running schedule manager …
2022-02-23T16:02:31+08:00 INFO : MSTOR Create new metric shell_operator_live_ticks
2022-02-23T16:02:31+08:00 INFO : MSTOR Create new metric shell_operator_tasks_queue_length
2022-02-23T16:02:31+08:00 ERROR : error getting GVR for kind ‘ClusterConfiguration’: Get ” dial tcp 127.0.0.1:8080: connect: connection refused
2022-02-23T16:02:31+08:00 ERROR : Enable kube events for hooks error: Get ” dial tcp 127.0.0.1:8080: connect: connection refused
2022-02-23T16:02:34+08:00 INFO : TASK_RUN Exit: program halts.
The cause is port 8080, which Kubernetes does not expose by default.
Opening port 8080
Port 8080 cannot be reached, so open it on the Kubernetes apiserver.
Go to the manifests directory: cd /etc/kubernetes/manifests/

vim kube-apiserver.yaml
Add:

- --insecure-port=8080

- --insecure-bind-address=0.0.0.0

Restart the apiserver:
docker restart <apiserver-container-id>
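Since kube-apiserver runs as a static pod managed by the kubelet, editing its manifest under /etc/kubernetes/manifests normally makes the kubelet recreate it on its own; if a manual restart is still wanted, a sketch for finding the container (assuming the Docker runtime) is:

# locate the running apiserver container and restart it
docker ps | grep kube-apiserver
docker restart <container-id>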

Re-run the commands from before:
kubectl delete -f
kubectl delete -f

kubectl apply -f
kubectl apply -f
3. Use kubectl get pod --all-namespaces to check whether all Pods in the KubeSphere-related namespaces are running normally. If they are, check the console's port (30880 by default) with the following command:
kubectl get pod --all-namespaces
The output shows an Error:
kubesphere-system ks-installer-d8b656fb4-gb2qg 0/1 Error 0 27s

Check the logs:
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f

The log shows this error:
standard_init_linux.go:228: exec user process caused: exec format error
The cause is an arm64 architecture mismatch: the image is not built for arm64, so it cannot run.
Search Docker Hub to see whether an arm64 image exists.

There is simply no official arm64 image. After looking around I found kubespheredev/ks-installer:v3.0.0-arm64 and decided to give it a try.
kubectl delete -f
kubectl delete -f

kubectl apply -f
kubectl apply -f

Pull the image:
docker pull kubespheredev/ks-installer:v3.0.0-arm64

Then use Rancher to modify the deployment, or download the YAML file and edit it.
Since my k8s master node is arm64 and there is no official arm64 image of kubesphere/ks-installer,
change kubesphere/ks-installer:v3.2.1 to kubespheredev/ks-installer:v3.0.0-arm64,
and use a node selector to pin the deployment to the master node (a kubectl sketch of both changes is shown below).
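If you prefer kubectl over Rancher, roughly equivalent commands would be the following sketch (the container name and the node name are placeholders, not taken from the post):

# point the installer deployment at the arm64 image
kubectl -n kubesphere-system set image deployment/ks-installer installer=kubespheredev/ks-installer:v3.0.0-arm64

# pin the pod to the arm64 master node via a nodeSelector
kubectl -n kubesphere-system patch deployment ks-installer \
  -p '{"spec":{"template":{"spec":{"nodeSelector":{"kubernetes.io/hostname":"<master-node-name>"}}}}}'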

The error output is:
TASK [common : Kubesphere | Creating common component manifests] ***************
failed: [localhost] (item={‘path’: ‘etcd’, ‘file’: ‘etcd.yaml’}) => {“ansible_loop_var”: “item”, “changed”: false, “item”: {“file”: “etcd.yaml”, “path”: “etcd”}, “msg”: “AnsibleUndefinedVariable: ‘dict object’ has no attribute ‘etcdVolumeSize'”}
failed: [localhost] (item={‘name’: ‘mysql’, ‘file’: ‘mysql.yaml’}) => {“ansible_loop_var”: “item”, “changed”: false, “item”: {“file”: “mysql.yaml”, “name”: “mysql”}, “msg”: “AnsibleUndefinedVariable: ‘dict object’ has no attribute ‘mysqlVolumeSize'”}
failed: [localhost] (item={‘path’: ‘redis’, ‘file’: ‘redis.yaml’}) => {“ansible_loop_var”: “item”, “changed”: false, “item”: {“file”: “redis.yaml”, “path”: “redis”}, “msg”: “AnsibleUndefinedVariable: ‘dict object’ has no attribute ‘redisVolumSize'”}

The cause is a version mismatch: the configuration is for v3.2.1 while the image is v3.0.0-arm64.
Next, try version v3.0.0.
Using v3.0.0
kubectl apply -f

kubectl apply -f
Then use Rancher to modify the deployment,

or download the v3.0.0 YAML files and edit them:
kubesphere-installer.yaml
cluster-configuration.yaml

Changes to kubesphere-installer.yaml:
Since my k8s master node is arm64 and there is no official arm64 image of kubesphere/ks-installer,
change kubesphere/ks-installer:v3.0.0 to kubespheredev/ks-installer:v3.0.0-arm64,
and use a node selector to pin the deployment to the master node.

Changes to cluster-configuration.yaml:
Change endpointIps: 192.168.xxx.xx to the IP of the k8s master node.
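For reference, the fragment being edited sits under the etcd section of cluster-configuration.yaml and looks roughly like this (a sketch based on the standard v3.x file; field layout may differ slightly between versions):

spec:
  etcd:
    monitoring: false
    endpointIps: <k8s-master-node-IP>   # set this to the master node's IP
    port: 2379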
With that, no more errors and ks-installer runs, but the other components have no arm64 images, so the remaining deployments won't start; ks-controller-manager errors out.
Replace the images of all the deployments with arm64 ones and add a node selector for the k8s master node (a kubectl sketch is shown after the image list below).
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f

kubectl get pod --all-namespaces

docker pull kubespheredev/ks-installer:v3.0.0-arm64
docker pull bobsense/redis-arm64 (the mounted storage PVC has to be removed, otherwise it errors out)
docker pull kubespheredev/ks-controller-manager:v3.2.1
docker pull kubespheredev/ks-console:v3.0.0-arm64
docker pull kubespheredev/ks-apiserver:v3.2.0

Everything except ks-controller-manager is now running.
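Switching those deployments over can also be done with kubectl set image, for example (a sketch; the container names are assumed to match the deployment names, which may not hold exactly):

kubectl -n kubesphere-system set image deployment/ks-console ks-console=kubespheredev/ks-console:v3.0.0-arm64
kubectl -n kubesphere-system set image deployment/ks-apiserver ks-apiserver=kubespheredev/ks-apiserver:v3.2.0
kubectl -n kubesphere-system set image deployment/ks-controller-manager ks-controller-manager=kubespheredev/ks-controller-manager:v3.2.1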

kubectl get svc/ks-console -n kubesphere-system
The output is:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ks-console NodePort 10.1.4.225 80:30880/TCP 6h12m

Check the logs again:
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
At the end it shows the following, so I thought it had succeeded:
Console:
Account: admin
Password: P@88w0rd

kubectl logs ks-controller-manager-646b8fff9f-pd7w7 --namespace=kubesphere-system
The ks-controller-manager error output is:
W0224 11:36:55.643227 1 client_config.go:615] Neither –kubeconfig nor –master was specified. Using the inClusterConfig. This might not work.
E0224 11:36:55.649703 1 server.go:101] failed to connect to ldap service, please check ldap status, error: factory is not able to fill the pool: LDAP Result Code 200 “Network Error”: dial tcp: lookup openldap.kubesphere-system.svc on 10.1.0.10:53: no such host
Login fails: probably because ks-controller-manager is not running successfully. The log shows: request to failed, reason: connect ECONNREFUSED 10.1.146.137:80
It should be because openldap did not start successfully.
Check the logs of the openldap StatefulSet.
They complain that there are 2 default StorageClasses; after deleting one, it runs successfully:
persistentvolumeclaims “openldap-pvc-openldap-0” is forbidden: Internal error occurred: 2 default StorageClasses were found

Deleting the extra StorageClass
Check:
kubectl get sc
The output (there used to be 2 entries; one remains because I deleted the other):
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
nfs-storage (default) k8s-sigs.io/nfs-subdir-external-provisioner Delete Immediate false 40h

Delete:
kubectl delete sc glusterfs
After the deletion there is only one StorageClass left; openldap runs normally, and so does ks-controller-manager. There is hope.
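An alternative to deleting the extra StorageClass outright is to demote it from being a default using the standard default-class annotation (a sketch; keep whichever class you actually want as the default):

# mark glusterfs as non-default instead of deleting it
kubectl patch storageclass glusterfs \
  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'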

All pods are now running normally, but login still fails in the same way. Check the logs:
ks-apiserver-556f698dfb-5p2fc
The log output is:
E0225 10:40:17.460271 1 reflector.go:138] pkg/client/informers/externalversions/factory.go:128: Failed to watch *v1alpha1.HelmApplication: failed to list *v1alpha1.HelmApplication: the server could not find the requested resource (get helmapplications.application.kubesphere.io)
E0225 10:40:17.548278 1 reflector.go:138] pkg/client/informers/externalversions/factory.go:128: Failed to watch *v1alpha1.HelmRepo: failed to list *v1alpha1.HelmRepo: the server could not find the requested resource (get helmrepos.application.kubesphere.io)
E0225 10:40:17.867914 1 reflector.go:138] pkg/models/openpitrix/interface.go:89: Failed to watch *v1alpha1.HelmCategory: failed to list *v1alpha1.HelmCategory: the server could not find the requested resource (get helmcategories.application.kubesphere.io)
E0225 10:40:18.779136 1 reflector.go:138] pkg/client/informers/externalversions/factory.go:128: Failed to watch *v1alpha1.HelmRelease: failed to list *v1alpha1.HelmRelease: the server could not find the requested resource (get helmreleases.application.kubesphere.io)
E0225 10:40:19.870229 1 reflector.go:138] pkg/models/openpitrix/interface.go:90: Failed to watch *v1alpha1.HelmRepo: failed to list *v1alpha1.HelmRepo: the server could not find the requested resource (get helmrepos.application.kubesphere.io)
E0225 10:40:20.747617 1 reflector.go:138] pkg/client/informers/externalversions/factory.go:128: Failed to watch *v1alpha1.HelmCategory: failed to list *v1alpha1.HelmCategory: the server could not find the requested resource (get helmcategories.application.kubesphere.io)
E0225 10:40:23.130177 1 reflector.go:138] pkg/client/informers/externalversions/factory.go:128: Failed to watch *v1alpha1.HelmApplicationVersion: failed to list *v1alpha1.HelmApplicationVersion: the server could not find the requested resource (get helmapplicationversions.application.kubesphere.io)

ks-console-65f46d7649-5zt8c
The log output is:
<-- GET / 2022/02/25T10:41:28.642 { UnauthorizedError: Not Login at Object.throw (/opt/kubesphere/console/server/server.js:31701:11) at getCurrentUser (/opt/kubesphere/console/server/server.js:9037:14) at renderView (/opt/kubesphere/console/server/server.js:23231:46) at dispatch (/opt/kubesphere/console/server/server.js:6870:32) at next (/opt/kubesphere/console/server/server.js:6871:18) at /opt/kubesphere/console/server/server.js:70183:16 at dispatch (/opt/kubesphere/console/server/server.js:6870:32) at next (/opt/kubesphere/console/server/server.js:6871:18) at /opt/kubesphere/console/server/server.js:77986:37 at dispatch (/opt/kubesphere/console/server/server.js:6870:32) at next (/opt/kubesphere/console/server/server.js:6871:18) at /opt/kubesphere/console/server/server.js:70183:16 at dispatch (/opt/kubesphere/console/server/server.js:6870:32) at next (/opt/kubesphere/console/server/server.js:6871:18) at /opt/kubesphere/console/server/server.js:77986:37 at dispatch (/opt/kubesphere/console/server/server.js:6870:32) message: 'Not Login' } --> GET / 302 6ms 43b 2022/02/25T10:41:28.648
<-- GET /login 2022/02/25T10:41:28.649 { FetchError: request to failed, reason: connect ECONNREFUSED 10.1.144.129:80 at ClientRequest. (/opt/kubesphere/console/server/server.js:80604:11)
at ClientRequest.emit (events.js:198:13)
at Socket.socketErrorListener (_http_client.js:392:9)
at Socket.emit (events.js:198:13)
at emitErrorNT (internal/streams/destroy.js:91:8)
at emitErrorAndCloseNT (internal/streams/destroy.js:59:3)
at process._tickCallback (internal/process/next_tick.js:63:19)
message:
‘request to failed, reason: connect ECONNREFUSED 10.1.144.129:80’,
type: ‘system’,
errno: ‘ECONNREFUSED’,
code: ‘ECONNREFUSED’ }
–> GET /login 200 7ms 14.82kb 2022/02/25T10:41:28.656

ks-controller-manager-548545f4b4-w4wmx
The log output is:
E0225 10:41:41.633013 1 reflector.go:138] github.com/kubernetes-csi/external-snapshotter/client/v4/informers/externalversions/factory.go:117: Failed to watch *v1.VolumeSnapshotClass: failed to list *v1.VolumeSnapshotClass: the server could not find the requested resource (get volumesnapshotclasses.snapshot.storage.k8s.io)
E0225 10:41:41.634349 1 reflector.go:138] pkg/client/informers/externalversions/factory.go:128: Failed to watch *v1alpha2.GroupBinding: failed to list *v1alpha2.GroupBinding: the server could not find the requested resource (get groupbindings.iam.kubesphere.io)
E0225 10:41:41.722377 1 reflector.go:138] pkg/client/informers/externalversions/factory.go:128: Failed to watch *v1alpha2.Group: failed to list *v1alpha2.Group: the server could not find the requested resource (get groups.iam.kubesphere.io)
E0225 10:41:42.636612 1 reflector.go:138] pkg/client/informers/externalversions/factory.go:128: Failed to watch *v1alpha2.GroupBinding: failed to list *v1alpha2.GroupBinding: the server could not find the requested resource (get groupbindings.iam.kubesphere.io)
E0225 10:41:42.875652 1 reflector.go:138] pkg/client/informers/externalversions/factory.go:128: Failed to watch *v1alpha2.Group: failed to list *v1alpha2.Group: the server could not find the requested resource (get groups.iam.kubesphere.io)
E0225 10:41:42.964819 1 reflector.go:138] github.com/kubernetes-csi/external-snapshotter/client/v4/informers/externalversions/factory.go:117: Failed to watch *v1.VolumeSnapshotClass: failed to list *v1.VolumeSnapshotClass: the server could not find the requested resource (get volumesnapshotclasses.snapshot.storage.k8s.io)
E0225 10:41:45.177641 1 reflector.go:138] pkg/client/informers/externalversions/factory.go:128: Failed to watch *v1alpha2.GroupBinding: failed to list *v1alpha2.GroupBinding: the server could not find the requested resource (get groupbindings.iam.kubesphere.io)
E0225 10:41:45.327393 1 reflector.go:138] pkg/client/informers/externalversions/factory.go:128: Failed to watch *v1alpha2.Group: failed to list *v1alpha2.Group: the server could not find the requested resource (get groups.iam.kubesphere.io)
E0225 10:41:46.164454 1 reflector.go:138] github.com/kubernetes-csi/external-snapshotter/client/v4/informers/externalversions/factory.go:117: Failed to watch *v1.VolumeSnapshotClass: failed to list *v1.VolumeSnapshotClass: the server could not find the requested resource (get volumesnapshotclasses.snapshot.storage.k8s.io)
E0225 10:41:49.011152 1 reflector.go:138] pkg/client/informers/externalversions/factory.go:128: Failed to watch *v1alpha2.Group: failed to list *v1alpha2.Group: the server could not find the requested resource (get groups.iam.kubesphere.io)
E0225 10:41:50.299769 1 reflector.go:138] github.com/kubernetes-csi/external-snapshotter/client/v4/informers/externalversions/factory.go:117: Failed to watch *v1.VolumeSnapshotClass: failed to list *v1.VolumeSnapshotClass: the server could not find the requested resource (get volumesnapshotclasses.snapshot.storage.k8s.io)
E0225 10:41:50.851105 1 reflector.go:138] pkg/client/informers/externalversions/factory.go:128: Failed to watch *v1alpha2.GroupBinding: failed to list *v1alpha2.GroupBinding: the server could not find the requested resource (get groupbindings.iam.kubesphere.io)
E0225 10:41:56.831265 1 helm_category_controller.go:158] get helm category: ctg-uncategorized failed, error: no matches for kind “HelmCategory” in version “application.kubesphere.io/v1alpha1”
E0225 10:41:56.923487 1 helm_category_controller.go:176] create helm category: uncategorized failed, error: no matches for kind “HelmCategory” in version “application.kubesphere.io/v1alpha1”
E0225 10:41:58.696406 1 reflector.go:138] pkg/client/informers/externalversions/factory.go:128: Failed to watch *v1alpha2.GroupBinding: failed to list *v1alpha2.GroupBinding: the server could not find the requested resource (get groupbindings.iam.kubesphere.io)
E0225 10:41:59.876998 1 reflector.go:138] pkg/client/informers/externalversions/factory.go:128: Failed to watch *v1alpha2.Group: failed to list *v1alpha2.Group: the server could not find the requested resource (get groups.iam.kubesphere.io)
E0225 10:42:01.266422 1 reflector.go:138] github.com/kubernetes-csi/external-snapshotter/client/v4/informers/externalversions/factory.go:117: Failed to watch *v1.VolumeSnapshotClass: failed to list *v1.VolumeSnapshotClass: the server could not find the requested resource (get volumesnapshotclasses.snapshot.storage.k8s.io)
E0225 10:42:11.724869 1 helm_category_controller.go:158] get helm category: ctg-uncategorized failed, error: no matches for kind “HelmCategory” in version “application.kubesphere.io/v1alpha1”
E0225 10:42:11.929837 1 helm_category_controller.go:176] create helm category: uncategorized failed, error: no matches for kind “HelmCategory” in version “application.kubesphere.io/v1alpha1”
E0225 10:42:12.355338 1 reflector.go:138] pkg/client/informers/externalversions/factory.go:128: Failed to watch *v1alpha2.GroupBinding: failed to list *v1alpha2.GroupBinding: the server could not find the requested resource (get groupbindings.iam.kubesphere.io)
I0225 10:42:15.625073 1 leaderelection.go:253] successfully acquired lease kubesphere-system/ks-controller-manager-leader-election
I0225 10:42:15.625301 1 globalrolebinding_controller.go:122] Starting GlobalRoleBinding controller
I0225 10:42:15.625343 1 globalrolebinding_controller.go:125] Waiting for informer caches to sync
I0225 10:42:15.625365 1 globalrolebinding_controller.go:137] Starting workers
I0225 10:42:15.625351 1 snapshotclass_controller.go:102] Waiting for informer cache to sync.
I0225 10:42:15.625391 1 globalrolebinding_controller.go:143] Started workers
I0225 10:42:15.625380 1 capability_controller.go:110] Waiting for informer caches to sync
I0225 10:42:15.625449 1 capability_controller.go:123] Started workers
I0225 10:42:15.625447 1 basecontroller.go:59] Starting controller: loginrecord-controller
I0225 10:42:15.625478 1 globalrolebinding_controller.go:205] Successfully synced key:authenticated
I0225 10:42:15.625481 1 basecontroller.go:60] Waiting for informer caches to sync for: loginrecord-controller
I0225 10:42:15.625488 1 clusterrolebinding_controller.go:114] Starting ClusterRoleBinding controller
I0225 10:42:15.625546 1 clusterrolebinding_controller.go:117] Waiting for informer caches to sync
I0225 10:42:15.625540 1 basecontroller.go:59] Starting controller: group-controller
I0225 10:42:15.625515 1 basecontroller.go:59] Starting controller: groupbinding-controller
I0225 10:42:15.625596 1 basecontroller.go:60] Waiting for informer caches to sync for: group-controller
I0225 10:42:15.625615 1 basecontroller.go:60] Waiting for informer caches to sync for: groupbinding-controller
I0225 10:42:15.625579 1 clusterrolebinding_controller.go:122] Starting workers
I0225 10:42:15.625480 1 certificatesigningrequest_controller.go:109] Starting CSR controller
I0225 10:42:15.625681 1 certificatesigningrequest_controller.go:112] Waiting for csrInformer caches to sync

It's so close, just that last little bit missing; I'll post a follow-up once I've solved it.
Reference links:

阿亮说技术

WeChat official account

Documenting the bits and pieces of day-to-day software development. Topics: Java