
Training YOLOv5 on a custom dataset (Docker)
Official yolov5 code: check out tag v5.0
The training procedure is covered in another post
Data layout
1. Build the Docker image
docker build -t yolov5:5.0 .
The image builds successfully.
2. Create the container
nvidia-docker run -it -p 2224:22 -p 6006:6006 --ipc=host -v /home/slifeai/project_object/num_2/yolov5-5.0:/usrc/app --name yolov5_train yolov5:5.0 /bin/bash
The container is created successfully.
3. Copy the training data into the container
docker cp litter/ 8ac6770f1edb:/usr/src/app
Before copying / after copying
4. Start training
python train.py --data data/mydata.yaml --cfg models/yolov5s.yaml --weights 'yolov5s.pt' --batch-size 64

A bug appears:
File "train.py", line 543, in <module>
train(hyp, opt, device, tb_writer)
File "train.py", line 87, in train
ckpt = torch.load(weights, map_location=device)  # load checkpoint
File "/opt/conda/lib/python3.8/site-packages/torch/serialization.py", line 592, in load
return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
File "/opt/conda/lib/python3.8/site-packages/torch/serialization.py", line 851, in _load
result = unpickler.load()
AttributeError: Can't get attribute 'SPPF' on
This means that models/common.py has no SPPF class. I copied the SPPF class from yolov5-master's models/common.py into it, and training then ran successfully.
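For reference, newer yolov5 releases define SPPF in models/common.py roughly as below. This is a sketch from memory, not the verbatim upstream source; the Conv stand-in here exists only to make the snippet self-contained, since in the real repo you copy just the SPPF class and reuse the Conv already defined in models/common.py.

```python
import torch
import torch.nn as nn

class Conv(nn.Module):
    # Minimal stand-in for yolov5's Conv block (Conv2d + BN + SiLU); the real
    # class already exists in models/common.py, so only SPPF needs to be copied.
    def __init__(self, c1, c2, k=1, s=1):
        super().__init__()
        self.conv = nn.Conv2d(c1, c2, k, s, k // 2, bias=False)
        self.bn = nn.BatchNorm2d(c2)
        self.act = nn.SiLU()

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

class SPPF(nn.Module):
    # Spatial Pyramid Pooling - Fast: three chained 5x5 max-pools,
    # equivalent to SPP with k=(5, 9, 13) but cheaper.
    def __init__(self, c1, c2, k=5):
        super().__init__()
        c_ = c1 // 2  # hidden channels
        self.cv1 = Conv(c1, c_, 1, 1)
        self.cv2 = Conv(c_ * 4, c2, 1, 1)
        self.m = nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2)

    def forward(self, x):
        x = self.cv1(x)
        y1 = self.m(x)
        y2 = self.m(y1)
        return self.cv2(torch.cat([x, y1, y2, self.m(y2)], 1))
```

With the class in place, the v5.0 checkpoint loading step (torch.load in train.py) can unpickle models that reference SPPF.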
Converting the trained yolov5 .pt weights
Reference code:
Data layout
1. Build the Docker image
docker build -t yolov5_for_rknn:master .
The image builds successfully.
2. Create the container
nvidia-docker run -it -p 2225:22 --ipc=host -v /home/slifeai/project_object/num_3/yolov5_for_rknn-master:/usrc/app --name yolov5_convert_weight yolov5_for_rknn:master /bin/bash
The container is created successfully.
3. pt to onnx
Convert the .pt file to onnx with this script (mapped from the container to the local machine): D:/rknn/yolov5_for_rknn-master/yolov5_original/export_no_focus.py
4. onnx to rknn
D:/rknn/rknn_convert/onnx2rknn.py (identical to D:/rknn/yolov5_for_rknn-master/yolov5_original/onnx2rknn.py)
import argparse
import os
from rknn.api import RKNN

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument("-i", '--onnx', type=str, default='weights/litter_10.26.onnx', help='weights path')  # from yolov5/models/
    parser.add_argument('--rknn', type=str, default='weights/litter_10.26.rknn', help='save path')
    parser.add_argument("-p", '--precompile', action="store_true", help='build a precompiled model')
    parser.add_argument("-o", '--original', action="store_true", help='the model is native yolov5')
    parser.add_argument("-bs", '--batch-size', type=int, default=1, help='batch size')
    opt = parser.parse_args()
    ONNX_MODEL = opt.onnx
    if opt.rknn:
        RKNN_MODEL = opt.rknn
    else:
        RKNN_MODEL = "%s.rknn" % os.path.splitext(ONNX_MODEL)[0]
    rknn = RKNN()
    print('--> config model')

    # 12 mean/std entries: one per channel of the 12-channel (Focus-sliced) input
    rknn.config(mean_values=[[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]],
                std_values=[[255.0, 255.0, 255.0, 255.0, 255.0, 255.0, 255.0, 255.0, 255.0, 255.0, 255.0, 255.0]],
                batch_size=opt.batch_size, reorder_channel='0 1 2')

    # Load the ONNX model
    print('--> Loading model')
    ret = rknn.load_onnx(model=ONNX_MODEL)
    assert ret == 0, "Load onnx failed!"
    # Build the model
    print('--> Building model')
    if opt.precompile:
        ret = rknn.build(do_quantization=True, dataset='./data/dataset1.txt', pre_compile=True)
    else:
        ret = rknn.build(do_quantization=True, dataset='./data/dataset1.txt')
    assert ret == 0, "Build rknn failed!"
    # Export the rknn model
    print('--> Export RKNN model')
    ret = rknn.export_rknn(RKNN_MODEL)
    assert ret == 0, "Export %s.rknn failed!" % opt.rknn
    print('done')

5. rknn detection
D:/rknn/rknn_convert/rknn_detect.py
import cv2
import time
import random
import numpy as np
from rknn.api import RKNN

"""
yolov5 inference script for rknn
"""

def get_max_scale(img, max_w, max_h):
    h, w = img.shape[:2]
    scale = min(max_w / w, max_h / h, 1)
    return scale

def get_new_size(img, scale):
    return tuple(map(int, np.array(img.shape[:2][::-1]) * scale))

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def filter_boxes(boxes, box_confidences, box_class_probs, conf_thres):
    box_scores = box_confidences * box_class_probs  # conditional probability: P(class) given an object in this cell
    box_classes = np.argmax(box_scores, axis=-1)  # index of the most probable class
    box_class_scores = np.max(box_scores, axis=-1)  # probability of that class
    pos = np.where(box_class_scores >= conf_thres)  # items whose probability exceeds the threshold
    # pos = box_class_scores >= OBJ_THRESH  # items whose probability exceeds the threshold
    boxes = boxes[pos]
    classes = box_classes[pos]
    scores = box_class_scores[pos]
    return boxes, classes, scores

def nms_boxes(boxes, scores, iou_thres):
    x = boxes[:, 0]
    y = boxes[:, 1]
    w = boxes[:, 2]
    h = boxes[:, 3]

    areas = w * h
    order = scores.argsort()[::-1]

    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)

        xx1 = np.maximum(x[i], x[order[1:]])
        yy1 = np.maximum(y[i], y[order[1:]])
        xx2 = np.minimum(x[i] + w[i], x[order[1:]] + w[order[1:]])
        yy2 = np.minimum(y[i] + h[i], y[order[1:]] + h[order[1:]])

        w1 = np.maximum(0.0, xx2 - xx1 + 0.00001)
        h1 = np.maximum(0.0, yy2 - yy1 + 0.00001)
        inter = w1 * h1

        ovr = inter / (areas[i] + areas[order[1:]] - inter)
        inds = np.where(ovr <= iou_thres)[0]
        order = order[inds + 1]
    keep = np.array(keep)
    return keep

def plot_one_box(x, img, color=None, label=None, line_thickness=None):
    tl = line_thickness or round(0.002 * (img.shape[0] + img.shape[1]) / 2) + 1  # line/font thickness
    color = color or [random.randint(0, 255) for _ in range(3)]
    c1, c2 = (int(x[0]), int(x[1])), (int(x[2]), int(x[3]))
    cv2.rectangle(img, c1, c2, color, thickness=tl, lineType=cv2.LINE_AA)
    if label:
        tf = max(tl - 1, 1)  # font thickness
        t_size = cv2.getTextSize(label, 0, fontScale=tl / 3, thickness=tf)[0]
        c2 = c1[0] + t_size[0], c1[1] - t_size[1] - 3
        cv2.rectangle(img, c1, c2, color, -1, cv2.LINE_AA)  # filled
        cv2.putText(img, label, (c1[0], c1[1] - 2), 0, tl / 3, [225, 255, 255], thickness=tf, lineType=cv2.LINE_AA)

def auto_resize(img, max_w, max_h):
    h, w = img.shape[:2]
    scale = min(max_w / w, max_h / h, 1)
    new_size = tuple(map(int, np.array(img.shape[:2][::-1]) * scale))
    return cv2.resize(img, new_size), scale

def letterbox(img, new_wh=(416, 416), color=(114, 114, 114)):
    new_img, scale = auto_resize(img, *new_wh)
    shape = new_img.shape
    new_img = cv2.copyMakeBorder(new_img, 0, new_wh[1] - shape[0], 0, new_wh[0] - shape[1],
                                 cv2.BORDER_CONSTANT, value=color)
    return new_img, (new_wh[0] / scale, new_wh[1] / scale)

def load_model(model_path, npu_id):
    rknn = RKNN()
    devs = rknn.list_devices()
    device_id_dict = {}
    for index, dev_id in enumerate(devs[-1]):
        if dev_id[:2] != 'TS':
            device_id_dict[0] = dev_id
        if dev_id[:2] == 'TS':
            device_id_dict[1] = dev_id
    print('-->loading model : ' + model_path)
    rknn.load_rknn(model_path)
    # print('--> Init runtime environment on: ' + device_id_dict[npu_id])
    ret = rknn.init_runtime()
    if ret != 0:
        print('Init runtime environment failed')
        exit(ret)
    print('done')
    return rknn

#
# def load_model(path, platform):
#     rknn = RKNN()
#     print('-->loading model')
#     rknn.load_rknn(path)
#     print('loading model done')
#     print('--> Init runtime environment')
#     # ret = rknn.init_runtime(target='rk1808', target_sub_class='AICS')
#     ret = rknn.init_runtime(target=platform)
#     if ret != 0:
#         print('Init runtime environment failed')
#         exit(ret)
#     print('done')
#     return rknn

class Detector:
    def __init__(self, opt):
        opt = opt['opt']
        self.opt = opt
        print(opt)

        model = opt['model']
        wh = opt['size']
        masks = opt['masks']
        anchors = opt['anchors']
        names = opt['names']
        conf_thres = opt['conf_thres']
        iou_thres = opt['iou_thres']
        platform = opt['platform']

        self.wh = wh
        self.size = wh
        self._masks = masks
        self._anchors = anchors
        self.names = list(
            filter(lambda a: len(a) > 0, map(lambda x: x.strip(), open(names, "r").read().split()))) if isinstance(
            names, str) else names
        self.conf_thres = conf_thres
        self.iou_thres = iou_thres
        if isinstance(model, str):
            model = load_model(model, platform)
        self._rknn = model
        self.draw_box = False

    def _predict(self, img_src, img, gain):
        src_h, src_w = img_src.shape[:2]
        # _img = cv2.cvtColor(_img, cv2.COLOR_BGR2RGB)

        # img = img[:, :, ::-1].transpose(2, 0, 1)[None]
        # # _img = np.transpose(_img[None], (0, 3, 1, 2))
        # img = np.concatenate([img[..., ::2, ::2], img[..., 1::2, ::2], img[..., ::2, 1::2], img[..., 1::2, 1::2]], 1)
        # img = np.transpose(img, (0, 2, 3, 1))

        img = img[..., ::-1]  # BGR -> RGB
        # Focus slice done on the CPU: 3 channels become 12 at half resolution
        img = np.concatenate([img[::2, ::2], img[1::2, ::2], img[::2, 1::2], img[1::2, 1::2]], 2)

        t0 = time.time()
        pred_onx = self._rknn.inference(inputs=[img])
        print("inference time:\t", time.time() - t0)
        boxes, classes, scores = [], [], []
        for t in range(3):
            input0_data = sigmoid(pred_onx[t][0])
            input0_data = np.transpose(input0_data, (1, 2, 0, 3))
            grid_h, grid_w, channel_n, predict_n = input0_data.shape
            anchors = [self._anchors[i] for i in self._masks[t]]
            box_confidence = input0_data[..., 4]
            box_confidence = np.expand_dims(box_confidence, axis=-1)
            box_class_probs = input0_data[..., 5:]
            box_xy = input0_data[..., :2]
            box_wh = input0_data[..., 2:4]
            col = np.tile(np.arange(0, grid_w), grid_h).reshape(-1, grid_w)
            row = np.tile(np.arange(0, grid_h).reshape(-1, 1), grid_w)
            col = col.reshape((grid_h, grid_w, 1, 1)).repeat(3, axis=-2)
            row = row.reshape((grid_h, grid_w, 1, 1)).repeat(3, axis=-2)
            grid = np.concatenate((col, row), axis=-1)
            box_xy = box_xy * 2 - 0.5 + grid
            box_wh = (box_wh * 2) ** 2 * anchors
            box_xy /= (grid_w, grid_h)  # center, normalized to [0, 1]
            box_wh /= self.wh  # width/height, normalized to [0, 1]
            box_xy -= (box_wh / 2.)  # top-left corner, normalized
            box = np.concatenate((box_xy, box_wh), axis=-1)
            res = filter_boxes(box, box_confidence, box_class_probs, self.conf_thres)
            boxes.append(res[0])
            classes.append(res[1])
            scores.append(res[2])
        boxes, classes, scores = np.concatenate(boxes), np.concatenate(classes), np.concatenate(scores)
        nboxes, nclasses, nscores = [], [], []
        for c in set(classes):
            inds = np.where(classes == c)
            b = boxes[inds]
            c = classes[inds]
            s = scores[inds]
            keep = nms_boxes(b, s, self.iou_thres)
            nboxes.append(b[keep])
            nclasses.append(c[keep])
            nscores.append(s[keep])
        if len(nboxes) < 1:
            return [], []
        boxes = np.concatenate(nboxes)
        classes = np.concatenate(nclasses)
        scores = np.concatenate(nscores)
        label_list = []
        box_list = []
        score_list = []
        for (x, y, w, h), score, cl in zip(boxes, scores, classes):
            x *= gain[0]
            y *= gain[1]
            w *= gain[0]
            h *= gain[1]
            x1 = max(0, np.floor(x).astype(int))
            y1 = max(0, np.floor(y).astype(int))
            x2 = min(src_w, np.floor(x + w + 0.5).astype(int))
            y2 = min(src_h, np.floor(y + h + 0.5).astype(int))
            # label_list.append(self.names[cl])
            label_list.append(cl)
            score = round(score, 3)
            score_list.append(score)
            box_list.append((x1, y1, x2, y2))
            if self.draw_box:
                plot_one_box((x1, y1, x2, y2), img_src, label=self.names[cl])
        print("label_list", label_list)
        print("score_list", score_list)
        print("box_list", box_list)
        return label_list, np.array(box_list)

    def detect_resize(self, img_src):
        """
        Run on one image; preprocessing uses a plain resize.
        return: labels, boxes
        """
        _img = cv2.resize(img_src, self.wh)
        gain = img_src.shape[:2][::-1]
        return self._predict(img_src, _img, gain)

    def detect(self, img_src):
        """
        Run on one image; preprocessing keeps the aspect ratio (letterbox).
        return: labels, boxes
        """
        _img, gain = letterbox(img_src, self.wh)
        return self._predict(img_src, _img, gain)

    def close(self):
        self._rknn.release()

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        self.close()

    def __del__(self):
        self.close()

# def test_video(det, video_path):
#     reader = cv2.VideoCapture()
#     reader.open(video_path)
#     while True:
#         ret, frame = reader.read()
#         if not ret:
#             break
#         t0 = time.time()
#         det.detect(frame)
#         print("total time", time.time() - t0)
#         cv2.imshow("res", auto_resize(frame, 1200, 600)[0])
#         cv2.waitKey(1)

if __name__ == '__main__':
    import yaml

    image = cv2.imread("img/0625_Bin_046.jpg")
    with open("yolov5_rknn_640x640.yaml", "rb") as f:
        cfg = yaml.load(f, yaml.FullLoader)
    d = Detector(cfg)
    d.draw_box = True
    d.detect(image)
    # cv2.imshow("res", image)
    # cv2.waitKey()
    # cv2.destroyAllWindows()
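The post-processing above can be sanity-checked standalone: the nms_boxes logic from rknn_detect.py, duplicated here with numpy only (the small epsilon from the original is dropped for clarity), run on three toy (x, y, w, h) boxes:

```python
import numpy as np

def nms_boxes(boxes, scores, iou_thres):
    # Same greedy IoU suppression as nms_boxes() in rknn_detect.py; boxes are (x, y, w, h)
    x, y, w, h = boxes[:, 0], boxes[:, 1], boxes[:, 2], boxes[:, 3]
    areas = w * h
    order = scores.argsort()[::-1]  # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # intersection of the current best box with all remaining boxes
        xx1 = np.maximum(x[i], x[order[1:]])
        yy1 = np.maximum(y[i], y[order[1:]])
        xx2 = np.minimum(x[i] + w[i], x[order[1:]] + w[order[1:]])
        yy2 = np.minimum(y[i] + h[i], y[order[1:]] + h[order[1:]])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        ovr = inter / (areas[i] + areas[order[1:]] - inter)  # IoU
        order = order[np.where(ovr <= iou_thres)[0] + 1]  # keep only low-overlap boxes
    return np.array(keep)

boxes = np.array([[0.0, 0.0, 10.0, 10.0],   # highest score
                  [1.0, 1.0, 10.0, 10.0],   # IoU ~0.68 with the first -> suppressed
                  [20.0, 20.0, 5.0, 5.0]])  # no overlap -> kept
scores = np.array([0.9, 0.8, 0.7])
print(nms_boxes(boxes, scores, iou_thres=0.5))  # -> [0 2]
```

The second box overlaps the first well above the 0.5 IoU threshold and is suppressed, while the far-away third box survives.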

Job posting: Alibaba internationalization mid-platform team

Team introduction
The "big mid-platform" and "internationalization" are long-term strategies of Alibaba Group. On December 7, 2015, Alibaba launched its group-wide mid-platform strategy, building a more innovative and flexible "big mid-platform, small front-office" organizational and business mechanism suited to the DT era. As the group's internationalization strategy deepens, internationalization is no longer a simple export of domestic capabilities; many new business models have emerged (local-to-local, cross-border buying, global selling, and more). The group therefore formally established the internationalization mid-platform team, aiming to focus on the characteristics of international business and build general-purpose platform products for international market needs.
Job description

Infrastructure architecture: from a site-wide, global perspective, design the standards for building microservice application systems, the data architecture, deployment architecture, data-center planning, and global unitized architecture and monitoring;
Infrastructure product development: from a performance perspective, build architecture products around unitization and global traffic, deliver them to business systems, and meet performance-probing and disaster-recovery requirements;
Technology pre-research: toward CloudNative and Serverless, leverage existing group and Alibaba Cloud capabilities, land exploratory work in business systems, and upgrade to the next generation of software architecture and development models;

Job requirements

3+ years of large-scale system development experience; solid Java programming fundamentals; proficiency in the core principles of the JVM, web development, databases, multithreading, distributed systems, caching, message middleware, and other core technologies;
A holistic understanding of business and application architecture; able to independently complete architecture design and business-domain modeling for key areas; familiar with common design patterns and software architecture patterns; a mature knowledge system around architectural extensibility, scalability, and performance;
Passionate about technology, continuously learning new technologies, and driving innovation; excellent analytical and problem-solving skills, with genuine enthusiasm for challenging problems and a self-motivated, proactive attitude;
Strong business understanding and learning ability; adaptable; good at communicating with business and partner teams; excellent communication and driving skills; good data sense, able to land architecture upgrades or business projects in a data-driven way;
We encourage everyone to take part in public-welfare activities; if you have participated and can provide proof, you are welcome to attach it to your resume. Acceptable evidence includes, but is not limited to: volunteer-service certificates issued by the national volunteer-service information system, certificates from the "Everyone 3 Hours" public-welfare platform, and volunteer-service certificates issued by volunteer organizations (social groups, social service agencies, foundations).

Some personal thoughts
I have been with this team for over a year now; here is how it feels.

From a business perspective, Alibaba's international e-commerce businesses (AliExpress, Lazada, Daraz, Tmall overseas) form one of the group's three strategic battlegrounds and one of its few fast-growing business segments. The internationalization mid-platform is the technology locomotive of these overseas businesses.

From a purely technical perspective, international e-commerce is arguably the most complex, with challenges including:

Infrastructure: multi-country, multi-site, multi-language; globally distributed multi-active and same-city dual-active deployment architecture and production safety
Elastic architecture: efficiency optimization of massive compute resources for big data, search & recommendation, containers, and more
Software architecture: modularization and multi-site solutions
Technology exploration: the internationalization mid-platform is the Alibaba BU at the very forefront of exploring and landing cloud-native technology in business scenarios, bar none. Everyone in the industry talks about cloud native; it has practically become "politically correct". But few companies have genuinely practiced and landed cloud-native technology and thinking at this business scale. While exploring business evolution under cloud native, we are also embracing ServiceMesh to serve traffic scheduling across global units and countries. Against such a complex business backdrop, there are plenty of business scenarios left to explore here; it is interesting and challenging, and the part of the work that attracts me most.

Work atmosphere: the company is huge, and whether you end up in 996-style overwork depends on your department, group, and role 🐶. This team skews young (mostly born in the 90s), and both the boss and the team members are very nice. Here we reject 996 and reject involution!!!

For what we have done and are doing in the cloud-native space, see our share on InfoQ:
"Lazada's exploration and practice of cloud-native R&D architecture upgrades". Happy to discuss~

Fellow V2EX friends, come and do pure technology with us!
If you are interested and want to learn more about the team or the role, feel free to contact me for resume and interview guidance~ a successful referral even comes with AirPods!
Contact (WeChat): z2j2m2

Docker containers, host networking, and device discovery

Problem
A project required packaging some services into a Docker image. After packaging, some services misbehaved, mainly those used for connecting edge devices, chiefly industrial cameras: the cameras could not be found by scanning.
Cause
When the Docker daemon starts, it creates a virtual bridge named docker0 on the host, and Docker containers started on this host attach to it. The virtual bridge works like a physical switch, so all containers on the host are joined through it into one layer-2 network. Each container is assigned an IP from the docker0 subnet, and docker0's IP address is set as the container's default gateway. The containers thus live in a virtual LAN set up on the host. But many device discovery and scanning services operate on the physical LAN, for example the ONVIF protocol for cameras and the GenICam protocol for industrial cameras. As a result, a discovery service running inside Docker can neither find the devices nor be reached by them.
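Discovery protocols of this kind rely on multicast or broadcast within the local network segment (ONVIF device discovery, for instance, is based on WS-Discovery over UDP port 3702). A hypothetical minimal probe, sketched below with a made-up payload, illustrates the failure mode: inside a bridge-mode container the broadcast stays on the docker0 subnet and never reaches cameras on the physical LAN.

```python
import socket

# Hypothetical discovery probe (payload and handling are placeholders).
# WS-Discovery, which ONVIF uses, runs over UDP port 3702 on the local segment.
PROBE_PORT = 3702
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
sock.settimeout(0.5)
try:
    # In a bridge-mode container this broadcast is confined to the docker0
    # subnet; with host networking it goes out on the physical LAN.
    sock.sendto(b"probe", ("255.255.255.255", PROBE_PORT))
    data, addr = sock.recvfrom(4096)
    print("device answered from", addr)
except (socket.timeout, OSError):
    print("no device answered")
finally:
    sock.close()
```

The same confinement applies to the devices' multicast announcements, which is why host networking (below) fixes both directions.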
Solution
Docker containers support four networking modes (bridge, host, container, and none); the default is bridge. That is the virtual LAN described above: containers can reach the outside and can map ports out. Another mode is host, in which the container shares the host's IP; this meets the requirement and solves the problem.
[root@master ~]# docker run -tid --net=host --name testhost ubuntu1604
[root@master ~]# docker exec -ti testhost /bin/bash
[root@master py_interface]#
A container in host mode shows the host's name in its prompt after you enter it, rather than an id string such as:
[root@efc2b497dbd6 py_interface]#
Once inside the container, ifconfig shows the same IP as the host.