Asking for advice: self-taught WebGIS, looking for a first job

I taught myself WebGIS [nominally since graduating in June 2019, though the effective study time probably adds up to less than a month], and I'd like to ask the more experienced folks here for advice. (WebGIS is essentially front-end + GIS.)
Education: GIS major, bachelor's degree from an ordinary first-tier university
Portfolio: catgis.cn/atlas (badly written)
Project documentation:
Job goal: a WebGIS (GIS) job in Beijing at around 10k (ideally on the higher side). I recently sent out 2 résumés and heard nothing back.
My questions:
1. I feel like the more I learn, the worse I get. HTML+CSS: I read Head First as an intro. JS: I read DOM Scripting and Professional JavaScript, 4th edition (only halfway through the latter). Python: I can only install and use simple libraries, read docs and tweak parameters; I don't really know the language features (decorators and __init__ I could perhaps imitate). But yesterday I looked through the front-end interview questions on GitHub and felt I couldn't answer any of them. The more I learn, the more I forget what came before. Right now, without consulting docs, I can only write the few lines of code I'm most familiar with.

2. Am I too weak to find a job? I'd say I only have a basic grasp of HTML+CSS+JS. The portfolio project uses the Vue (2.x) stack + ElementUI + Mapbox GL JS + ECharts + Axios, with Gzip, caching, and SSL enabled. I published it on a cloud VPS, which was also my first contact with a Linux environment; I then tried setting up a few services (v2, nginx, a status probe), so I'm reasonably familiar with the common commands.

3. My own major is GIS, and I think I learned it fairly well. I'm proficient with the professional tools ArcMap, QGIS, and ENVI. Since I haven't used the coursework for two years I've forgotten some of it by now, but picking it back up should be quick.

4. Before graduation I interned for a few months at a small company, mostly processing and doing simple analysis of geographic data; after that I chose to go home and help out, so I have no work experience worth talking about.

5. Is the portfolio just too weak? I like it less every time I look at it, but for now I don't know how to improve it.

6. I originally wrote the map-data API with FastAPI, but then found that if I ship the data directly with the front-end and enable Gzip, the load time is acceptable, so I dropped the API.

My current plan is to use these few days plus the New Year holiday to send out lots of résumés and cram front-end + GIS interview topics, but I still really doubt whether I can find a job. Should I stop job-hunting for now and go study more, or keep cramming and keep applying? I'd appreciate any advice from you all.

Hiring: Shenzhen blockchain team (front-end / back-end / full-stack / PM)

Last time I posted this for them we actually found two people on V2EX who joined. They still haven't hired enough, though, so they asked me to post once more.
Team highlights:

A top blockchain developer team: third place at the San Francisco stop of a global blockchain hackathon and top three in the global finals; invested in by Y Combinator, the top Silicon Valley incubator, and by the ecosystem fund of EOS, the largest public chain; holder of several overseas blockchain patents.
Fully compliant blockchain business within mainland China, with project delivery experience for large companies including Baidu and Tencent. Business volume is expanding rapidly.
A geek team with a strong engineering culture: flat collaboration, results-oriented, smooth communication. The business is global, the horizons are broad, and there is plenty to learn.
The business is in an expansion phase, so the ceiling for strong people is high.

Location:
Bao'an District, Shenzhen (directly above a metro station; very convenient transport and food; pleasant office)
Open positions:
[ Front-end Engineer 20-40k ]

Proficient with JavaScript / the React framework and React-related tooling
Proficient with common front-end tools such as Webpack, Sass, and Bootstrap
A strong learner, able to read English documentation

[ Back-end Engineer 20-40K ]

Proficient with JavaScript and Node.js
Proficient with databases such as MongoDB and Redis
Solid fundamentals: code conventions, data structures, network security, TCP/IP, the HTTP/2 protocol, and so on
Curious, eager to learn on your own initiative, and able to read English documentation

[ Full-stack Engineer 25-45K ]

Covers the front-end plus back-end skill requirements above
Strong all-round ability and a strong drive to learn.

[ Product Manager, salary negotiable ]

Has been deeply involved in the whole process of taking a product from zero to one; understands the pain points at each stage and can drive a project forward independently
Good at discovering user needs; can analyze and design around user pain points in depth
Proficient with design tools such as Sketch and Axure
Strong project management skills: keeping project schedule, risk, and budget under control
Strong English; able to communicate about the product in writing
Work experience related to computer software; understands and believes in the blockchain industry

If you're interested, full-time or intern, send your résumé here:
(If you have no plans to change jobs right now but are interested in blockchain, feel free to send one too; we plan to keep hiring.)

IPv6's biggest hurdle is the client side

I happened to connect to a café's Wi-Fi: the gateway had no IPv6. Starbucks Wi-Fi: none. KFC Wi-Fi: none. Hotel Wi-Fi: none either.

If only your own server supports IPv6, then IPv6 is of pretty limited use. Without a public IPv4 address at home, in many cases you can only get remote access by going through your own VPS (and VPSes are a limited resource).

Ordinary users don't replace their router without a reason, and even if they do, the carrier's optical modem defaults to routed mode and won't hand IPv6 down to the LAN. It feels like the user's end of the network is IPv6's biggest hurdle.

macOS WindowServer crash (malloc: incorrect checksum for freed object)

-------------------------------------Translated Report (Full Report Below)-------------------------------------

Process:             WindowServer [170]
Path:                /System/Library/PrivateFrameworks/SkyLight.framework/Versions/A/Resources/WindowServer
Identifier:          WindowServer
Version:             ???
Code Type:           X86-64 (Native)
Parent Process:      launchd [1]
User ID:             88

Date/Time:           2022-01-21 09:33:56.4979 +0800
OS Version:          macOS 12.1 (21C52)
Report Version:      12
Anonymous UUID:      2EC760F2-E98B-93C9-ED37-D117DC724139

Time Awake Since Boot: 2600 seconds

System Integrity Protection: disabled

Crashed Thread:      11  Dispatch queue: com.apple.skylight.stats

Exception Type:      EXC_CRASH (SIGABRT)
Exception Codes:     0x0000000000000000, 0x0000000000000000
Exception Note:      EXC_CORPSE_NOTIFY

Application Specific Information:
boot-args: keepsyms=1 debug=0x100 darkwake=3
WindowServer(170,0x700007f24000) malloc: Incorrect checksum for freed object 0x7ff6f2eacc60: probably modified after being freed.
Corrupt value: 0x400007ff6f2e766f
StartTime: 2022-01-21 08:50:09
GPU: NV
MetalDevice for accelerator(0x652f): 0x7ff6e9723478 (MTLDevice: 0x7ff6ef711000)
IOService:/AppleACPIPlatformExpert/PCI0@0/AppleACPIPCI/GPP8@3,1/IOPP/GFX0@0/NVDA,Display-B@1/NVDA
abort() called

Thread 0:: Dispatch queue: com.apple.main-thread
0   libsystem_kernel.dylib        0x7ff8122ebaba mach_msg_trap + 10
1   libsystem_kernel.dylib        0x7ff8122ebe2b mach_msg + 59
2   SkyLight                      0x7ff8174a8a2e CGXRunOneServicesPass + 767
3   SkyLight                      0x7ff8174a9b81 server_loop + 91
4   SkyLight                      0x7ff8174a9874 SLXServer + 1707
5   WindowServer                  0x109d763a8 0x109d73000 + 13224
6   dyld                          0x1175334fe start + 462

Thread 1:: com.apple.coreanimation.render-server
0   libsystem_kernel.dylib        0x7ff8122ebaba mach_msg_trap + 10
1   libsystem_kernel.dylib        0x7ff8122ebe2b mach_msg + 59
2   QuartzCore                    0x7ff8198f8616 CA::Render::Server::server_thread(void*) + 493
3   QuartzCore                    0x7ff8198f8419 thread_fun(void*) + 25
4   libsystem_pthread.dylib       0x7ff8123284f4 _pthread_start + 125
5   libsystem_pthread.dylib       0x7ff81232400f thread_start + 15

Thread 2:: Dispatch queue: com.apple.VirtualDisplayListener
0   libsystem_kernel.dylib        0x7ff8122ebaba mach_msg_trap + 10
1   libsystem_kernel.dylib        0x7ff8122ebe2b mach_msg + 59
2   libsystem_kernel.dylib        0x7ff8122f4059 mach_msg_server_once + 257
3   CoreDisplay                   0x7ff8135d5052 -[VirtualDisplayListener rx] + 77
4   libdispatch.dylib             0x7ff81216dad8 _dispatch_call_block_and_release + 12
5   libdispatch.dylib             0x7ff81216ecc9 _dispatch_client_callout + 8
6   libdispatch.dylib             0x7ff812174cee _dispatch_lane_serial_drain + 696
7   libdispatch.dylib             0x7ff8121757c8 _dispatch_lane_invoke + 366
8   libdispatch.dylib             0x7ff81217f7e1 _dispatch_workloop_worker_thread + 758
9   libsystem_pthread.dylib       0x7ff812325074 _pthread_wqthread + 326
10  libsystem_pthread.dylib       0x7ff812323ffb start_wqthread + 15

Thread 3:: IOHIDService - RunLoopCompatibilityThread
0   libsystem_kernel.dylib        0x7ff8122ebaba mach_msg_trap + 10
1   libsystem_kernel.dylib        0x7ff8122ebe2b mach_msg + 59
2   CoreFoundation                0x7ff8123eedf1 __CFRunLoopServiceMachPort + 319
3   CoreFoundation                0x7ff8123ed4af __CFRunLoopRun + 1329
4   CoreFoundation                0x7ff8123ec8a9 CFRunLoopRunSpecific + 567
5   CoreFoundation                0x7ff812473e66 CFRunLoopRun + 40
6   IOKit                         0x7ff814c44cfd __IOHIDServiceRunLoopCompatibilityThread + 306
7   libsystem_pthread.dylib       0x7ff8123284f4 _pthread_start + 125
8   libsystem_pthread.dylib       0x7ff81232400f thread_start + 15

Thread 4:
0   libsystem_kernel.dylib        0x7ff8122ee506 __psynch_cvwait + 10
1   libsystem_pthread.dylib       0x7ff812328a69 _pthread_cond_wait + 1224
2   GeForceMTLDriver              0x110d0fb03 0x110b9e000 + 1514243
3   libsystem_pthread.dylib       0x7ff8123284f4 _pthread_start + 125
4   libsystem_pthread.dylib       0x7ff81232400f thread_start + 15

Thread 5:
0   libsystem_kernel.dylib        0x7ff8122ebaba mach_msg_trap + 10
1   libsystem_kernel.dylib        0x7ff8122ebe2b mach_msg + 59
2   CoreDisplay                   0x7ff8136aed29 CoreDisplay::Mach::Server::Start() + 147
3   CoreDisplay                   0x7ff8136aee8f void* std::__1::__thread_proxy >, void (CoreDisplay::Mach::Server::*)(), CoreDisplay::Mach::Server*> >(void*) + 59
4   libsystem_pthread.dylib       0x7ff8123284f4 _pthread_start + 125
5   libsystem_pthread.dylib       0x7ff81232400f thread_start + 15

Thread 6:: com.apple.windowserver.root_queue
0   libsystem_kernel.dylib        0x7ff8122ebb0e semaphore_timedwait_trap + 10
1   libdispatch.dylib             0x7ff81216f1f2 _dispatch_sema4_timedwait + 72
2   libdispatch.dylib             0x7ff81216f61f _dispatch_semaphore_wait_slow + 58
3   libdispatch.dylib             0x7ff81217e1e7 _dispatch_worker_thread + 308
4   libsystem_pthread.dylib       0x7ff8123284f4 _pthread_start + 125
5   libsystem_pthread.dylib       0x7ff81232400f thread_start + 15

Thread 7:: com.apple.windowserver.root_queue
0   libsystem_kernel.dylib        0x7ff8122ebb0e semaphore_timedwait_trap + 10
1   libdispatch.dylib             0x7ff81216f1f2 _dispatch_sema4_timedwait + 72
2   libdispatch.dylib             0x7ff81216f61f _dispatch_semaphore_wait_slow + 58
3   libdispatch.dylib             0x7ff81217e1e7 _dispatch_worker_thread + 308
4   libsystem_pthread.dylib       0x7ff8123284f4 _pthread_start + 125
5   libsystem_pthread.dylib       0x7ff81232400f thread_start + 15

Thread 8:: com.apple.windowserver.root_queue
0   libsystem_kernel.dylib        0x7ff8122ebb0e semaphore_timedwait_trap + 10
1   libdispatch.dylib             0x7ff81216f1f2 _dispatch_sema4_timedwait + 72
2   libdispatch.dylib             0x7ff81216f61f _dispatch_semaphore_wait_slow + 58
3   libdispatch.dylib             0x7ff81217e1e7 _dispatch_worker_thread + 308
4   libsystem_pthread.dylib       0x7ff8123284f4 _pthread_start + 125
5   libsystem_pthread.dylib       0x7ff81232400f thread_start + 15

Thread 9:
0   libsystem_pthread.dylib       0x7ff812323fec start_wqthread + 0

Thread 10:
0   libsystem_pthread.dylib       0x7ff812323fec start_wqthread + 0

Thread 11 Crashed:: Dispatch queue: com.apple.skylight.stats
0   libsystem_kernel.dylib        0x7ff8122f2112 __pthread_kill + 10
1   libsystem_pthread.dylib       0x7ff812328214 pthread_kill + 263
2   libsystem_c.dylib             0x7ff812274d10 abort + 123
3   libsystem_malloc.dylib        0x7ff81214f3e2 malloc_vreport + 548
4   libsystem_malloc.dylib        0x7ff8121632f2 malloc_zone_error + 183
5   libsystem_malloc.dylib        0x7ff8121473bc tiny_free_list_remove_ptr + 698
6   libsystem_malloc.dylib        0x7ff812146636 tiny_free_no_lock + 779
7   libsystem_malloc.dylib        0x7ff8121461eb free_tiny + 445
8   SkyLight                      0x7ff8172f6e2c std::__1::__tree >, std::__1::__map_value_compare >, std::__1::less, true>, std::__1::allocator > > >::destroy(std::__1::__tree_node >, void*>*) + 22
9   SkyLight                      0x7ff8172f6e35 std::__1::__tree >, std::__1::__map_value_compare >, std::__1::less, true>, std::__1::allocator > > >::destroy(std::__1::__tree_node >, void*>*) + 31
10  SkyLight                      0x7ff8172f6e35 std::__1::__tree >, std::__1::__map_value_compare >, std::__1::less, true>, std::__1::allocator > > >::destroy(std::__1::__tree_node >, void*>*) + 31
11  SkyLight                      0x7ff8172f6e35 std::__1::__tree >, std::__1::__map_value_compare >, std::__1::less, true>, std::__1::allocator > > >::destroy(std::__1::__tree_node >, void*>*) + 31
12  SkyLight                      0x7ff8172f6e35 std::__1::__tree >, std::__1::__map_value_compare >, std::__1::less, true>, std::__1::allocator > > >::destroy(std::__1::__tree_node >, void*>*) + 31
13  SkyLight                      0x7ff8172f6e35 std::__1::__tree >, std::__1::__map_value_compare >, std::__1::less, true>, std::__1::allocator > > >::destroy(std::__1::__tree_node >, void*>*) + 31
14  SkyLight                      0x7ff8172f6e35 std::__1::__tree >, std::__1::__map_value_compare >, std::__1::less, true>, std::__1::allocator > > >::destroy(std::__1::__tree_node >, void*>*) + 31
15  SkyLight                      0x7ff8172f6e35 std::__1::__tree >, std::__1::__map_value_compare >, std::__1::less, true>, std::__1::allocator > > >::destroy(std::__1::__tree_node >, void*>*) + 31
16  SkyLight                      0x7ff8172f8374 __WSDataTimelinePushWindowStateForCurrentTime_block_invoke + 937
17  SkyLight                      0x7ff8172f7591 invocation function for block in perform_block_on_session_stats_async(unsigned int, void (SessionStats&) block_pointer) + 43
18  libdispatch.dylib             0x7ff81216dad8 _dispatch_call_block_and_release + 12
19  libdispatch.dylib             0x7ff81216ecc9 _dispatch_client_callout + 8
20  libdispatch.dylib             0x7ff812174cee _dispatch_lane_serial_drain + 696
21  libdispatch.dylib             0x7ff8121757c8 _dispatch_lane_invoke + 366
22  libdispatch.dylib             0x7ff81217f7e1 _dispatch_workloop_worker_thread + 758
23  libsystem_pthread.dylib       0x7ff812325074 _pthread_wqthread + 326
24  libsystem_pthread.dylib       0x7ff812323ffb start_wqthread + 15

Thread 12:: IOHIDEvent Server Connection - Root
0   libsystem_kernel.dylib        0x7ff8122ebb0e semaphore_timedwait_trap + 10
1   libdispatch.dylib             0x7ff81216f1f2 _dispatch_sema4_timedwait + 72
2   libdispatch.dylib             0x7ff81216f61f _dispatch_semaphore_wait_slow + 58
3   libdispatch.dylib             0x7ff81217e1e7 _dispatch_worker_thread + 308
4   libsystem_pthread.dylib       0x7ff8123284f4 _pthread_start + 125
5   libsystem_pthread.dylib       0x7ff81232400f thread_start + 15

Thread 13:
0   libsystem_pthread.dylib       0x7ff812323fec start_wqthread + 0

Thread 11 crashed with X86 Thread State (64-bit):
  rax: 0x0000000000000000  rbx: 0x0000700007f24000  rcx: 0x0000700007f23438  rdx: 0x0000000000000000
  rdi: 0x0000000000068d93  rsi: 0x0000000000000006  rbp: 0x0000700007f23460  rsp: 0x0000700007f23438
   r8: 0x0000000000000000   r9: 0x0000000000000000  r10: 0x0000000000000000  r11: 0x0000000000000246
  r12: 0x0000000000068d93  r13: 0x0000000000000043  r14: 0x0000000000000006  r15: 0x0000000000000016
  rip: 0x00007ff8122f2112  rfl: 0x0000000000000246  cr2: 0x00007ff853af7008

Logical CPU: 0
Error Code:  0x02000148
Trap Number: 133

Binary Images:
0x7ff8122eb000 - 0x7ff812321fff libsystem_kernel.dylib (*) <5aa1e5be-b5b8-3a02-9885-a8c99e0ca378> /usr/lib/system/libsystem_kernel.dylib
0x7ff8171e8000 - 0x7ff81757afff com.apple.SkyLight (1.600.0) <40ec9e65-1cf7-3fe0-a4f1-650d84d2899b> /System/Library/PrivateFrameworks/SkyLight.framework/Versions/A/SkyLight
0x109d73000 - 0x109d76fff WindowServer (*) <380a76af-72b3-34ac-85ad-188a274c2e6a> /System/Library/PrivateFrameworks/SkyLight.framework/Versions/A/Resources/WindowServer
0x11752e000 - 0x117599fff dyld (*) /usr/lib/dyld
0x7ff8198b2000 - 0x7ff819b9afff com.apple.QuartzCore (1.11) /System/Library/Frameworks/QuartzCore.framework/Versions/A/QuartzCore
0x7ff812322000 - 0x7ff81232dfff libsystem_pthread.dylib (*) <6c7561b4-4b92-3f45-921e-abe669299844> /usr/lib/system/libsystem_pthread.dylib
0x7ff8135c5000 - 0x7ff8136f0fff com.apple.CoreDisplay (263) /System/Library/Frameworks/CoreDisplay.framework/Versions/A/CoreDisplay
0x7ff81216c000 - 0x7ff8121b2fff libdispatch.dylib (*) /usr/lib/system/libdispatch.dylib
0x7ff81236f000 - 0x7ff81286ffff com.apple.CoreFoundation (6.9) <7e1d1901-3f9e-3e2e-a090-3655e5f5e04b> /System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation
0x7ff814bf0000 - 0x7ff814ca5fff com.apple.framework.IOKit (2.0.2) <87a33021-c798-3ab6-b9c9-e191c58c495d> /System/Library/Frameworks/IOKit.framework/Versions/A/IOKit
0x110b9e000 - 0x110d5dfff com.apple.GeForceMTLDriver (16.0.12) /System/Library/Extensions/GeForceMTLDriver.bundle/Contents/MacOS/GeForceMTLDriver
0x7ff8121f3000 - 0x7ff81227bfff libsystem_c.dylib (*) /usr/lib/system/libsystem_c.dylib
0x7ff812140000 - 0x7ff81216bfff libsystem_malloc.dylib (*) /usr/lib/system/libsystem_malloc.dylib

External Modification Summary:
  Calls made by other processes targeting this process:
    task_for_pid: 0
    thread_create: 0
    thread_set_state: 0
  Calls made by this process:
    task_for_pid: 0
    thread_create: 0
    thread_set_state: 0
  Calls made by all processes on this machine:
    task_for_pid: 0
    thread_create: 0
    thread_set_state: 0

VM Region Summary:
ReadOnly portion of Libraries: Total=854.2M resident=0K(0%) swapped_out_or_unallocated=854.2M(100%)
Writable regions: Total=2.2G written=0K(0%) resident=0K(0%) swapped_out=0K(0%) unallocated=2.2G(100%)

Docker-deployed service registers its container IP with Nacos

Contents
1. The problem
2. Solutions. Option 1: `--network=host`; Option 2: specify the registration IP for Nacos

1. The problem
When a jar deployed with Docker across a cluster starts up, the IP it registers with Nacos is the container's internal Docker IP. As a result, when the service is deployed on several machines, Nacos only ever sees a single instance.
2. Solutions

For reference:

Option 1: --network=host
Example:
docker run -d --network=host --name app-demo registry.cn-hangzhou.aliyuncs.com/zhengqing/app-demo
With `--network=host` (host network mode), the container shares the host's network stack at runtime, so it is the host's IP that gets registered with Nacos. In this mode there is no need to run with a `-p 80:80` port mapping; the container simply uses whatever ports the service itself opens.
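If you start the container with docker-compose instead of `docker run`, host networking is expressed with the `network_mode` option. A sketch, assuming the same image as above:

```yaml
# hypothetical docker-compose equivalent of the docker run --network=host command
version: "3"
services:
  app-demo:
    image: registry.cn-hangzhou.aliyuncs.com/zhengqing/app-demo
    network_mode: host   # share the host's network stack; no `ports:` mapping needed
```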
Option 2: specify the registration IP for Nacos
Add the following to the bootstrap.yml config file:
spring:
  cloud:
    nacos:
      discovery:
        ip: xx
        port: xx
The IP and port can also be passed dynamically when the Java program is launched:
-Dspring.cloud.nacos.discovery.ip=xx -Dspring.cloud.nacos.discovery.port=xxx

# Example
java -jar -Dspring.cloud.nacos.discovery.ip=www.zhengqingya.com app.jar
The service then registers in Nacos under the specified address:

Thought for the day: only after passing through darkness do we long for the light; only after weathering storms do we appreciate the warmth of the sun; only after hardship do we gain a gentle heart; experience is life's best teacher.

MacBook Pro logic board died, again

One day I closed the lid after using it; when I opened it two or three days later it wouldn't boot. I took it to the Apple Store: the logic board was dead, and a replacement would cost five thousand. I had bought AppleCare with the machine, but it expired half a year ago.
The next day I had it fixed at a third-party shop for 1,100.
Garbage machine; the quality control is awful. It sits on my desk all the time; I mostly just browse Taobao and type on it, rarely take it out, have never dropped it, and never spilled anything on it, yet it died just like that, shorted out just like that. Neither the Genius at the Apple Store nor the owner of the third-party shop could explain why this failure occurred; both said it had nothing to do with how I used it, the machine just develops these faults on its own.
And this isn't even the first time: in 2019 the MBP I used at work suddenly went black mid-use and never turned on again. I reported it to IT and that also turned out to be a logic-board failure, repaired through the vendor.

Cloud-native deployment notes: MySQL, Redis, ElasticSearch, Nacos, RuoYi-Cloud

Three things to pin down when deploying an application (the three elements of deployment): 1. how the application is deployed; 2. the application's data mounts (data and config files); 3. how the application can be accessed.
1. Deploying MySQL
1. Start the MySQL container
docker run -p 3306:3306 --name mysql-01 \
-v /mydata/mysql/log:/var/log/mysql \
-v /mydata/mysql/data:/var/lib/mysql \
-v /mydata/mysql/conf:/etc/mysql/conf.d \
-e MYSQL_ROOT_PASSWORD=root \
--restart=always \
-d mysql:5.7
2. Sample MySQL config
[client]
default-character-set=utf8mb4

[mysql]
default-character-set=utf8mb4

[mysqld]
init_connect='SET collation_connection = utf8mb4_unicode_ci'
init_connect='SET NAMES utf8mb4'
character-set-server=utf8mb4
collation-server=utf8mb4_unicode_ci
skip-character-set-client-handshake
skip-name-resolve
Create the config → mount it into the container → create the secret holding the account and password. Create the workload → a stateful application → mount the data volumes. Access: 1. inside the cluster, connect directly via [service-name.project-name], e.g. mysql -uroot -hhis-mysql-glgf.his -p; 2. outside the cluster,
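A Spring service in the same cluster could point its datasource at that in-cluster DNS name. A hypothetical fragment; only the host his-mysql-glgf.his comes from the note above, while the database name and credentials are placeholders:

```properties
# hypothetical datasource config using the in-cluster <service-name>.<project-name> DNS name
spring.datasource.url=jdbc:mysql://his-mysql-glgf.his:3306/mydb?characterEncoding=utf8
spring.datasource.username=root
spring.datasource.password=<the MYSQL_ROOT_PASSWORD set at container creation>
```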
2. Deploying Redis
1. Start the Redis container
# Create the config file
1. Prepare the contents of the redis config file
mkdir -p /mydata/redis/conf && vim /mydata/redis/conf/redis.conf
redis.conf
## sample config
appendonly yes
port 6379
bind 0.0.0.0
# Start Redis with docker
docker run -d -p 6379:6379 --restart=always \
-v /mydata/redis/conf/redis.conf:/etc/redis/redis.conf \
-v /mydata/redis-01/data:/data \
--name redis-01 redis:6.2.5 \
redis-server /etc/redis/redis.conf
2. Redis deployment analysis
3. Deploying ElasticSearch
1. Start the ES container
# Create the data directory
mkdir -p /mydata/es-01 && chmod 777 -R /mydata/es-01

# Start the container
docker run --restart=always -d -p 9200:9200 -p 9300:9300 \
-e "discovery.type=single-node" \
-e ES_JAVA_OPTS="-Xms512m -Xmx512m" \
-v es-config:/usr/share/elasticsearch/config \
-v /mydata/es-01/data:/usr/share/elasticsearch/data \
--name es-01 \
elasticsearch:7.13.4
elasticsearch.yml
cluster.name: "docker-cluster"
network.host: 0.0.0.0
jvm.options
################################################################
##
## JVM configuration
##
################################################################
##
## WARNING: DO NOT EDIT THIS FILE. If you want to override the
## JVM options in this file, or set any additional options, you
## should create one or more files in the jvm.options.d
## directory containing your adjustments.
##
## See
## for more information.
##
################################################################

################################################################
## IMPORTANT: JVM heap size
################################################################
##
## The heap size is automatically configured by Elasticsearch
## based on the available memory in your system and the roles
## each node is configured to fulfill. If specifying heap is
## required, it should be done through a file in jvm.options.d,
## and the min and max should be set to the same value. For
## example, to set the heap to 4 GB, create a new file in the
## jvm.options.d directory containing these lines:
##
## -Xms4g
## -Xmx4g
##
## See
## for more information
##
################################################################

################################################################
## Expert settings
################################################################
##
## All settings below here are considered expert settings. Do
## not adjust them unless you understand what you are doing. Do
## not edit them in this file; instead, create a new file in the
## jvm.options.d directory containing your adjustments.
##
################################################################

## GC configuration
8-13:-XX:+UseConcMarkSweepGC
8-13:-XX:CMSInitiatingOccupancyFraction=75
8-13:-XX:+UseCMSInitiatingOccupancyOnly

## G1GC Configuration
# NOTE: G1 GC is only supported on JDK version 10 or later
# to use G1GC, uncomment the next two lines and update the version on the
# following three lines to your version of the JDK
# 10-13:-XX:-UseConcMarkSweepGC
# 10-13:-XX:-UseCMSInitiatingOccupancyOnly
14-:-XX:+UseG1GC

## JVM temporary directory
-Djava.io.tmpdir=${ES_TMPDIR}

## heap dumps

# generate a heap dump when an allocation from the Java heap fails; heap dumps
# are created in the working directory of the JVM unless an alternative path is
# specified
-XX:+HeapDumpOnOutOfMemoryError

# specify an alternative path for heap dumps; ensure the directory exists and
# has sufficient space
-XX:HeapDumpPath=data

# specify an alternative path for JVM fatal error logs
-XX:ErrorFile=logs/hs_err_pid%p.log

## JDK 8 GC logging
8:-XX:+PrintGCDetails
8:-XX:+PrintGCDateStamps
8:-XX:+PrintTenuringDistribution
8:-XX:+PrintGCApplicationStoppedTime
8:-Xloggc:logs/gc.log
8:-XX:+UseGCLogFileRotation
8:-XX:NumberOfGCLogFiles=32
8:-XX:GCLogFileSize=64m

# JDK 9+ GC logging
9-:-Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m
Config
Mount the config file; mount subpaths individually.
4. App store
You can log in as dev-zhao and deploy applications from the app store.
5. App repository
Log in as the workspace admin (wuhan-boss) and set up an app repository. Just learn Helm: add a repository from the Helm app marketplace, e.g. bitnami.
Self-made applications
RuoYi-Cloud deployment in practice
1. The project  2. The architecture
3. Adapting for the cloud
1. Give each microservice a bootstrap.properties carrying the Nacos address info (it defaults to local). 2. Give each microservice a Dockerfile with the start command, pointing at the online Nacos config and so on. 3. Build an image for each microservice.
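Step 1 might look like the following in practice. A hypothetical bootstrap.properties for one of the services; the service name is illustrative, and the address/namespace values are the ones used in the Dockerfile's PARAMS:

```properties
# hypothetical bootstrap.properties for one microservice
spring.application.name=ruoyi-system
spring.cloud.nacos.config.server-addr=his-nacos.his:8848
spring.cloud.nacos.config.namespace=prod
spring.cloud.nacos.config.file-extension=yml
```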
1. Dockerfile
FROM openjdk:8-jdk
LABEL maintainer=leifengyang

# docker run -e PARAMS="--server.port 9090"
ENV PARAMS="--server.port=8080 --spring.profiles.active=prod --spring.cloud.nacos.discovery.server-addr=his-nacos.his:8848 --spring.cloud.nacos.config.server-addr=his-nacos.his:8848 --spring.cloud.nacos.config.namespace=prod --spring.cloud.nacos.config.file-extension=yml"
RUN /bin/cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime && echo 'Asia/Shanghai' >/etc/timezone

COPY target/*.jar /app.jar
EXPOSE 8080

#
ENTRYPOINT ["/bin/sh","-c","java -Dfile.encoding=utf8 -Djava.security.egd=file:/dev/./urandom -jar app.jar ${PARAMS}"]
Conventions: 1. containers start on port 8080 by default; 2. the timezone is CST; 3. the PARAMS environment variable can dynamically override any value in the config file; 4. the in-cluster Nacos address is his-nacos.his:8848; 5. on startup each microservice loads the "service-name-active-profile.yml" file from Nacos, so all the online config can live in Nacos.
2. Deploying Nacos
1. The nacos.sql file. 1. Download the Nacos database schema file
# 2. Before running the script below, create the database first as required:
CREATE DATABASE `nacos`;

USE nacos;
####################################################

/*
 * Copyright 1999-2018 Alibaba Group Holding Ltd.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
/******************************************/
/*   database = nacos_config              */
/*   table    = config_info               */
/******************************************/
CREATE TABLE `config_info` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'id',
  `data_id` varchar(255) NOT NULL COMMENT 'data_id',
  `group_id` varchar(255) DEFAULT NULL,
  `content` longtext NOT NULL COMMENT 'content',
  `md5` varchar(32) DEFAULT NULL COMMENT 'md5',
  `gmt_create` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'create time',
  `gmt_modified` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'modify time',
  `src_user` text COMMENT 'source user',
  `src_ip` varchar(50) DEFAULT NULL COMMENT 'source ip',
  `app_name` varchar(128) DEFAULT NULL,
  `tenant_id` varchar(128) DEFAULT '' COMMENT 'tenant field',
  `c_desc` varchar(256) DEFAULT NULL,
  `c_use` varchar(64) DEFAULT NULL,
  `effect` varchar(64) DEFAULT NULL,
  `type` varchar(64) DEFAULT NULL,
  `c_schema` text,
  PRIMARY KEY (`id`),
  UNIQUE KEY `uk_configinfo_datagrouptenant` (`data_id`,`group_id`,`tenant_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='config_info';

/******************************************/
/*   database = nacos_config              */
/*   table    = config_info_aggr          */
/******************************************/
CREATE TABLE `config_info_aggr` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'id',
  `data_id` varchar(255) NOT NULL COMMENT 'data_id',
  `group_id` varchar(255) NOT NULL COMMENT 'group_id',
  `datum_id` varchar(255) NOT NULL COMMENT 'datum_id',
  `content` longtext NOT NULL COMMENT 'content',
  `gmt_modified` datetime NOT NULL COMMENT 'modify time',
  `app_name` varchar(128) DEFAULT NULL,
  `tenant_id` varchar(128) DEFAULT '' COMMENT 'tenant field',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uk_configinfoaggr_datagrouptenantdatum` (`data_id`,`group_id`,`tenant_id`,`datum_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='tenant field added';

/******************************************/
/*   database = nacos_config              */
/*   table    = config_info_beta          */
/******************************************/
CREATE TABLE `config_info_beta` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'id',
  `data_id` varchar(255) NOT NULL COMMENT 'data_id',
  `group_id` varchar(128) NOT NULL COMMENT 'group_id',
  `app_name` varchar(128) DEFAULT NULL COMMENT 'app_name',
  `content` longtext NOT NULL COMMENT 'content',
  `beta_ips` varchar(1024) DEFAULT NULL COMMENT 'betaIps',
  `md5` varchar(32) DEFAULT NULL COMMENT 'md5',
  `gmt_create` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'create time',
  `gmt_modified` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'modify time',
  `src_user` text COMMENT 'source user',
  `src_ip` varchar(50) DEFAULT NULL COMMENT 'source ip',
  `tenant_id` varchar(128) DEFAULT '' COMMENT 'tenant field',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uk_configinfobeta_datagrouptenant` (`data_id`,`group_id`,`tenant_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='config_info_beta';

/******************************************/
/*   database = nacos_config              */
/*   table    = config_info_tag           */
/******************************************/
CREATE TABLE `config_info_tag` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'id',
  `data_id` varchar(255) NOT NULL COMMENT 'data_id',
  `group_id` varchar(128) NOT NULL COMMENT 'group_id',
  `tenant_id` varchar(128) DEFAULT '' COMMENT 'tenant_id',
  `tag_id` varchar(128) NOT NULL COMMENT 'tag_id',
  `app_name` varchar(128) DEFAULT NULL COMMENT 'app_name',
  `content` longtext NOT NULL COMMENT 'content',
  `md5` varchar(32) DEFAULT NULL COMMENT 'md5',
  `gmt_create` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'create time',
  `gmt_modified` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'modify time',
  `src_user` text COMMENT 'source user',
  `src_ip` varchar(50) DEFAULT NULL COMMENT 'source ip',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uk_configinfotag_datagrouptenanttag` (`data_id`,`group_id`,`tenant_id`,`tag_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='config_info_tag';

/******************************************/
/*   database = nacos_config              */
/*   table    = config_tags_relation      */
/******************************************/
CREATE TABLE `config_tags_relation` (
  `id` bigint(20) NOT NULL COMMENT 'id',
  `tag_name` varchar(128) NOT NULL COMMENT 'tag_name',
  `tag_type` varchar(64) DEFAULT NULL COMMENT 'tag_type',
  `data_id` varchar(255) NOT NULL COMMENT 'data_id',
  `group_id` varchar(128) NOT NULL COMMENT 'group_id',
  `tenant_id` varchar(128) DEFAULT '' COMMENT 'tenant_id',
  `nid` bigint(20) NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (`nid`),
  UNIQUE KEY `uk_configtagrelation_configidtag` (`id`,`tag_name`,`tag_type`),
  KEY `idx_tenant_id` (`tenant_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='config_tag_relation';

/******************************************/
/*   database = nacos_config              */
/*   table    = group_capacity            */
/******************************************/
CREATE TABLE `group_capacity` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'primary key id',
  `group_id` varchar(128) NOT NULL DEFAULT '' COMMENT 'group id; empty string means the whole cluster',
  `quota` int(10) unsigned NOT NULL DEFAULT '0' COMMENT 'quota; 0 means use the default',
  `usage` int(10) unsigned NOT NULL DEFAULT '0' COMMENT 'usage',
  `max_size` int(10) unsigned NOT NULL DEFAULT '0' COMMENT 'max size of a single config, in bytes; 0 means use the default',
  `max_aggr_count` int(10) unsigned NOT NULL DEFAULT '0' COMMENT 'max number of aggregated sub-configs; 0 means use the default',
  `max_aggr_size` int(10) unsigned NOT NULL DEFAULT '0' COMMENT 'max size of a sub-config of a single aggregated datum, in bytes; 0 means use the default',
  `max_history_count` int(10) unsigned NOT NULL DEFAULT '0' COMMENT 'max number of change-history records',
  `gmt_create` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'create time',
  `gmt_modified` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'modify time',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uk_group_id` (`group_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='capacity info for the cluster and each group';

/******************************************/
/*   database = nacos_config              */
/*   table    = his_config_info           */
/******************************************/
CREATE TABLE `his_config_info` (
  `id` bigint(64) unsigned NOT NULL,
  `nid` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
  `data_id` varchar(255) NOT NULL,
  `group_id` varchar(128) NOT NULL,
  `app_name` varchar(128) DEFAULT NULL COMMENT 'app_name',
  `content` longtext NOT NULL,
  `md5` varchar(32) DEFAULT NULL,
  `gmt_create` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `gmt_modified` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `src_user` text,
  `src_ip` varchar(50) DEFAULT NULL,
  `op_type` char(10) DEFAULT NULL,
  `tenant_id` varchar(128) DEFAULT '' COMMENT 'tenant field',
  PRIMARY KEY (`nid`),
  KEY `idx_gmt_create` (`gmt_create`),
  KEY `idx_gmt_modified` (`gmt_modified`),
  KEY `idx_did` (`data_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='multi-tenant';

/******************************************/
/*   database = nacos_config              */
/*   table    = tenant_capacity           */
/******************************************/
CREATE TABLE `tenant_capacity` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'primary key id',
  `tenant_id` varchar(128) NOT NULL DEFAULT '' COMMENT 'Tenant ID',
  `quota` int(10) unsigned NOT NULL DEFAULT '0' COMMENT 'quota; 0 means use the default',
  `usage` int(10) unsigned NOT NULL DEFAULT '0' COMMENT 'usage',
  `max_size` int(10) unsigned NOT NULL DEFAULT '0' COMMENT 'max size of a single config, in bytes; 0 means use the default',
  `max_aggr_count` int(10) unsigned NOT NULL DEFAULT '0' COMMENT 'max number of aggregated sub-configs',
  `max_aggr_size` int(10) unsigned NOT NULL DEFAULT '0' COMMENT 'max size of a sub-config of a single aggregated datum, in bytes; 0 means use the default',
  `max_history_count` int(10) unsigned NOT NULL DEFAULT '0' COMMENT 'max number of change-history records',
  `gmt_create` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'create time',
  `gmt_modified` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'modify time',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uk_tenant_id` (`tenant_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='tenant capacity info';

CREATE TABLE `tenant_info` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'id',
  `kp` varchar(128) NOT NULL COMMENT 'kp',
  `tenant_id` varchar(128) default '' COMMENT 'tenant_id',
  `tenant_name` varchar(128) default '' COMMENT 'tenant_name',
  `tenant_desc` varchar(256) DEFAULT NULL COMMENT 'tenant_desc',
  `create_source` varchar(32) DEFAULT NULL COMMENT 'create_source',
  `gmt_create` bigint(20) NOT NULL COMMENT 'create time',
  `gmt_modified` bigint(20) NOT NULL COMMENT 'modify time',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uk_tenant_info_kptenantid` (`kp`,`tenant_id`),
  KEY `idx_tenant_id` (`tenant_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='tenant_info';

CREATE TABLE `users` (
  `username` varchar(50) NOT NULL PRIMARY KEY,
  `password` varchar(500) NOT NULL,
  `enabled` boolean NOT NULL
);

CREATE TABLE `roles` (
  `username` varchar(50) NOT NULL,
  `role` varchar(50) NOT NULL,
  UNIQUE INDEX `idx_user_role` (`username` ASC, `role` ASC) USING BTREE
);

CREATE TABLE `permissions` (
  `role` varchar(50) NOT NULL,
  `resource` varchar(255) NOT NULL,
  `action` varchar(8) NOT NULL,
  UNIQUE INDEX `uk_role_permission` (`role`,`resource`,`action`) USING BTREE
);

INSERT INTO users (username, password, enabled) VALUES ('nacos', '$2a$10$EuWPZHzz32dJN7jexM34MOeYirDdFAZm2kuWj7VEOJhhZkDrxfvUu', TRUE);

INSERT INTO roles (username, role) VALUES ('nacos', 'ROLE_ADMIN');
2. The application.properties file
spring.datasource.platform=mysql

db.num=1
db.url.0=jdbc:
db.user=nacos_devtest
db.password=youdontknow
3. Push the images to Aliyun
● Enable Aliyun Container Registry (personal edition). ○ Create a namespace (lfy_ruoyi) to hold the images. ○ Push the images to the Aliyun registry:
$ docker login --username=forsum**** registry.cn-hangzhou.aliyuncs.com

# Rename the local image to match Aliyun's naming convention.
$ docker tag [ImageId] registry.cn-hangzhou.aliyuncs.com/lfy_ruoyi/image-name:[image-version]
## docker tag 461955fe1e57 registry.cn-hangzhou.aliyuncs.com/lfy_ruoyi/ruoyi-visual-monitor:v1

$ docker push registry.cn-hangzhou.aliyuncs.com/lfy_ruoyi/image-name:[image-version]
## docker push registry.cn-hangzhou.aliyuncs.com/lfy_ruoyi/ruoyi-visual-monitor:v1
4. All the RuoYi images
docker pull registry.cn-hangzhou.aliyuncs.com/lfy_ruoyi/ruoyi-auth:v2
docker pull registry.cn-hangzhou.aliyuncs.com/lfy_ruoyi/ruoyi-file:v2
docker pull registry.cn-hangzhou.aliyuncs.com/lfy_ruoyi/ruoyi-gateway:v2
docker pull registry.cn-hangzhou.aliyuncs.com/lfy_ruoyi/ruoyi-job:v2
docker pull registry.cn-hangzhou.aliyuncs.com/lfy_ruoyi/ruoyi-system:v2
docker pull registry.cn-hangzhou.aliyuncs.com/lfy_ruoyi/ruoyi-visual-monitor:v2
docker pull registry.cn-hangzhou.aliyuncs.com/lfy_ruoyi/ruoyi-ui:v2
5. Deployment rules
● On startup, each service fetches its "<service-name>-<active-profile>.yml" configuration.
● Before every deployment, update the online Nacos configuration first and confirm that every middleware connection address is correct.
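As a sketch of the naming rule above: a service named ruoyi-system running with active profile prod would request ruoyi-system-prod.yml from Nacos. The bootstrap configuration below is a hypothetical minimal example (server address and names are placeholders), not taken from the RuoYi source.

```yaml
# bootstrap.yml (hypothetical example)
spring:
  application:
    name: ruoyi-system        # service name -> prefix of the config file
  profiles:
    active: prod              # active profile -> suffix of the config file
  cloud:
    nacos:
      config:
        server-addr: 127.0.0.1:8848
        file-extension: yml   # the service fetches "ruoyi-system-prod.yml"
```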

Implementing a Distributed Lock with the Zookeeper API

When writing multithreaded Java programs, you have to make sure the result stays correct under concurrency, so operations on shared resources are guarded by a lock: only one thread may operate on the shared resource at a time. Java provides several in-process locks, such as the synchronized keyword and the JUC locks ReentrantLock and ReentrantReadWriteLock. These local locks only work within a single JVM. In a distributed environment, where multiple servers operate on the same shared resource, the servers cannot see each other's local lock state, so a distributed lock is needed to keep work correct across the cluster.
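To make the single-JVM limitation concrete, here is a minimal, self-contained sketch (names are my own, not from the original post) of a local ReentrantLock protecting a shared counter. It serializes only the threads inside one process, which is exactly why a separate distributed lock is needed once several servers touch the same resource.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class LocalLockDemo {
    private static int counter = 0;
    private static final ReentrantLock LOCK = new ReentrantLock();

    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 4; i++) {
            pool.submit(() -> {
                for (int j = 0; j < 10_000; j++) {
                    // Only threads inside this one JVM are serialized here;
                    // a second server process would not see this lock at all.
                    LOCK.lock();
                    try {
                        counter++; // the shared resource
                    } finally {
                        LOCK.unlock();
                    }
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println(counter); // 40000 with the lock; usually less without it
    }
}
```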

Common distributed lock implementations
● Optimistic locking via a version column in a MySQL table;
● Redis's SET command (a single-point problem: if one machine in the Redis cluster goes down, locking and unlocking can become inconsistent);
● RedLock, implemented in the open-source Redisson framework (solves the single-point problem of the plain SET approach);
● A distributed lock built directly on the official Zookeeper API;
● InterProcessMutex, the Zookeeper distributed lock in the open-source Curator framework.

This article implements a distributed lock directly on the official Zookeeper API to show what Zookeeper can do; follow-up posts will walk through the other implementations and their internals one by one.

How Zookeeper implements it
Zookeeper's data nodes (znodes) come in four types; the distributed lock implementation mainly relies on ephemeral sequential nodes.
Implementation approach

1. When a thread in a client needs the lock, it first lists all ephemeral sequential nodes under the persistent lock node; if the node created by the current thread is the smallest, the thread has acquired the lock.
2. If it is not the smallest, the thread watches the largest node smaller than its own and blocks until it is notified that the watched node has been deleted; it then re-checks whether its own node is now the smallest, and if so the lock is acquired.
3. If it is still not the smallest, steps 1 and 2 are repeated until the lock is acquired.

Example and analysis

Suppose three client threads each try to acquire the lock named "test4", creating three ephemeral sequential nodes: client0000000000, client0000000001, and client0000000002.
client0000000000 acquires the lock first; client0000000001 watches client0000000000, and client0000000002 watches client0000000001. When node client0000000000 is deleted, client0000000001 is notified and tries to acquire the lock; when node client0000000001 is deleted, client0000000002 is notified and tries in turn. Abnormal case: if node client0000000001 disappears unexpectedly while client0000000000 still holds the lock, client0000000002 will find that client0000000000 still exists and must watch node client0000000000 instead.

 


 
Code implementation (feel free to use it as is)
import org.apache.commons.lang3.StringUtils;
import org.apache.zookeeper.*;
import org.apache.zookeeper.data.Stat;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.InitializingBean;
import org.springframework.stereotype.Service;

import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.TreeSet;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

@Service
public class ZkLockDemo implements InitializingBean, Watcher {

    private static Logger logger = LoggerFactory.getLogger(ZkLockDemo.class);

    private static volatile ZooKeeper zk;

    static String zkAddress = "127.0.0.1:2181";

    /** Root node. */
    private String root = "/locksNode";

    /** Locks held by the current thread (full paths of its ephemeral sequential nodes). */
    private ThreadLocal<List<String>> nodePathList = new ThreadLocal<>();

    public ZkLockDemo() {
    }

    @Override
    public void afterPropertiesSet() {
        createRootNode();
    }

    /** Connect to Zookeeper and create the persistent root node for all locks. */
    private void createRootNode() {
        try {
            if (StringUtils.isBlank(zkAddress)) {
                throw new NullPointerException("zooKeeper address conf error");
            }
            CountDownLatch countDownLatch = new CountDownLatch(1);
            // establish the zk connection
            logger.info("connecting to zk");
            zk = new ZooKeeper(zkAddress, 10000, new Watcher() {
                @Override
                public void process(WatchedEvent event) {
                    if (event.getState() == Event.KeeperState.SyncConnected) {
                        countDownLatch.countDown();
                    }
                }
            });
            // wait for the connection to come up
            countDownLatch.await(10, TimeUnit.SECONDS);
            if (zk == null) {
                throw new NullPointerException("zooKeeper connect failure");
            }
            Stat stat = zk.exists(root, true);
            if (stat == null) {
                // create the persistent root node
                zk.create(root, new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
                logger.info("root node {} created", root);
            } else {
                logger.info("root node {} already exists, reusing it", root);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    /**
     * Watch for session expiry and reconnect when needed.
     * @param watchedEvent
     */
    @Override
    public void process(WatchedEvent watchedEvent) {
        try {
            // when the zk session expires, rebuild the connection
            if (watchedEvent.getState() == Event.KeeperState.Expired) {
                logger.info("zk session expired, reconnecting");
                zk.close();
                zk = null;
                createRootNode();
            }
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }

    /**
     * Create the persistent node for one named lock.
     * @param lockPath
     */
    private void createLockNode(String lockPath) {
        try {
            // create the lock path if it does not exist yet
            Stat stat = zk.exists(lockPath, true);
            if (stat == null) {
                zk.create(lockPath, new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
                logger.info("lock path created: {}", lockPath);
            } else {
                logger.info("lock path already exists: {}", lockPath);
            }
        } catch (KeeperException.NodeExistsException e) {
            logger.error("node already exists, creation skipped: {}", e.getMessage());
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    /**
     * Blocking lock.
     * @param lockName lock name
     */
    public void lock(String lockName) {
        try {
            // create the lock directory
            String lockPath = root + "/" + lockName;
            createLockNode(lockPath);
            // the current thread creates its ephemeral sequential node
            String clientLockNode = zk.create(lockPath + "/client", new byte[0],
                    ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL);
            // fetch the predecessor; null means the current node already holds the lock
            String preNode = getPreNode(lockPath, clientLockNode);
            CountDownLatch latch = new CountDownLatch(1);
            if (preNode != null) {
                // register the watch on the predecessor
                Stat lockStat = zk.exists(preNode, new LockWatcher(latch, lockPath, clientLockNode));
                if (lockStat != null) {
                    // block until the predecessor chain is gone
                    latch.await();
                    latch = null;
                    addLock(clientLockNode);
                    logger.info("blocking lock acquired, path: {}", clientLockNode);
                }
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    /**
     * Find the node immediately preceding the current thread's ephemeral sequential node.
     * @param lockPath       lock path
     * @param clientLockNode the current thread's ephemeral sequential node
     * @return the predecessor node, or null if the current node is the smallest
     */
    private String getPreNode(String lockPath, String clientLockNode) {
        String preNode = null;
        try {
            // list all children under lockPath
            List<String> subNodes = zk.getChildren(lockPath, true);
            TreeSet<String> sortedNodes = new TreeSet<>();
            for (String node : subNodes) {
                sortedNodes.add(lockPath + "/" + node);
            }
            // the smallest ephemeral sequential node holds the lock
            String minNode = sortedNodes.first();
            if (clientLockNode.equals(minNode)) {
                addLock(clientLockNode);
                logger.info("lock acquired, path: {}", clientLockNode);
            } else {
                // watch the largest node smaller than the current one
                preNode = sortedNodes.lower(clientLockNode);
                logger.info("waiting for lock, path: {}, watching predecessor: {}", clientLockNode, preNode);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
        return preNode;
    }

    /** Watches whether the predecessor's ephemeral sequential node has been deleted. */
    class LockWatcher implements Watcher {
        private CountDownLatch latch;
        private String lockPath;
        private String clientLockNode;

        public LockWatcher(CountDownLatch latch, String lockPath, String clientLockNode) {
            this.latch = latch;
            this.lockPath = lockPath;
            this.clientLockNode = clientLockNode;
        }

        @Override
        public void process(WatchedEvent event) {
            if (event.getType() == Event.EventType.NodeDeleted) {
                // The predecessor was deleted, so re-check whether another predecessor remains.
                // Normally its deletion means the current node now holds the lock; but if the
                // predecessor dropped out abnormally without ever holding the lock, the current
                // node must watch the largest remaining predecessor instead.
                String preNode = getPreNode(lockPath, clientLockNode);
                if (preNode == null) {
                    latch.countDown();
                } else {
                    try {
                        zk.exists(preNode, new LockWatcher(latch, lockPath, clientLockNode));
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            }
        }
    }

    /**
     * Non-blocking attempt to acquire the lock.
     * @param lockName
     * @return
     */
    public boolean tryLock(String lockName) {
        try {
            // create the lock directory
            String lockPath = root + "/" + lockName;
            createLockNode(lockPath);
            // the current thread creates its ephemeral sequential node
            String clientLockNode = zk.create(lockPath + "/client", new byte[0],
                    ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL);
            String preNode = getPreNode(lockPath, clientLockNode);
            addLock(clientLockNode);
            // if the current node is the smallest, the lock is acquired
            if (preNode == null) {
                logger.info("lock acquired, path: {}", clientLockNode);
                return true;
            } else {
                unlock(lockName);
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
        return false;
    }

    /**
     * Record a lock acquired by the current thread.
     * @param lockPath
     */
    private void addLock(String lockPath) {
        List<String> list = nodePathList.get();
        if (list == null) {
            list = new ArrayList<>();
        }
        list.add(lockPath);
        nodePathList.set(list);
    }

    /** Release the lock. */
    public void unlock(String lockName) {
        try {
            String lockPathPrefix = root + "/" + lockName;
            String lockPath = "";
            List<String> list = nodePathList.get();
            if (list != null && list.size() > 0) {
                Iterator<String> iterator = list.iterator();
                while (iterator.hasNext()) {
                    String lockWholePath = iterator.next();
                    if (lockWholePath.contains(lockPathPrefix)) {
                        lockPath = lockWholePath;
                        iterator.remove();
                        break;
                    }
                }
                if (StringUtils.isNotBlank(lockPath)) {
                    Stat stat = zk.exists(lockPath, true);
                    if (stat != null) {
                        zk.delete(lockPath, -1);
                        logger.info("lock released, path: {}", lockPath);
                    }
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
Pros
Good performance and availability, and a blocking lock is easy to implement. If a client crashes, the ephemeral nodes it created are removed, so its locks are released promptly. Because it is built directly on the official Zookeeper API, problems are easy to trace.
Cons
Every exception the official Zookeeper API throws must be handled by hand, as must connection management and session-expiry handling. A watch fires only once and has to be re-registered before the next use. Unsuitable scenario: if one thread acquires lock A then lock B while another thread acquires B then A, this implementation cannot resolve the resulting deadlock.

Wrapping up

A web radio with MPD on Google Cloud

A few years ago I had no idea how to control a remote music player (see t/382699); later I found that MPD (Music Player Daemon) fully covered my needs at the time. A similar open-source option is Mopidy.
A bit over two months ago I signed up for the three-month Google Cloud Platform trial and set MPD up on it. Combined with icecast2, piping MPD's audio output to an icecast stream URL turns it into a small internet radio station. After also installing cyp, a web client for MPD, playing music from YouTube becomes quite convenient. As shown in the screenshot:
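The MPD-to-icecast wiring described above is done in mpd.conf with a "shout" audio output. The fragment below is a hypothetical minimal sketch (host, port, mount point, and password are placeholders, not the author's actual setup); it streams MPD's playback to icecast2, which then serves listeners at the mount URL.

```conf
# mpd.conf fragment (hypothetical values)
audio_output {
    type        "shout"
    encoder     "lame"              # stream as MP3
    name        "my web radio"
    host        "127.0.0.1"         # where icecast2 is running
    port        "8000"
    mount       "/radio"            # listeners tune to http://<server>:8000/radio
    password    "hackme"            # icecast source password
    bitrate     "128"
    format      "44100:16:2"
}
```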

The complimentary GCP trial runs out in two days; anyone interested can open the stream and listen within that window.